Discussion
jemmyw: The question it raises is: if this is the fake surge, the one we see, what is the real one we don't see? Renewable energy comes to mind. Robotics too, but maybe that's too tied up with AI.
Zealotux: I'm currently looking for sort-of-niche clothes for an event, and it's the first time I've had to give up on buying online because of the sheer amount of AI-generated pictures; almost all sellers on Etsy are using AI for their product photos. Going to a physical store was just a much better experience. I can't recall the last time that happened.
zemo: full disclosure I work at Whatnot but that sort of thing is a large part of the appeal of Whatnot to me, that people are showing off the stuff live on stream and you can ask questions about it
bearjaws: I could totally see it. Recently a social club opened near me, and it has 100+ people attending weekly, all younger, 20-30 year olds early in their careers. Separately, I have a local camera repair shop, and my friend told me it's a two-month backlog to get your film-based camera worked on. Ultimately, if the deal we get online is infinite tracking, infinite scrolling and infinite enshittification, real life starts to sound a whole lot better.
Forgeties79: Going to the local movie rental shop with my kids is the highlight of my week
neals: I had to code something on a plane today. It used to be that you couldn't fetch your packages or check Stack Overflow. But now, I'm useless. My mind has turned to pudding. I cannot remember basic boilerplate stuff. Crazy how fast that goes.
embedding-shape: Really? How long have you been a developer? I've been almost exclusively doing "agent coding" for the last year plus some months, and I've been a professional developer for a decade or so. I just tried to write some random JavaScript, C#, Java, Rust and Clojure "manually", and it seems my muscle memory works just as well as it did two years ago. I'm wondering if this is something that hits new developers faster than more experienced ones?
awongh: I think it's clear to me that AI will be both things:

1) As in the article, a contraction of work: industrialization getting rid of hand-made work, or the contraction of all things horse-related when the internal combustion engine came around.

2) New technologies and ideas enabled by a completely new set of capabilities.

The real question is whether the economic boost from the latter outpaces the losses from the former. History says these transitions aren't easy on society. But also, the AI pessimism is hard to understand in this context: do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?
HWR_14: > I gladly pay the (modest/token) late fees to help keep them open at this point

Keeping movies longer and paying late fees may be hurting them more than helping them. It's entirely possible that the late fees are underpriced to avoid scaring away customers. New customers going away disappointed that the movie they want wasn't returned on time hurts them more than your late fees help.
Forgeties79: I'm not keeping them on purpose; I'm just not sweating the fee because I'm happy to pay it. Additionally, the odds that my kids are holding on to exactly what somebody else wants in that timeframe are very small. It's a small shop within a larger co-op, with a modest following and pretty substantial stock. I know, for instance, we've never had an issue of wanting something that was rented out. Has it happened? Maybe. But the fees I've paid probably net positive against that rare instance. They aren't open half the week, so once Monday passes I can't return anything for several days anyway. The owner certainly hasn't expressed concern, and has even waived the fee before because it's clearly of little consequence.
a-dub: this perez model thing completely misses the communications revolutions of the telegraph, radio and television, not to mention the demonopolization of bell.

> Then came AI, revealing new dynamics. ChatGPT’s breakthrough didn’t come from a garage startup but from OpenAI,

i thought the transformer and large language models came from google research.

> There’s also social pushback—in the UK the campaigns against big ringroad schemes started in the late 1960s and early 1970s. And perhaps we’re seeing some of that about AI. The U.S. map of local pushback against data centres from Data Center Watch covers the whole of the country, in red states and blue. People seem to hate Google’s inserting of AI tools into its search results, and hate even more that it is all but impossible to turn it off.

the us had the highway revolts. in most cities where the revolts succeeded, it is widely heralded today as a success.

the data center hate is interesting. i think many people are just learning what data centers are. but that said, they've come to represent something different in recent years. previously they were part of the infrastructure that made industry hum; now public messaging from tech leaders and academics is along the lines of "this is how your livelihood is going to be replaced", while the institutions that are supposed to provide any sort of backstop are being dismantled or slashed to pieces by crazypants trumpist politics. i think focusing the energy on something tangible like mundane buildings is interesting, but the hate makes a lot of sense.

addressing the core thesis, i'd argue that ai is not the next step in the 70s digital technological wave (especially considering the future of ai compute is probably hybrid digital-analog systems), but rather something fundamentally new that also changes how technology interacts with society and how economics itself will function. previous systems helped; these systems can do. that's a fundamental change, and one that may not be compatible with our existing economic systems of social sorting and mobility. the big question in my mind is: if it succeeds, will we desperately try to hold onto the old system (which essentially would be a disaster that freezes everyone in place and creates a permanent underclass), or will we evolve to a new, yet-to-be-defined system? and if so, how will the transition look?
dodu_: > do people really believe no novel things will be unlocked with this tech?

Yes. It's a mostly shitty but very fast and relatively inexpensive replacement for things that already exist. Give your best example of something that is novel, i.e. isn't just replacing existing processes at scale. It's been three and a half years now since the initial hype wave. Maybe I genuinely missed the novel trillion-dollar use case that isn't just labor disruption.
dleslie: All skills degrade with disuse. For example, here in Canada we have observed a literacy and numeracy skills curve that peaks with post-secondary education and declines with retirement.[0] Use it or lose it, as it were.

0: https://www150.statcan.gc.ca/n1/daily-quotidien/241210/dq241...
intended: It's a side effect of using AI. People using AI for tasks (essay writing in the MIT study linked below) showed lower ownership, lower brain connectivity, and a reduced ability to quote their own work accurately.

> https://arxiv.org/abs/2506.08872

There was a MSFT and Carnegie Mellon study that saw a link between AI use, confidence in one's skills, confidence in AI, and critical thinking. The takeaway for me is that people are getting into "AI take the wheel" scenarios when using GenAI and not thinking about the task. This affects novices more than experts. If you kept doing the critical thinking, and had committed sufficient code to muscle memory, perhaps you aren't as impacted.
order-matters: I think your environment plays a big role. With AI you can kind of code first, understand second. Without AI, if you don't fully understand something then you haven't finished coding it, and the task is not complete; if the deadline is too aggressive you push back and ask for more time. With AI, that becomes harder to do: you move on to the next thing before you are able to take the time to understand what it has done. I don't think it is entirely a case of voluntary outsourcing of critical thinking. I think it's a problem of 1) the total time devoted to the task decreasing, and 2) it's like trying to teach yourself puzzle-solving skills when the puzzles are all solved for you quickly. You can stare at the answer and try to think about how you would have arrived at it, and maybe you convince yourself of it, but it should be common sense that the learning value of a puzzle evaporates if you are given the answer.
ori_b: We're racing to build hell.
butlike: So now the ancillary question from your example is: "Is hand-spun cotton better than industrialized polyester?"
techteach00: I sort of agree with the premise of the article. I ask myself, did more non-technical people pick up AI chat bots when they were invented than picked up personal computers in the late 70s/early 80's? I think probably. From my conversations with others.
Forgeties79: Part of this is because we aren't paying the actual cost of these chatbots. If ChatGPT weren't essentially free for casual users then we'd definitely see a much smaller/slower adoption rate. I wonder whether even a single person paying for tokens isn't still substantially subsidized. Probably not, but I'm speculating. If 3D printers could've given usage away for years, directly in our homes, then I bet we would've seen wider adoption there too.
zozbot234: Chat bots can run on your local hardware these days, even mobile phone hardware. That's effectively free.
hansmayer: > random JavaScript, C#, Java, Rust and Clojure "manually"Right, sounds very credible to me. What did you write, an addition function in each of those?
jerf: AI is in spitting distance of being able to do that too.
geerlingguy: I sometimes wonder if the random people sitting there hawking a pile of Amazon goods that pops up after every Amazon purchase are already AI.
bandrami: This conversation keeps missing me because I don't think I've typed out boilerplate in like 20 years.Were people actually physically typing every character of the software they were writing before a couple of years ago?
jasonlotito: Others have addressed other aspects of this, but I want to address this:

> I cannot remember basic boilerplate stuff.

I don't know exactly what you mean by boilerplate stuff, but honestly, that's stuff we should have automated away prior to AI. We should not be writing boilerplate. I'd highly encourage you to take the time to automate this stuff away: not even with AI, but with scripts you can run to automate boilerplate generation (assuming you can't move it into a library/framework).
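As a minimal sketch of what such a script could look like (the template, file layout, and `scaffold` helper here are made up for illustration, not a real tool):

```python
#!/usr/bin/env python3
"""Tiny boilerplate generator: renders a module skeleton from a template."""
import sys
from pathlib import Path
from string import Template

# Hypothetical template; swap in whatever boilerplate you keep retyping.
MODULE_TEMPLATE = Template('''\
"""${name}: TODO one-line description."""


def main() -> None:
    raise NotImplementedError("${name}.main")


if __name__ == "__main__":
    main()
''')


def scaffold(name: str, out_dir: str = ".") -> Path:
    """Fill in the template for `name` and write it as `<out_dir>/<name>.py`."""
    path = Path(out_dir) / f"{name}.py"
    path.write_text(MODULE_TEMPLATE.substitute(name=name))
    return path


if __name__ == "__main__" and len(sys.argv) > 1:
    print(scaffold(sys.argv[1]))
```

The same pattern extends to project skeletons, config files, or test stubs: one command, no memorization required.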
bandrami: So many use cases for LLMs I've read leave me asking "did none of you have a working text editor?"
XCSme: I guess writing code is now like creating punch cards for old computers. Or, more recently, like writing ASM instead of using a higher-level language like C. Now we simply write our "code" in an even higher-level language, natural language, and the LLM is the compiler.
bilekas: > Now we simply write our "code" in a higher language, natural language, and the LLM is the compiler.

No we don't, and we never should, actually: compilers need to be deterministic.
jmstfv: tangentially related, but as someone who built multiple internet businesses -- mostly unsuccessful, some mildly successful -- I barely have any new ideas to work on.I don't know if this is the effect of relying on AI too much in my day-to-day work or leading a more monotonous life as of late, but I'm sure I'm not the only one. Lots of ideas that I could have built before LLMs took over now seem trivial to build with Claude & friends.
Cilvic: I can relate to this, in the past I felt like I could write down pages of projects to try if only I had time. Now my mind immediately goes towards "do I want to manage this long term after the initial spark".
DougN7: That made me wonder, honestly, if AI can build it, could AI manage it too?
cowl: Personal computers in the early '70s/'80s were a considerable investment for little to no gain, and especially no force-fed FOMO. It costs you nothing to install/adopt an AI chatbot, and it's being force-fed to everyone at a head-turning loss in order to justify the push.
AndroTux: Wait, I just deleted prod. You're absolutely right, that shouldn't have happened. My mistake.
fendy3002: In my 7th year of professionally programming Node, not once have I memorized the Express or HTML boilerplate, nor the router definitions or middleware. Yet I can code normally, provided there's internet access. It's simply not worth remembering; logic and architecture are worth more, IMO.
malfist: Einstein famously refused to memorize people's phone numbers, saying that he could look them up in the phone book whenever he needed them. I don't think there is much value in memorizing rarely used, easily looked-up information.
jenniferhooley: I think that most people are pretty short-sighted about the use cases right now (which is understandable given the negative feelings about a lot of what's currently going on). There are a lot of really useful things that were impossible before. But none of these use cases are "easy," and they all take years of engineering to implement. So all we see right now are trashy, vibe-code-style "startups" rather than the actually useful stuff that will come over the years from experienced architects and engineers who can properly utilize this technology to build real products. I'm someone who feels very frustrated with most of the chatter around AI, especially the CEOs desperate to devalue human labor and replace it, but I am personally building something utilizing AI that would have been impossible without it. But yeah, it's no walk in the park; I've been working on it for three years and will likely be working on it for another year before it's remotely ready for the public. When I started, the inference was too slow, the costs were too high, and the thinking-power was too poor to actually pull it off. I just hypothesized that it would all be ready by the time I launch the product. Which it finally is, as of a few months ago.
pixl97: With this said, a lot of people are likely worried about being eaten by whales when it comes to doing things with AI.It's kind of like dealing with Amazon, or any other company that has both compute and the ability to sell the kind of product you make.Said AI providers can sell you the compute to make the product, or they can make the product themselves with discounted compute and eat all the profits you'd make.
himata4113: Every time I see these, I ask myself: is Microsoft Copilot a problem of implementation, or of the capability of the models? If you put people who have never used a computer in front of one with Copilot everywhere (and I mean not the way it is now, but where you're presented with a chatbox in the middle of the screen and you just ask the computer for what you want), I am 99.99% sure that everyone would prefer that chatbox over trying to figure out how to use a computer. That's why I am not quick to discredit "microslop"; they're most likely pivoting Windows toward what it will look like in the future. Obviously, the strongest argument here is that it should have been an entirely different product, such as "Windows AI", where the entire system is designed around it. But if you look at their current implementation, it's more of a copilot which is just there, letting you know it exists. Obviously not all of these features were thought through. Recall, for instance, should have been dead and buried, since it doesn't offer that much real value compared to a magical box that takes in English sentences and does roughly what you want.

At the end of the day, it's a question of whether AI is doing, or will do, more harm than good. AI has really only existed in this form for a little more than 3 years, and it really started shining with the advent of Opus 4.5. We went from models producing more security vulnerabilities than one can count to fixing obscure human-made ones, and the capabilities will keep increasing (if Anthropic is to be believed). We will enter an era where AI has 95%+ accuracy in doing what a typical computer user would want from it, and there's really nothing anyone can do to stop it. So my opinion is that AI will be the next big thing, and it might spread way beyond what we can even imagine. I think we will see non-technical people simply talk on the phone with an AI agent: register a domain and get a website done within a one-hour phone call, all for pennies, while the AI has access to their financials, mail and other things. All of that is relatively possible today, with the simple caveat of security, and I do believe we have enough smart people in the world to figure out how to make AI better at rejecting social engineering than 99% of humans.
chromacity: > I have ZERO doubt that if you put people that haven't used a computer in front of one ... presented with a chatbox in the middle of the screen and you just ask the computer what you want I am 99.99% sure that everyone would prefer to use that chatboxI don't know. We've been telling ourselves things like that about user interfaces for a long time. For decades, it was pretty much universally understood that everyone would prefer to talk to their computer instead of using a keyboard. Now that you can, no one really wants to. In fact, now that we can text / email / IM other people, we don't talk to them as much as before.One obvious problem with the interface you're proposing is that sometimes, it's easier to do the thing than to explain precisely what you want. For example, it takes much longer to ask ChatGPT what's the weather forecast for this week, and then read the flowery response, than to press Ctrl-N, "wea", enter, and see it at a glance in a consistent format with pictograms.
I_am_tiberius: Soon everyone will run local models for simple stuff like that.
dukky: I thought this comment was going the opposite way - previously no internet/googling but now you can run a local model and figure things out without the need for internet at all
wanderingstan: Mine as well. Two years ago my mind was blown that I could code in a language I didn't know (Scala) while on a long train ride with no internet (Amtrak), using a local model on a laptop. Couldn't believe it.
jimbokun: The staggeringly effective compression of LLMs is still underappreciated, I think. Two years ago you could already download onto your laptop an effective and useful summary of all the information on the Internet, one that could be used to generate computer programs in an arbitrarily selected programming language.
himata4113: You already know how to use a computer or a phone, but take someone who has never seen or used a smartphone, computer or a laptop. I think the story will be very different.
chromacity: I don't know. In a vacuum, if we prevent them from ever finding out that there's a faster way with less cognitive overhead? Sure.But in practice, people pick up stuff from each other. I'm old enough that learning to use the computer mouse needed to be a deliberate effort on my end. I never really had to "teach" that to my kids, they just picked it up naturally.I'm sure that you could've made an argument that it would have been more intuitive to do another sci-fi staple: just wave your hands around in the air to manipulate objects on some futuristic-looking translucent screen. But in practice... no one wants that because it might seem more intuitive but is less comfortable in the long haul.
vjvjvjvjghv: "Yes. It's a mostly shitty but very fast and relatively inexpensive replacement for things that already exist."Wouldn't that apply to most technological advances? Cars, computers, cell phones.
dodu_: Yes, but I'm not the one who introduced the "novel" constraint to the argument.e: Also I don't know that I'd strictly bucket these specific examples you gave as shittier versions, though I guess that's a matter of perspective.
elAhmo: Well, we are not paying for Gmail, Youtube, TikTok either, all sorts of other services that are free as well.
pixl97: Well, we are paying for it, but not directly with cash.
aworks: It was a long time ago, but I attended a session by IBM at an OO conference. The speaker's claim was that the half-life of programming language knowledge was 6 months, i.e. if not reinforced, that's how fast it goes. I learned the Q array language five years ago and then didn't touch it for six months. I was surprised how little I remembered when I tried to resume.
sweezyjeezy: Well, this is HN, so a lot of us are pretty terrified of your 1). We went from 'you have a good job for the next couple of decades' to 'your job is at extreme risk of disruption from AI' in the space of about 5 years. Personally, I have a family, I'm a bit old to retrain, and I never worked at a high-comp FAANG or anything, so I can't just focus on painting unless my government helps me (note: not US/China). That's extremely anxiety-inducing, and a vague promise of novel new things does not come close to compensating.
Jcampuzano2: I'm 33, and I feel sort of lucky that I'll still potentially have time to retrain. I'm fully prepared for the likelihood that within the next 5 years or so (and potentially much less) I'll need to retrain into a trade or something to stay relevant in any sort of field. Many people claim it's going to become a tool we use alongside our daily work, but it's clear to me that's not how anybody managing a company sees it, and even the AI labs that previously emphasized how much it's going to augment existing workforces are now pushing being able to do more with less. Most companies are holding onto their workforce only begrudgingly while the tools advance and they still need humans for "something", not because they're doing us some sort of favor. The way I see it, unless you have specialized knowledge, you are at risk of replacement within the next few years.
SideburnsOfDoom: I firmly believe that renewable energy, the solar+battery+EV stack, not LLMs, really is the biggest technology transformation of our times. Renewable energy really is surging; it's just on a longer timeline, and unlike LLMs it doesn't benefit venture capitalists to hype it. In fact, many existing sectors deliberately downplay it. But we are in the middle of it. Robotics? Lights-out operations in automated factories are already a thing, so I don't know if they're the "next thing". mRNA vaccines? Sure, they're a huge medical advance, with great potential in that area. But it's just an area. Space? Maybe, if we get past LEO, find something useful to do there, and don't succumb to Kessler syndrome.
pixl97: > Robotics? lights-out operations in automated factories are already a thing, so I don't know if they're the "next thing".

Eh, I do think this is kind of underestimating the changes in robotics that are occurring. LLMs incorporated with other ML kernels extend the capabilities a long way. That, and the amount of computing power now usable to train robotics is far, far larger.
damnesian: Hard to understand, when essential human nature is so predictable? Sure, we will do novel things with it. But society in the main will use it to exploit labor. Same as it ever was.
anthonypasq: are you under the impression life was better before capitalism?
sweezyjeezy: That's a false-dichotomy. Capitalism was good for artisanal workers before the industrial revolution, and then it became pretty goddamn bad for them. We're worried we're staring down the barrel of that right now - just saying 'well it was even worse before capitalism' does nothing for us.
falcor84: Maybe it's my memory issues, but I personally could never remember basic boilerplate. 30 years ago I would spend half of my time in Borland's help menu coupled with grepping through man pages. These days I use LLMs, including ollama when on a plane. I don't feel worse off.