Discussion
The Uncanny Valley and the Rising Power of Anti-AI Sentiment
noosphr: I wasn't expecting the next culture war to be about AI. The AI-equality Thanksgiving dinner fights in 20 years are going to be wild.

> I'm not a bigot, I support trans rights. But clankers aren't welcome in my house.

>> OK Millennial. I'm a cyborg with 95% of my brain running on a private server.
Forgeties79: Any discussion about AI/LLMs/etc. is incredibly complicated. I could go on and on elaborating on this, but I'm just going to leave my preface at that.

There is one thing I have found to be true over and over again, no matter the anchor point for the conversation, no matter the context, no matter someone's sentiment: nobody likes to have their time wasted.

LLMs are incredibly useful for cutting corners, which makes it very easy to waste people's time. No matter how useful they are, no matter the use case you have found, no matter the integration, people keep encountering bad search results and people sending them clearly LLM-generated work that wastes their time.

Unless somebody comes up with a cure for that, there will always be a significant portion of the population that is hostile to LLMs, and rightfully so! No promise of productivity will overcome that.

TL;DR: the biggest problem with LLMs is that they enable people to waste other people's time.
jmathai: Both things can be true. AI can help you in the near term and harm you in the long term. I think the more people use AI, the more their view shifts from the former to the latter.
zarzavat: AI is a technology with the explicit end-goal of substituting energy for people. It's not intended to benefit the common man, it's intended to benefit the capital owning classes.
yellow_postit: This is a very US-centric view, in part because of the low confidence in the American government's ability to effectively regulate negative impacts. https://hai.stanford.edu/ai-index/2026-ai-index-report/publi...
blululu: > Public hostility toward AI now looks stronger than ordinary skepticism toward a new technology. People have reasons for that response, including fraud, misinformation, privacy invasion, concentration of power, and job displacement. Job displacement carries its own emotional weight because it threatens status, livelihood, and social usefulness, which gives the fear an existential edge.

> This essay explores why anti-AI sentiment may be gaining force.

The article lists all the obvious and credible reasons why people are opposed to AI in the intro paragraph. It then spends the next 25 paragraphs advancing a very clever pet theory derived from psychology about what might be going on here. While interesting in its own right, the article misses the obvious concerns it raised in its own intro paragraph.
klik99: The company whose blog this is sells "AI-assisted clinical documentation" — I feel this is an attempt to explain anti-AI sentiment as an unreasonable aversion to AI rather than engaging with the real reasons for it. There's a weird trend in the AI industry of pathologizing people who don't like AI.
ang_cire: As a millennial, I will be the first to run my brain on my toilet homelab servers.
furyofantares: The usual left/right haven't managed to pick a side and consume this. Maybe they still will; I don't know who would get which side, though.

A cynical part of me says it's something everybody can hate. I can see both sides taking that position, but I can't see either side embracing it as part of the left or right identity.

Maybe it's more a conflict between those with power and those without. Like return to office, or open offices, or cubicles before that, and probably many other things going back to the Luddites and earlier.
forgetfreeman: I find it offensive that comments which appear to be legitimate additions to the conversation are downvoted into oblivion and then flagged, without a single response suggesting where the author was in error. That is not what I would expect from an ostensibly neutral platform that claims to be dedicated to discussing technical issues on their merits.
lizknope: Every PC gamer / hardware tech review forum is full of anti-AI hatred. People want to buy a new GPU, add RAM, a new SSD, or a hard drive. All of these have doubled or quadrupled in price in just a few months.

Then there are reddit threads every day where I think 30% of the original posts and comments are AI-generated spam. If I see a post full of em dashes, or anything that ends by asking for "thoughts?", I just downvote and report it as spam. I want to interact with actual humans, not AI bots.

Then we see posts about AI data centers and electricity use, which will lead to higher electric bills for ordinary people if demand outpaces supply. And this is ignoring all the stuff about people losing jobs.

So why should the video-game-playing population, or even the general population, support AI? Of course it has uses, but there are so many negatives right now that it is easy for me to understand why people are already sick of it.
TurdF3rguson: I've been thinking the opposite. It sucks to be in the generation of workers that are displaced by AI. It's going to be great to be in the generation where work just isn't something that humans are expected to do.
heyalexhsu: Because you can get Hollywood visual effects from Nvidia DLSS 5! /s
JoshTriplett: There is a dead comment at https://news.ycombinator.com/item?id=47829571 that deserves to be not-dead, and read and understood.
shmerl: Add to that the disgusting degree to which it's forced on people by those who want to profit from it. Surely people will like it.
JoshTriplett: It's not "weird", it's hostile marketing. "How do we overcome the negative sentiment we see as an obstacle in order to sell to people who don't want it, or people who will be around people who don't want it?" It's an entirely natural, commonplace, awful thing. See also "how do we market cigarettes" and "how do we maximize social media engagement" (the latter being one reason outrage gets amplified).
ronsor: There was always content that wastes people's time, because people have always confused length and complexity with comprehensiveness and depth.

These were always poor proxy metrics for "good content," but in a lot of environments, especially professional ones, they were how work was evaluated. Naturally, others used LLMs to generate content that satisfies these metrics.

The slop epidemic is a consequence of what people erroneously valued for so long. Now they have it, and it's meaningless; and even if most of it was always meaningless, they can't easily tell the difference between "fluff with something meaningful" and "fluff with only fluff" anymore.
Forgeties79: That is my point. It doesn’t matter what they’re built for or the ideal use is. People are using them to waste other people’s time.
SpicyLemonZest: I don't think this link supports your claim. All English-speaking countries in the "Opinions about AI by country" chart have 60%+ of people who are nervous, in every country but Japan at least 40% of people are nervous, and there's no obvious correlation with the "trust in government regulation" data further down.
tw04: The right is desperately trying to figure out how to get their commoners on board, but the only narrative they've got so far is: "this will help us kill terrorists." That story rings pretty hollow when the orange one campaigned on no new wars and they're trying to blame AI for their decision to bomb an Iranian school.
gondar: AI is useful. A small group of people are going to acquire immense wealth and power from this new technology. 80% of everyone else will face the possibility of losing their jobs, or reduced income if they still have jobs. This will be another rust-belt decades, but for white-collar jobs and coastal states.
klik99: I find it weird because I've seen traces of it before, in people who believed in the singularity 20 years ago and really did think anti-AI sentiment was pathological. Back then the stakes didn't seem as real and immediate as they do now, and now you can see it on pro-AI reddit subs. But I agree that the language and attitude are co-opted for marketing purposes, for example last year when there was a lot of talk about doomerism.
gruez: > Then we see posts about AI data centers and electricity use which will lead to higher electric bills for ordinary people if demand is higher than supply.

That hasn't really played out in reality. The correlation between datacenter capacity growth and electricity price growth is poor. https://www.economist.com/content-assets/images/20251101_USC...
Aerroon: That's what the whole UBI thing was about though. People did see this coming and wanted to preempt it. I'm not sure whether it would've worked, but people did try to come up with solutions for this transition period.
gdulli: Everyone wants to use AI to create work and no one wants to consume or be downstream of AI created work.
Forgeties79: Bingo
JoshTriplett: Yeah. There are many critical safety concerns, and somehow people with vested interests in AI have tried to spin that as "oh, it's astroturf marketing by the AI companies to make it seem like their products are dangerous and therefore powerful, just ignore it." Which is simultaneously trying to promote the products and dismiss the opposition. It's infuriating, and blatantly wrong, but it's also a natural consequence of "it is difficult to get a man to understand something when his salary depends upon his not understanding it" [1].

[1] https://quoteinvestigator.com/2017/11/30/salary/
moron4hire: We are never going to live in a society that doesn't expect people to work. There may not be enough work for half the population, but people will still be expected to work to live. We already live in a society that could feed every last poor person and we still choose not to, cuz "but muh tax dollars!"
TurdF3rguson: I mean, assuming we don't hit some limit with AI, we're going to get to the point where the best way humans can affect productivity is to just get out of the way.
TurdF3rguson: There's still plenty of time to figure it out. You're making it sound like it's already too late.
enraged_camel: It is AI-generated, which is why it got killed.
eudamoniac: Nah, AI always has its em dashes with no spaces around them.
zarzavat: Replacing workers in specific industries is one thing. AI is trying to replace people in general. Expecting new jobs to pop up is misunderstanding the goal of this technology which is to eliminate jobs.
sltkr: How have you determined that? Are you basing it solely on the em-dash (which is trivial to avoid if you want to generate AI comments)?
userbinator: > The public was much more likely to say AI would harm them than benefit them.

There are so many things called "AI" these days that studies like this are basically meaningless. I think (hope) most people's views can't be reduced to a single binary question.
WastedCucumber: I think these studies aren't meaningless at all, but the fact that "AI" is a loosely used term means that many people might view even more simple ML methods with skepticism, as opposed to just, say, chat-like LLM tools.
userbinator: There's also a difference between using AI as a tool for creation, and as an oracle for truth.
moron4hire: I'm still waiting for the last ML movement to revolutionize business intelligence. Back when regression models were going to give us all forecasting. Turns out garbage in still equals garbage out and there still aren't any silver bullets. The organizations that couldn't get their act together to collect good data about their businesses for traditional analysis methods to work are shock-faced that model-overfitting writ large isn't saving them from their doofus C-suites.
moron4hire: You can't regress to the mean and call it creation. LLMs don't make novel content. This is why all the people using AI-summarizers to understand their boss's AI-expanded micromanaging emails aren't getting anything new done. Anti-compression is going to accelerate climate change.
userbinator: I am talking about generative AI, not asking LLMs for answers.