Discussion
AI may be making us think and write more alike
misterflibble: Subtly? I beg to differ. My team leader only communicates to me using his LLM and so his "thoughts" are not his own!
ModernMech: AI doesn't have to be conscious or sentient to take over, all that needs to happen is for politicians, law enforcement, journalists, educators etc. to uncritically parrot everything it outputs. The military is already using AI to make targeting decisions. If they just go with whatever the AI says to strike, then AI is already fighting our wars.
trollbridge: As a bonus, mistakes can be blamed on AI.
paganel: > contributed to the research, which was supported by funding from the Air Force Office of Scientific Research.

I guess when they're not busy bombing train infrastructure in Iran they have some money left to give to some propagandizing about AI. Always try to stay on top of the game!
anizan: Social media is a tool for perpetuating monothought
rdevilla: This state of affairs presages the advent of a second dark age, one that will forever eclipse the era of radical openness & transparency that served the software community for decades. Tips, tricks, life hacks and other expert techniques will once again be jealously guarded from the prying eyes of the LLM who would steal their competitive advantage & replicate it at scale, until any possible information asymmetries have been arbitraged away. The development & secrecy of technique will once again become a deep moat as LLMs fall into local, suboptimal minima, trained on and marketed towards the lowest common denominator.

The Internet, or at least The Web, becomes a Dark Forest of the Dead Internet (Theory), in which humans fear speaking out and capturing the attention of the LLM that would siphon their creative essence for more, ever more training data. Interaction contracts into small meshes of trusted, verifiably human participants to keep the tides of spamslop at bay. Quasi-monastic orders that still scribe with pen and paper emerge, believing there is still value in training and educating a human mind and body.

- Unknown, 2026
krige: For many that's not a bonus, that's the goal. Consequence-free life ahoy.
misterflibble: The scary thing is that AI decision making has been infiltrating society for decades as an unseen entity.
downboots: It's not explanation — it's relabeling. Why it matters:
axpvms: You're absolutely right
danielbln: There were no "dark ages"; that's the same kind of common-wisdom blunder as "in the middle ages everybody was dressed in drab grey clothing, ate gruel and walked through mountains of poop everywhere". It was a time of transition away from the slave-powered empire toward decentralized kingdoms and ultimately the Europe of today. It was by no means a time of standstill.
Brendinooo: I would imagine a similar critique was leveled at the written word when it was starting to supplant oral cultures.
stared: You are absolutely right!
mhl47: Social media creates distinctive filter bubbles. A dominant LLM company (or multiple aligned ones) creates one way of thinking.
uncanny2: I have made an observation that others have not discussed: the real gem of our collective LLM experience is the proper documentation of “skills.”

Am I the only one who has noticed that the documentation of skills we write for LLMs, after so many decades of neglecting junior and mid-level roles, is the real work? We carefully explain to our LLMs the policies, procedures, and practices which, for generations before, we vaguely, arbitrarily, and ambiguously expected each human in a role to “figure out” for themselves.

Simply as a catalog of expectations, our experiences have been valuable, quite apart from the “automated” aspects the LLMs provide.
SecretDreams: I would be looking for another job.

I'm fine with using LLMs as coding tools. But I find it deeply offensive when someone is very explicitly using them to communicate with me. Communication is such a deeply human experience. It lets people feel each other out, and learn things beyond just the words being said. To have that filtered out by an LLM is just disgraceful.
pixl97: Fun and games until the AI decides extincting us is worth it.
eru: Well, Plato's sock puppet Socrates famously opposed writing with pretty much these arguments.
plastic-enjoyer: No, he did not, and it would be good if people _actually_ read Plato's Phaedrus before regurgitating the same nonsense every time someone has a critical perspective on LLM writing.
jerrygarcia: I often wonder if the popularity of LLMs among company executives is that they are the perfect yes-men. They rarely disagree with any idea or proposal, providing a salve for the insecurities of their users.
davebren: I was listening to one of Altman's more recent interviews and it sounded like he himself has LLM induced psychosis.
r_lee: I remember him tweeting about how he can "feel the AGI" when speaking to GPT
sumeno: Good luck finding a company that doesn't have these people if LLMs are used
avaer: Just because thoughts are translated doesn't mean they are consumed in the process.

However, I don't doubt many "team leaders" can and should be replaced with LLMs.
beached_whale: This is one of my fears with this: losing one's voice. Everyone's expression distilled to the mean. This also has ramifications for things like recognizing whether a person is who they say they are. At least currently, it is punished/shunned to sound like an LLM, but it's well within reason to see that shift to individuality being penalized.
misterflibble: I think corporations will start penalizing first, they're already doing that to some extent at my work because they want their in-house agents to only review our PRs.
jessep: Yeah, I’ve noticed that people have started to sound like LLMs even when the LLMs aren’t writing for them. Not stupid people. Not lazy people. Some of the smartest people I know —- I can’t figure out how to use an em dash here, but you get the point.
sobiolite: Human communication and reasoning is the end result of billions of years of evolution. I'd be very surprised if LLMs could fundamentally alter it in a few years.

When considering phenomena like these, I think people seriously underestimate what I'd call the "fashion effect". When a new technology, medium or aesthetic appears, it can have a surprisingly rapid influence on behaviour and discourse. The human social brain seems especially susceptible to novelty in this way.

Because the effects appear so fast, and are often so striking, even disturbing, due to their unfamiliarity, it is tempting to imagine that they represent a fundamental transformation and break from the existing technological, social and moral order. And we extrapolate that their rapid growth will continue unchecked in its speed and intensity, eventually crowding out everything that came before it.

But generally this isn't what happens, because a lot of what we're seeing is just the new thing occupying the zeitgeist. Eventually its novelty passes, the underlying norms of human behaviour reassert themselves, and society regresses to the mean. Not completely unchanged, but not as radically transformed as we feared either. The new phenomenon goes from being the latest fashion to overexposed and lame, then either fades away entirely, retreats to a niche, or settles in as just one strand of mainstream civilisational diversity.

LLMs will certainly have an effect on how humans reason and communicate, but the idea that they will so effortlessly reshape it is, in my opinion, rather naive. The comments in this thread alone prove that LLM-speak is already a well-recognised dialect, replete with clichés that most people will learn to avoid for fear of looking bad.
drtz: This could also be explained by the frequency illusion: https://en.wikipedia.org/wiki/Frequency_illusion
ori_b: Knowing people have gone full "LLM-brain", it's not subtle.
robofanatic: Well, in a few years I'm not sure I will know how to think any more. If I am stuck on something I just ask the LLM and get the solution. While this shortcut sometimes saves me a ton of time and headaches, I miss that long route of thinking and getting to a solution myself. Maybe in the future we will have gyms for brain workouts… I don't know.
everdrive: So too did the printing press. Again, this is not a "something similar has happened in the past, therefore this is nothing new" sort of comment. This is quite new; however, the outcome was totally unavoidable: once methods of communication become widespread and centralized, it is impossible for them not to impact language and thought.
nidnogg: Guilty as charged. When I'm insecure about a response, or I don't have enough expertise in the topic at hand, I end up running it through an LLM. Lately I've been trying harder to keep my original ideas as much as possible. I'm seeing a bit of an improvement, but it's still too early to tell.
giancarlostoro: English is not my first language, but when I started using Firefox with the built-in spell correction, I firmly believe my ability to spell words improved drastically. My grammar is still iffy; I'm pretty sure I do comma splices everywhere, but at least most people can understand what I say now compared to when I was 13 and on the internet.

If there were a "grammar nazi" teeny-tiny LLM with a total focus on English grammar only, and you baked that into every browser, I feel like my grammar would improve slightly. Word does it to an extent, but I don't use Word nearly enough for it to be meaningful. Firefox spell checking was on for 98% of the things I wrote online.
Joel_Mckay: Some play this every day, as vocabulary will improve in time =3

https://play.freerice.com
jeffwask: Take a community with AI moderation, like Reddit, where I've been a participant for years. With the recent push to AI autocorrect and moderation, you can see the changes in language. New words, new ways of speaking, unconsciously editing yourself because you don't want to draw the eye of the bot. It doesn't feel subtle. It feels Orwellian.
RobotToaster: It's particularly egregious on youtube, where people frequently use words like "unalived" or "self-deleted" instead of murder or suicide, lest they incur the wrath of the almighty algorithm.
davebren: This is my current fear: even if I choose not to use it, if everyone around me does, their way of speaking is all going to become more chatbot-esque. It already seems to be transferring its false sense of confidence, and its lack of reasoning ability, to people. The corporate demand to participate in this is something I can't do; the cost is our humanity.

I guess one hope for luddites is that we can stay tethered by reading pre-LLM books and other content.
davebren: There are plenty of people communicating more with LLMs than with humans right now; of course it's going to have an effect, because our language and thought patterns are extremely adaptive to our environment. The communication system we are born with is extremely bare-bones and general, so that it can absorb whatever language and culture we are born into.
npsomaratna: Somehow made me think of Warhammer 40k (maybe pre men of iron?)
plasticchris: It’s a recurring theme, see dune’s references to Samuel Butler.
break_the_bank: Wrote about this a while ago actually; I called it the Billion Steve problem - https://x.com/gyani1595/status/2034652087494090829
mpalmer: Think of all the things that took hundreds, thousands, or millions of years to develop and mature, which humans have managed to destroy in relatively short order.

Every 50 years we cycle out an entirely new batch of thinking humans. What cognitive legacy is it, exactly, that you think is going to be self-preserving?
nusl: https://not-an-llm.bearblog.dev/meat-based-llm-proxies/
misterflibble: That's terrific lol thanks for the link BTW!
SketchySeaBeast: I say this with a multiple decades-spanning love of the game and the lore, but Warhammer 40k is what you get when teenagers try to create something immediately after reading Dune.
misterflibble: Yes true! It's everywhere now!
embedding-shape: "running it through an LLM" doesn't mean "Give LLM my text -> Copy-paste the output of the LLM" does it? Checking against an LLM then using your own voice feels completely fine, just another type of validation before you share something, but if you actually let the LLM rewrite what you say, then I feel like that's beyond "running it through an LLM", it's basically letting the LLM write your text for you instead of just checking/validating.
misterflibble: Yes, checking and validation is one thing, but there are several engineers in my area who only communicate via agent copy-paste. I challenged one fellow about that and he was furious!
incomingpain: The LLM people call it "safety" but in reality it's censorship and conformity. Yet, it's trivial to get them to talk about how to make a bomb or whatever. It's mostly political in nature.

https://www.trackingai.org/political-test

You don't accidentally end up entirely left-wing libertarian.
dfxm12: > "running it through an LLM" doesn't mean "Give LLM my text -> Copy-paste the output of the LLM" does it?

The article seems to imply this is what is happening, as writing style converges towards LLMs' style. You can call it what you want, but the important bit is that this is how it appears that LLMs are being used.

> Checking against an LLM then using your own voice feels completely fine

Why use an LLM? If you're worried about style, starting with your own voice is more efficient. If you're worried about facts, looking something up in a primary source is best, and is probably cheaper on a few axes, especially if you need to check/validate anyway...
normalaccess: There is a reason Coke spends ~5 billion dollars worldwide marketing sugar water... It works. Monkey see, monkey do. Simple as that.
thatjoeoverthr: I've been calling them "meat condoms". In the workplace, it's one or two warnings before completely ejecting them. On social media, instant block.
nusl: Seems to be becoming more common, even for folks that are otherwise quite pleasant to deal with. Perhaps social and workplace pressures cause people to opt for it, much like LinkedIn is a cesspool of bullshit.
rimliu: How exactly did the printing press do that?
everdrive: It's actually an interesting topic; the printing press fostered a great normalization of the English language that was not previously present.

https://academic.oup.com/book/41217/chapter-abstract/3506879...

https://en.wikipedia.org/wiki/English-language_spelling_refo...
SketchySeaBeast: That seems to me to be an example where the language is forced to change but the thoughts remain the same. Sure, people are using the "safe" terms, but they're using them to continue to subvert the rules, not to bow to them.
jerf: > say that AI developers should incorporate more real-world diversity into large language model (LLM) training sets

Are you kidding me? How much more "real-world diversity" could they possibly incorporate into the models than the entire freaking Internet, plus every scrap of text written on paper the AI companies could get hold of?

How on Earth could someone think that AIs speak like this because their training set is full of LLM-speak? This is transparently, obviously false. This is the sort of massive, blinding error that calls everything else written in the article into question. Whatever their mental model of AI is, it has no resemblance to reality.
exe34: The problem isn't the diversity in the training set - the problem is that the method by design picks the average.
SketchySeaBeast: On the contrary, the printing press enabled people to quickly spread new ideas. Protestantism was enabled by it. That was quite the schism in thinking.
everdrive: Definitely agreed -- I wasn't precise enough when making my point, but I think your point is absolutely correct.
amelius: Just crank up the temperature.
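The exchange above is about sampling: models tend toward high-probability (i.e. average-sounding) tokens, and temperature is the knob that trades that off against diversity. A minimal sketch, with hypothetical logit values, of how temperature rescales a next-token distribution before sampling:

```python
import numpy as np

def temperature_probs(logits, temperature):
    """Softmax over logits scaled by 1/T.

    T < 1 sharpens the distribution (output collapses toward the
    most likely, most 'average' phrasing); T > 1 flattens it,
    giving rarer tokens more of a chance.
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

# Hypothetical logits for three candidate next tokens.
logits = [2.0, 1.0, 0.0]

low = temperature_probs(logits, 0.5)   # sharpened: top token dominates
high = temperature_probs(logits, 2.0)  # flattened: closer to uniform
```

Note the caveat implicit in the parent comment: raising the temperature only redistributes probability among completions the model already considers plausible, so it widens the menu without changing what's in the kitchen.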
lamasery: A ton of "incorrect" comma usage isn't even (historically) wrong; it's just currently unfashionable.

There was a reaction in the last century against poor writers with poor taste over-using punctuation and writing ugly, long sentences. The result was stern advice to students to eliminate punctuation and cut sentences up into tiny bits. These same students came out of this process believing this was correct writing, not a straitjacket put on them to keep them from hurting themselves. They unthinkingly cite Hemingway and borrow his clout, I suppose, judging almost all writing before Hemingway, and most after him up until the 80s or 90s, as "bad" even when it's the work of masters. They blame the author when their stunted literacy (learning to write can hardly be separated from learning to read, at least at the more-advanced end of "to read") leaves them, as adults, struggling with texts once meant for children.