Discussion
AI-Assisted Cognition Endangers Human Development
steve_adams_86: "Cognitive inbreeding" is an interesting (though maybe not entirely accurate) term for something I dislike a lot about LLMs. It really is a thing. You're recycling the same biases over and over, and it can be very difficult to tell if you don't review and distill the contents of your discourse with LLMs. Especially true if you're only using one.
I do think there's a solution to this, kind of, which dramatically reduces the probability and allows for broad inductive biases. And that's to ask questions with narrower scopes, and to ensure you're the one driving the conversation.
It's true with programming as well. When you clearly define what you need and how things should be done, the biases are less evident. When you ask broad questions and only define desired outcomes in ambiguous terms, biases will be more likely to take over.
When people ask LLMs to build the world, they will do it in extremely biased ways. This makes sense. When you ask it specifics about narrow topics, this is still a problem, but greatly mitigated.
I suppose what's happening is an inversion of cognitive load: the human is taking on more and selecting the bias, such that the LLM is less free to do so. This is roughly in line with the article's premise (maybe not the entire article, though), which is fine; I think I generally agree that these are cognitive muscles that need exercising, and allowing an LLM to do it all for you is potentially harmful. But I don't think we're trapped with that outcome. We do have agency, and with care it's a technology that can be quite beneficial.
bomewish: Doh. I went in expecting a really cool thesis — because the idea seems somehow intuitive, or at least really intriguing. But I have no clue what I read. Just totally odd and unconvincing. Greenland? Dialectal substrate? The idea is still super intriguing to me though!
chunky1994: Does anyone use LLMs in such a manner that they believe they always have the most up-to-date information (without web search tools)?
Isn't this whole thesis negated by the fact that tool-calling web search exists? This just feels like a whole lot of words to say: don't treat an LLM as an always up-to-date, infallible statistical predictor.
SegfaultSeagull: It’s a bit ironic that the author includes an AI generated audio version of the article, you know, so we don’t have to read it.
yakattak: Sounds great for people who are visually impaired.
Retr0id: One of my "let's try out this vibecoding thing" toy projects was a custom programming language. At the time, I felt like it was my design, which I iterated on through collaborative conversations with Claude.
Then I saw someone's Show HN post for their own vibecoded programming language project, and many of the feature bullet points were the same. Maybe it was partly coincidence (all modern PLs have a fair bit of overlap), but it really gave me pause, and I mostly lost interest in the project after that.
jbethune: This was a bit word-salad-y, but I share the same basic concern. I think I worry more about the tendency toward greater and greater cognitive off-loading to LLMs. My sister told me a story the other day about how she caught her plumber using chatgpt on his phone to fix an issue with her bathroom. I just think it's good for humans to know how to do stuff.
dfee: your sister offloaded to her plumber.
her plumber offloaded to chatgpt.
"i just think it's good for humans to know how to do stuff."
are we talking about your sister or her plumber?
jessetemp: The plumber obviously. Not everyone needs to know how to be a plumber, but a plumber should know how to be a plumber
zozbot234: At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
palmotea: Oh no, not that tired thing again. I suppose your point is: people once were critical of the technology of writing, so all criticism of the technology-at-hand is illegitimate. You don't actually make a point, so one has to assume.
Some points:
1. Technological invention is not a repetition of the same phenomenon. Each invention is its own unique event; you cannot generalize the experience with previous inventions to understand the effects of the latest ones.
2. Socrates may have been in large degree right. Imagine that you and your society have been locked in the sewers, condemned to wade in shit for so long that you and your ancestors long ago forgot what fresh air feels like and that the sun even exists. What would you think about your life? Would you think "this is horrible" or "this is fine"?
partyficial: He (zozbot234) could also be agreeing with OP, not disagreeing.
I don't remember phone numbers anymore. If I were to lose my phone, or the cloud, I'm SOL re-adding everyone.
askonomm: So you are saying that a plumber does not in fact need to know how to be a plumber?
eaglelamp: You're misinterpreting the quote. Socrates is saying that being able to find a written quotation will replace fully understanding a concept. It's the difference between being able to quote the Pythagorean theorem and understanding it well enough to prove it. That's why Socrates says that those who rely on reading will be "hard to get along with" — they will be pedantic without being able to discuss concepts freely.
Likewise with AI: the appearance of reasoning without the substance could lead to boring exchanges of plausible slop rather than meaningful discourse.
hn_acc1: Sure, but.. I've been coding for 40 years and I don't know everything. To me, a LOT depends on what the plumber asked chatgpt about. For example: building codes in that city, to figure out what his options are - like, is he allowed to just put in any old toilet, or is there a gpf restriction? What's the replacement part number for faucet XYZ's gasket? Those seem reasonable.
"how do I fix a clogged toilet?" would be bad..
_verandaguy: While I agree with your rebuke of the GP, Socrates was materially wrong about writing (or at least, about the ability to persist information beyond any single human lifetime).
Cumulatively, knowledge work (including, in particular, curating knowledge) is exceptionally energy intensive from an evolutionary standpoint. It does pay dividends, clearly, but to get compounding effects from it, being able to efficiently pass down big corpora of facts, ideas, processes, etc., is an absolute necessity.
Writing systems are the fundamental way through which we can do this. They worked for us for millennia, and we eventually built upon them to develop encodings used today to store information remarkably densely.
bluGill: The larger win from writing is passing down things that are not commonly needed. If you hunt antelope every year, I can teach my kids. But if we know there are antelope "over there", and they are easy to over-hunt, so we only hunt them in 100-year droughts, then nobody in the village will know how to hunt them when we need to, and so we need writing. (Never mind how we figured out that they are easy to over-hunt.)
sidrag22: Without a doubt, yes. I'd encourage you to just try a session on a free chatgpt account, asking questions you think a parent or someone unfamiliar with the space would probably ask. It will not only answer confidently incorrectly, but it will not web search in obvious scenarios where it should.
The words here aren't meant as a warning for people in this type of community falling victim to this type of thing; it's more for the general public that doesn't grasp the tools they are using, the people that won't ever wander across this article.
This, I think, is a huge reason we really need to jump into LLM basics classes or something similar as soon as possible. People that others consider "smart" will talk about how great chatgpt or something is, then someone will go try it out because that person they respect must be right; they'll hop on the free model, get an absurdly inferior product, and not grasp why. They'll ask something that requires a web search to augment info, not get that web search, and assume the confidently incorrect agent is correct.
The thesis is also, I think, not entirely about not having that modern info at query time; it's more scattered. Someone asks what product they should use to mash potatoes, and a tool is suggested. Everyone who asks then receives that same recommendation, and instead of having a range of different styles of mashing potatoes, we all drift closer toward one style, and the range of variance in how food is prepared is slowly getting lost.
quirkot: regarding #2: how many serfs came home after re-digging the toilet hole to eat a meal of hand-milled grain bread and old vegetables with the members of the family that survived infancy and thought "life just doesn't get any better than this"? Probably almost all of them
pixl97: I mean, Socrates said enough stuff that was wrong or lacked any scientific understanding, either.
Simply put, at humanity-wide scales, written information is by far the most important thing you can have. There is a kind of sorites paradox occurring, where individual knowledge that can be held by one person conflicts with systems knowledge that has to be redundant and easily transferable.
thwarted: The issue here is that the sister could have used ChatGPT herself, so why bother hiring the plumber. The plumber has provided less value than was expected. But make no mistake: the value the sister was looking for was to have someone else deal with it, and there's a price that the sister was willing to pay for the service of having someone else deal with it.
In the comments of this HN post, there is a dead comment from someone who posted an LLM's summary of another comment. It's dead because it offers very little/no value: that summary could be obtained directly from ChatGPT by anyone who wants a summary.
The sister offloaded plumbing to the plumber under the economic principle of comparative advantage. The plumber undermines the value they provide by outsourcing yet again. What value is provided by the middle man who does nothing but proxy the issue? Is the person who does this really a plumber? Is a plumber merely someone who has plumbing tools like wrenches and pipe tape?
That the plumber also wanted to outsource it is the concern: right now, the plumber is able to make money because of the difference between what is charged to deal with a problem and what it costs them to deal with it. Knowledge and experience have become a commodity, which we probably can't do anything about, but along with that come all the drawbacks (and advantages) of things, and humans, being commoditized.
danielbln: I'm a software engineer and know how to be a software engineer, yet I find LLMs quite helpful. Why should a plumber be any different?
daveguy: Because if a plumber moves fast and breaks things, I've got shit all over the place.
comboy: I mean, yes, but LLMs have been making me more cognitively active. I've learned how to do more stuff than I would have without them, and it's a decent multiplier, not some rounding error.
Obviously you can have a plumber who knows his stuff and one who doesn't. The good one can check some details and will recognize bs. If you already have the bad one, it's probably better if he uses an LLM than if he doesn't.
hdndjsbbs: The irony of quoting this particular story without providing any of the necessary context for readers. Truly an aid to reminiscence and not memory.
YackerLose: A real artificial intelligence would be capable of independent and original thought. What we have today are mere plagiarism factories. They need to be called out for what they are.
tbrownaw: Writings are fixed once written, and don't update themselves as the world changes.
Writings are subject to known biases such as publication bias, and so relying on them reduces the range of what you can consider.
Therefore, writing is bad for the same reasons that this post thinks that AI is bad.
asdfman123: While I understand what the paper is saying, I'm not sure if what I read was written by someone who is smarter than me and naturally goes higher up the abstraction tree, or by someone who just wants to write really smart things.
Either way, I think there's a much simpler way to express what she's trying to say: offloading thinking to AI is bad because it's less flexible and doesn't easily update its reasoning with new information.
layer8: It’s a blog post, not a paper.
measurablefunc: Calculators endanger the development of mental arithmetic skills as well.
adamtaylor_13: One thing that has always been true of human communication, and is becoming increasingly obvious to me through my interactions with LLMs, is the art of asking a good question.
The framing of questions massively affects the results you get from discussions with humans, and I'd argue it's even more pronounced with LLMs.
alfalfasprout: Yep. And this is why, as hard as AI companies are pushing these tools as a replacement for expertise, it's ironic that the experts are often the ones who get the highest ROI: they know how to converse about the relevant subjects with a high degree of precision (and know what to look for, what to challenge, etc.).
gobdovan: By the logic that today's news is fundamental to know as true, there really is no point in reading books older than six months. If Einstein woke up from a coma, he'd be useless, as he doesn't even know who won the World Cup. For real now: if an AI can help you solve a problem using 2,000 years of human logic, does it really matter if it's "skewed" away from a political shift that happened three weeks ago?
I also don't believe that everybody I know is idiosyncratic in the way they view the world. And even if they were, I'd probably just pay attention to the things that are directly relevant to me. So I'd probably misunderstand most of what they say anyway.
cortesoft: This is assuming that ChatGPT had everything needed to do the work. If the plumber was asking specific questions, based on their previous experience and knowledge about what needed to be done, the sister might not have been able to get the same result from her use of ChatGPT that the plumber received.
Experts look things up all the time, because no one can hold all the knowledge of a field in their head. Being an expert means knowing what to look up and how to use the information retrieved from looking something up.
In the plumber example, ChatGPT is going to tell them to do things using the terminology that plumbers know, and tell them to do tasks that plumbers know how to do. The sister would have to continually look up more and more things about how to do basic plumbing tasks, rather than just looking up particular novelties.
pixl97: Which part of being a plumber? Was the house installed with something non-typical? Would you rather have them take an additional 30 minutes looking up their technical manual?Without further knowledge of what was going on it's hard to say why they used ChatGPT.
b2ccb2: > Would you rather have them take an additional 30 minutes looking up their technical manual?Yes
enraged_camel: That, and also the plumber loses their license. So perhaps the solution is professional licensing for software engineers.
jareklupinski: > What would you think about your life? Would you think "this is horrible" or "this is fine"? Or maybe "I enjoy the smell of shit and we're so much better off because we don't have to worry about sunburn"?
id probably start with "who locked us in this sewer?"
SirMaster: > like, is he allowed to just put in any old toilet, or is there a gpf restriction?
And if the LLM gets that wrong? It's his job to know the codes, or how to go to a reliable resource to find out the correct codes.
NiloCK: You know that plumbers charge by the hour, right?
dcre: I've never seen an argument like this that, if true, wouldn't also apply to the cognitive offloading we do by relying on culture, by working with others, or working with the artifacts built by others.
Manuel_D: > In early 2026, the USA prepared to invade Greenland and, therefore, the EU. Only a few months prior to that it was completely unthinkable that the USA would even think about threatening an invasion of Greenland. As AI base models are stuck in the past, they do not easily accept these events as real and often label them as “hypothetical”, “fake news”, or “impossible”. This also affects new models like Gemini 3 Pro, GLM-5 or GPT-5.3-codex.
Isn't this just inherent to any system that takes some time to update? E.g. if a country moves its capital to a different city, then textbooks, maps, etc. are going to contain incorrect information for a while until updated editions are published.
A lot of the complaints about AI are really about the drawbacks of information systems more generally, and the failure modes pointed out are rarely novel. The "Cognitive Inbreeding" effect attributed to AI would also have occurred with Google search, would it not? Lots of people type the same question into Google and read the top results, instead of searching a more diverse set of information sources. It's interesting that the author mentions web search as a way to ameliorate this, when it seems to me that web search is just as capable of causing cognitive inbreeding.
layer8: Or driving. Or working around the house.
contingencies: Strong disagree. The "AI-Assisted Cognition" phrase is loaded.
Would you attempt to, for example, simultaneously modify for available ingredients, number of diners, and time-optimize the prep method for a recipe you've never cooked before if you were following an old-school cookbook? No. You'd have to be a pretty solid chef to try all that on at once.
Using AI, you might branch out confidently into new areas, executing all of these modifications simultaneously, and even adapting the output for a specific audience or language.
This toy example shows an important property of AI as decision support systems, which are well studied in the military domain: using these systems, we build confidence to act in unfamiliar domains, thereby extending our reach. From this experience we can learn more. The fact that the learning may then occur through the experience, i.e. during or after it, rather than beforehand, is secondary. It's still there. The fact we didn't know the language the AI translated into for our chef is totally irrelevant.
Sitting comfortably at the effective apex of millions of years of human cognitive and technological development, with the entire world's knowledge at our fingertips, every day we can extend confidence in novel domains through AI, and enjoy it. We should be feeling pretty damn "developed".
Rote formalism and fixed paths in pedagogy are gone: good riddance. This is the hacker age.