Discussion
mark_l_watson: I understand the author’s sentiment but I would like to give a counter example: I like to read philosophy, and after I read a passage and think about it, I find it useful to copy the passage into a decent model and ask for its interpretation, or if it is something old, ask about word choice or meaning. I realize that I may not be getting perfect information, but LLM output gives me ideas that are a combination of live web searching and whatever innate knowledge the LLM holds in its weights. Another counter example: I have never found runtime error traces from languages like Haskell and Common Lisp to be that clear. If the error is not clear to me, sometimes using a model gets me past an error quickly. All that said, I think the author is right-on correct that using LLMs should not be an excuse to not think for oneself.
XenophileJKO: I mean it can also depend on scale. I use hundreds of sub-agent instances to do analysis that I just would not be able to do in a reasonable timeframe. That is a TON of thinking done for me.
NitpickLawyer: Yawn. Just another post about high-horse attitudes regarding "muh expertise". And yet the top of the top experts in their fields (Terence Tao, Karpathy, hell, even Linus) are finding ways to make these tools useful for them. That's the crux imo. If you can't find a way to make these tools useful for you, you are the problem, not the LLMs. There's something there, even if currently not much, but there's something there for everyone at this point.
zaphirplane: I was hoping for a deeper article
simianwords: If people used LLMs more we would have fewer instances of misinformation. Lots of comments in social media could easily be dispelled by a single LLM search.
chrysoprace: LLMs can be injected with biases as well. Just look at Grok's responses any time it's tagged in anything mildly political.
xyzal: "If you can't find a way to make today's tech du jour useful for you, you are the problem." Hmmmmm.
simianwords: That’s like saying journalism is not useful because journalists are biased. Bias is useful and inevitable.
chrysoprace: Bad journalists are biased. Good journalists will present a story as factually as possible and as void of bias as possible (of course it's impossible to not have any biases). Opinion pieces can have as much bias as they like as long as they're strictly marked as opinions.
throwaway0665: This tech isn't going away anytime soon. It might become prohibitively expensive for individuals but it's here to stay. It's worth trying to find a use for it while it's cheap.
Nursie: Or it could give out bad information and make everything worse because a subset of people seem to think LLMs are infallible or gods rather than aggregates of the knowledge they’ve consumed.
jatari: Do you really think e.g. Opus 4.6 is less reliable than the average Facebook/X post (which is where most people get their news from today)? People already just blindly believe whatever is put in front of them by the algorithm gods. Even the "@Grok is this real" spam on every X post is an improvement.
xyzal: Incorrect. E.g., Russian networks flood the Internet with propaganda, aiming to corrupt AI chatbots. https://thebulletin.org/2025/03/russian-networks-flood-the-i...
simianwords: If you’ve used LLMs you’d know that they are way more correct than not.
hectormalot: There’s both a quality and a quantity angle. For some work, similar to the philosophy example of GP, LLMs can help with depth/quality. It is additive to your own thinking -> quality approach. For other things I take a quantity approach: having 8 subagents research, implement, review, improve, review (etc.) a feature in a non-critical part of our code, or investigate a bug together with some traces. It’s displacing my own thinking, but that’s ok; it makes up for it with the speed and amount of work it can do -> quantity approach. It’s become mostly a matter of picking the right approach depending on the problem I’m trying to solve.
test001only: > you had to build a model of the world just to survive the tension? The world the author is describing currently has LLMs in it. Irrespective of the author liking it or not, it is here to stay. So to build a model of the world, you would still need to consult an LLM, understand how it can give plausible-looking answers, learn how to effectively leverage the tool, and make it part of your toolkit. It does not mean you stop reading manuals, books or blogs. It just means you include LLMs in that list of things.
4ndrewl: Is "not having to think" a good metric now?
rgoulter: While "I don't have to think, I just get the LLM to do the task" is a bit careless (or a "hype" way of putting it)... I'd reckon it's always been true that you want to think about the stuff that matters and the other stuff to be done for minimal effort. E.g., by using a cryptography library / algorithm someone else has written, I don't need to think about it (although someone has done the thinking, I hope!). Or by using a high level language, I don't need to think about how to write assembly / machine code. Or with a tool like a spell-checker: since it checks the document, you don't have to worry about spelling mistakes. What upsets is the imbalance between "tasks which previously required some thought/effort can now be done effortlessly". -- Stuff like "write out a document" used to signal effort had been put into it.
mr_mitm: Even Knuth is starting to be impressed: https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc...
htnthrow11220: Even a small % of incorrectness quickly produces compounding effects, if you view LLMs as an information source. True or false statements are made with equal confidence, because the LLM can’t distinguish true from false.
mr_mitm: I think it could be. It doesn't have to be one or the other. In my opinion it's entirely comparable to anything else that augments human capability. Is it a good thing that I can drive somewhere instead of walking? It can be. If driving 50 miles means I get there in an hour instead of two days, it can be a good thing, even though it could also be a bad thing if I replace all walking with driving. It just expands my horizon in the sense that I can now reach places I otherwise couldn't reasonably reach. Why can't it be the same with thinking? If the machine does some thinking for me, it can be helpful. That doesn't mean I should delegate all thinking to the machine.
jstanley: > “I’m Feeling Lucky” intelligence is optimized for arrival, not for becoming. You get the answer but nothing else (keep in mind we are assuming that it's a good answer). You don’t learn how ideas fight, mutate, or die. You don’t develop a sense for epistemic smell or the ability to feel when something is off before you can formally prove it.All you're saying is that you can't imagine working on a task that is longer than 1 Google Search.If "I'm feeling lucky" works by magic, that doesn't mean your life is free of all searching, it just means you get the answer to each Google Search in fewer steps, which means the overall complexity of tasks that you can handle goes up. That's good!It doesn't mean you miss out on the journey of learning and being confused, it just means you're learning and being confused about more complicated things.
delusional: That's not true. Any journalist would tell you that picking the stories you choose to cover is just as much a bias as how you choose to cover them. Even then, the specific words you pick, how you ask the interviewees, how you place the story on the page, what you pick as the "related stories": all of that is editorial and reflects an opinion. Good journalists are open about their angle. Bad journalists tell you they are "unbiased" and "just bringing you the facts".
simonw: I mean yeah, it's Grok. They had to work really hard to get their preferred levels of political bias in there.
tony_codes: speaking of chance discovery, what a great personal website! I love the daily virtues from the diary section.
delusional: > I realize that I may not be getting perfect information, but LLM output gives me ideas that are a combination of live web searching and whatever innate knowledge the LLM holds in its weights. I don't mean to be judgemental. It's possible this is a personal observation, but I do wonder if it's not universal. I find that if I give an inch to these models' thinking, I instantly become lazy. It doesn't really matter if they produce interesting output, but rather that I stop trying to produce interesting thoughts because I can't help wondering if the LLM wouldn't have output the same thing. I become TOO output-focused. I mistake reading an interpretation for actually integrating knowledge into my own thinking; I disregard following along with the author. I love reading philosophy as well. Dialectic of Enlightenment profoundly shaped how I view the world, but there was not a single part of that book that I could have given you a coherent interpretation of as I read it. The interpretations all come now, years after I read it. I can't help but wonder if those interpretations would have been different, had my subconscious been satiated by cheap explanations from the lie robot.
simianwords: Try finding an example from Grok that proves this corruption works.
po1nt: I don't think the argument is correct. A reasoning LLM will check itself and search multiple sources. It's essentially doing the same mental process as a human would. Also, consulting multiple LLMs completely breaks this argument.
heeton: The author’s central point is that an LLM answer “is optimized for arrival, not for becoming” (to paraphrase from the Google “Lucky” part). So a reasoning LLM that does the comparisons and checks “like a human” still fails the author’s test. That said, this still feels like a skill issue. If you want to learn, see opposing views, and gather evidence to form your own opinions, LLMs can still help massively. You just have to treat them as research assistants instead of answer providers.
chrysoprace: In the case of publications we luckily have such fantastic resources as mediabiasfactcheck[0] to keep their bias in check and to keep them factual.LLMs are much harder to fact-check because they can make anything up based on their training data and weights without sources.[0] https://mediabiasfactcheck.com
perks_12: I google a lot (or rather, Kagi). I loved to explore the web when I was younger. But over time I lost any interest in trying to gather informational bits from increasingly shittier websites designed to have more ads and hide relevant information for as many ad slots as possible. These days I hit the quick answer button inside Kagi more often and just accept that I might have some false information in there. If it is critical to be right, I usually consult primary sources directly anyway.
ChrisMarshallNY: For myself, I’m very much a “results” guy. Have been, for all my career. I’ve been shipping (as opposed to “writing”) software for most of my adult life. People seem to like the stuff I make. I’m currently working on my first major project that incorporates heavy LLM contributions. It’s coming along great. I started with machine code and individual gate ICs, so my knowledge goes way down past the roots. I don’t miss it, at all. Occasionally, my understanding of stuff has been helped by that depth of experience, but, for the most part, it’s been irrelevant. It’s a first-stage booster, dropping back into the atmosphere. I will say that my original training as a bench tech has been very useful, as I’m good at finding and fixing bugs, but a lot of my experience is in the rear-view mirror. I have been routinely googling even the most basic stuff, for many years. It hasn’t corroded my intellect (yet), and I’m doing the same kind of thing with an LLM. Not being sneered at by some insecure kid is nice.
antonvs: To me this reads like “I don’t want to be able to learn faster.”The downside of the internet is that we get to see people agonizing over their inability to adapt to change.
stavros: Agreed, all this "but if you don't need certain skills any more, you'll lose them!" is tiring, and even more tiring because it's missing the entire point: yes, because I don't need them any more!It feels like I'm reading an article crying "if you buy a car, you will lose your horse-shoeing skills!" every day lately.
XenophileJKO: I guess, how often do you pay someone to fix your car? Repair something in your house? Give you financial advice? Those are all things for which many people outsource their thinking to other people.
jmfldn: I'm somewhere in between. I'm excited about building more things faster and extending my capabilities. But I also love thinking about the underlying language, runtime, algorithms, the wider system. I want LLMs to enhance this for me, I want my understanding to go up as I write less code. It's also key to my job as a lead that I maintain understanding of the system for debugging, security etc.So if I can do both with these tools, then great. I want to cognitively offload in a way that allows me to focus on the important bits. And I'm writing instructions to the LLM to help me do that eg 'help teach me this bit'. A builder and tutor at once.
siquick: > These days I hit the quick answer button inside Kagi more often. Just in case you didn’t know, you can append ? to any query and get a quick answer straight away.
bsza: IME, even when an LLM is right, a few follow-up questions always lead to some baffling cracks in its reasoning that confirm it has absolutely no idea what it's talking about. Not just about the subject but basic common sense. I definitely wouldn't call it the "same mental process" a human does. It is an alien intelligence, and exposing a human mind to it won't necessarily lead to the same (or better) outcome as learning from other humans would.
v3xro: > Not being sneered at by some insecure kid is nice.How very adult of you.
ChrisMarshallNY: I have found that having a humble, generous, and non-caustic approach to other people, has been very good for my personal mental health (and career).I have spent my entire career, being the dumbest guy in the room, and I’m not exactly a dunce. It can sometimes be quite humbling, but I’ve had great opportunities to learn.People will often be willing to go out of their way to help you understand, if you treat them with respect; even if they are being jerks.Life’s too short, to be spending in constant battle.
v3xro: No, you outsource it because it's not your core competency. I think humans should be able to do anything and not narrowly specialise as narrow specialisation leads to tunnel vision. Sometimes you need to outsource to someone because of legal reasons (and rightly so, mostly because the complexities involved do require someone who is a professional in that area). Can some things be simplified? Of course they can, and there are many barriers that prevent such simplification. But it's absolutely insane to say - nah, we don't need to think at all, and something else can do all the work.
ChrisMarshallNY: Great approach. I usually use ChatGPT, as a chat (as opposed to an agent). It explains everything quite well (if sometimes a bit verbose). Today, I am going to do an experiment: I’ll be asking it to rewrite the HeaderDoc comments in one of my files, so it generates effective DocC documentation. I suspect the result will be good.
kator: The "calculator ruined the world" argument was actually studied to death once the panic subsided. Large meta-analyses of 50 years of data show it was mostly a non-problem. Students using calculators generally developed better attitudes toward math and attempted more complex problems because the mechanical drudgery was gone.The only real "catch" researchers found was timing. If you introduce them before a kid has "automaticity" (around 4th grade), they never develop a baseline number sense, which makes high-level math harder later on.It's a pretty clean parallel for LLMs. The tool isn't the problem, but using it to bypass the "becoming" phase of a skill usually backfires. If you use an LLM before you know how to structure an argument or a block of code, you're just building on sand.
kolinko: For me it's the opposite - sure, for many outputs I don't need to think, but then I end up thinking on a higher level, and doing even more work.An analogy would be - if GPS allows you to not worry about which turn to take, you can finally focus on where you want to get.
n0on3: Would you be aware of it if that was the case? I don’t mean this to be hostile or anything, but the scenario in which one does not notice it oneself, and it goes unnoticed or silently accepted externally, does not seem too far-fetched to me.
kolinko: I think the author just doesn't know how to use LLMs well. "Because what would be missing isn’t information but the experience. And experience is where intellect actually gets trained." From my experience, LLMs don't cause this effect. You still get to explore a ton of dead ends and whatnot, just on a much higher level. "You get the answer but nothing else (keep in mind we are assuming that it's a good answer)." On the contrary here: you get to ask a ton of followup questions easily, something you don't get to ask books. "I never so far asked GPT about something that I'm specialized at, and it gave me a sufficient answer that I would expect from someone who is as much as expert as me in that given field." LLMs are at a junior-to-mid level in any field (and going higher every year), not senior-master. Is that anything new? Their strength is, among other things, in making connections between fields, and also in their availability. If you have the option to talk to a specialist in your field who has time 24/7 to discuss ideas with you, that's great, but also highly unusual. If you don't have such a person, an LLM that is junior-to-mid is way better than plain books.
cmiles8: I see the SDE pushback on LLMs, but most of it is unfounded. Like any new tool, if used irresponsibly, of course bad things can happen. Most of the backlash from devs seems rooted in: 1. It causes a step change in productivity for those that use it well, and as a result a step change in the expectations on productivity for dev teams. Folks simply expect things to be done faster now, and that’s annoying the folks that have to do the building. 2. It’s removed much of the mystique of dev. When the CEO is vibe coding legit apps on their own, suddenly the SDE team is no longer this mysterious oracle that one can’t challenge or question because nobody else can do what they do. Now everyone can do what they do. Not to the same degree, yes, but it’s completely changed the dynamic, and that’s annoying some devs. SDEs aren’t going away, but we will likely need fewer moving forward, and the expectations on how long things take have changed forever. Like anything in tech, we’re not going back to the old way, so you either evolve or you get cycled out. Honestly, some devs right now look like switchboard operators yelling at the ability of people to self-dial a telephone. Did they do it better? Yeah… but the switchboard isn’t coming back.
ChrisMarshallNY: Probably, yes. If we don’t have a clear understanding of the fundamentals, it can make life difficult. My personal experience has been “layers,” with the new layer building upon, and often subsuming, the substrate.But it does mean that there’s limits. A lot of folks start at points higher than mine, and can go much further than me.That’s fine; as long as I understand and accept my limitations, as well as my strengths.It’s a long story, but I spent the majority of my career at some pretty demanding and high-functioning places. I had to learn to be self-critical, without being self-abusive.Also, I have terrible memory. I suspect it comes from … experimentation … in my youth. Doesn’t make me stupid, but has given me a talent for leveraging reference materials.
rsfern: I like this analogy of always choosing “I’m feeling lucky” on Google, I feel like it clarifies a boundary between information retrieval and evaluation that gets blurred by language model summarizations. I’ve been frustrated with the LLM summary at the top of the Google search results for scientific topics because often the sources linked to don’t actually contain the information the summary is citing them for. Then I have a side quest of finding the right backing literature or deciding the summary was just wrong in the first place
demorro: Seconding this. Revelation happens subtly, often far removed from what you might later unpick as its "primary source". Immediate interpretations tend to be plastic and shallow.
simianwords: That’s the same as in normal journalism so I’m not sure why you pick LLMs as particularly bad
simianwords: And biased journalism is still useful and informational.
maplethorpe: > A tool can be efficient and still be intellectually corrosive, not because it lies all the time, but because it lies well enough. Its smoothness hides uncertainty, which is important unless you want intellect-rot.I keep seeing sentiments like this, but to me they're still very much stuck in the past.We once needed to develop our intellects in order to solve problems. It was a necessary part of the process. Solving a particular problem would flood our brains with dopamine, and we would feel good for achieving our goal, and thus continue to develop our intellect in the hope of achieving a similar rush in the future.Now that we have a machine that can solve our problems for us, intellect just plain isn't necessary anymore. We can solve our problems immediately and skip the roundabout intellect-building process entirely. That's a liberating thing.
simianwords: I don’t think you are wrong, but isn’t it obvious to pick and choose the cases where you might want to use LLMs vs doing the work? Seems obvious to me. Sure, if you want to read a novel, don’t ask an LLM about it. When you want to learn something quickly, use LLMs. But you would know how much compression is going on. This is something we do routinely anyway. If I want to know something about taxes, I read the first Google result and get the gist of it. But I’m still better off and didn’t need to take a full course.
carrychains: Same can be said of search engines, encyclopedias, or wikis compared to seeking out books, journals, and other source material. If you don't sit there for 8 hours in a library to find the same information on your own, you've missed out on the experience. It's a standard Luddite's argument. Tools of any kind that enhance efficiency have always actualized lazy outcomes. It has always been the human responsibility to, not only rely on their best effort, but to figure out what actually encompasses their best possible effort.
hexaga: is this satire
carlosjobim: Why would you even read philosophy if you're then consulting a third party for interpretation? That is the definition of meaningless.That is like listening to music and asking somebody if you liked a song.
tylervigen: I would say it's more like enjoying a song so much that you choose to listen to a cover of that song.
disinz: > in my experience LLM code is actually better documented. Most LLM documentation is extremely poor unless you literally can’t understand code. It’s communicating obvious things that the code makes apparent. Good documentation covers what’s not obvious from reading the code. I’m amazed at how many devs think running Claude init is a good idea. I’ve noticed a constant trend throughout my career that low-performing devs generate and are impressed with this kind of documentation. > and structured than what most dev teams produce. Left alone, LLM structure is terrible. Speed-running the cheapest outsourced slop is not a good thing unless you’re already paying for outsourced slop. Huge win in that case. LLMs are median equalizers. I think a large contingent of devs (especially given the money grab around 2021) are not that good, and LLMs really are a step improvement for them. I’m not denying there are use cases for competent devs, but so far that seems much narrower (albeit insanely impressive) than the benefit low performers (think) they receive. This fact is important to keep in mind when discussing the future of dev.
simianwords: Any examples of political bias?
sph: It might also be hard to grasp for most of us, used to constant stimulation and lacking space for contemplation and incorporation of information (I recommend the works of philosopher Byung-Chul Han on the matter), with yet unknown effects on our psyche and creative output. It takes days or weeks to sit and digest novel viewpoints; asking a machine to skip all that work for us is just another example of seeking instant gratification. I have no time to think, do it for me, so I can scroll to the next post already.
Rapzid: There is definitely a sort of undercurrent online to a lot of the claims, and animosity towards anyone questioning LLMs or even exploring the downsides. Tons of anger and vitriol; on HN now even. I get the sense a lot of new and middling engineers view this as some shortcut to success. Angry about getting passed over for promotions, pushback on their PRs, being questioned about why things are taking so long (hint: because they weren't actually putting in a full day's work), or not making the grade for FAANG. Now here is the LLM, and it's their chance to get stuff done without having to do much work. No need to have put in the work to learn and study. Now all those who were keeping them down before are the REAL problem, because they are too slow, too out of date. What a gift! It's the great mediocrity uprising! I haven't quite wrapped my head around it, but there is definitely some weird social stuff going on.
mr_mitm: Nobody said "we don't need to think at all" though. The statement was "not having to think", or rephrased: "being able to choose how much to think or what to think about".
em-bee: but the point is that the mental process should be done by yourself. it is the difference between finding the answer myself or asking my classmate to just share his answer with me. in the latter case i am not learning what my classmate learned.