Discussion
Reports of code's death are greatly exaggerated
rvz: From "code" to "no-code" to "vibe coding" and back to "code".What you are seeing here is that many are attempting to take shortcuts to building production-grade maintainable software with AI and now realizing that they have built their software on terrible architecture only to throw it away, rewriting it with now no-one truly understanding the code or can explain it.We have a term for that already and it is called "comprehension debt". [0]With the rise of over-reliance of agents, you will see "engineers" unable to explain technical decisions and will admit to having zero knowledge of what the agent has done.This is exactly happening to engineers at AWS with Kiro causing outages [1] and now requiring engineers to manually review AI changes [2] (which slows them down even with AI).[0] https://addyosmani.com/blog/comprehension-debt/[1] https://www.theguardian.com/technology/2026/feb/20/amazon-cl...[2] https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f77...
suzzer99: > With the rise of over-reliance on agents, you will see "engineers" unable to explain technical decisions who will admit to having zero knowledge of what the agent has done.

I've had to work on multiple legacy systems like this where the original devs are long gone, there's no documentation, and everyone at the company admits it's a complete mess. They send you off with a sympathetic "good luck, just do the best you can!" I call it "throwing dye in the water." It's the opposite of fun programming.

On the other hand, it takes a lot of creativity and cleverness to get the app to do what you want with minimally invasive code changes. So it should be the hardest for AI.
Insanity: While I agree with everything you said, Amazon’s problems aren’t just Kiro messing up. It’s a brain drain due to layoffs, and then people quitting because of the continuous layoff culture. While publicly they might say this is AI-driven, I think that’s mostly BS.

Anyway, that doesn’t take away from your point, just adds additional context to the outages.
idopmstuff: I don't know that people are saying code is dead (or at least the ones who have even a vague understanding of AI's role) - more that humans are moving up a level of abstraction in their inputs. Rather than writing code, they can write specs in English and have AI write the code, much in the same way that humans moved from writing assembly to writing higher-level code.

But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English. There are probably a lot of cases where you could write an instruction unambiguously in English, but it'd end up being much longer because English is much less precise than any coding language.

I think we'll see the same in photo and video editing as AI gets better at that. If I need to make a change to a photo, I'll be able to ask a computer, and it'll be able to do it. But if I need the change to be pixel-perfect, it'll be much more efficient to just do it in Photoshop than to describe the change in English.

But much like with photo editing, there'll be a lot of cases where you just don't need a high enough level of specificity to use a coding language. I build tools for myself using AI, and as long as they do what I expect them to do, they're fine. Code's probably not the best, but that just doesn't matter for my case.

(There are of course also issues of code quality, tech debt, etc., but I think that as AI gets better and better over the next few years, it'll be able to write reliable, secure, production-grade code better than humans anyway.)
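To make the specificity point concrete, here's a toy sketch (Python; the function and the rounding rule are just illustrative, not something from the article). The English request "round the price to the nearest dollar" leaves ties ambiguous, while the code pins the behavior down:

    # "Round the price to the nearest dollar" is ambiguous in English:
    # does 2.50 become 2 or 3? Python's built-in round() uses banker's
    # rounding, so round(2.5) == 2, which may surprise whoever asked.
    from decimal import Decimal, ROUND_HALF_UP

    def round_price(price: str) -> Decimal:
        # The code removes the ambiguity: ties always round up.
        return Decimal(price).quantize(Decimal("1"), rounding=ROUND_HALF_UP)

    print(round_price("2.50"))  # 3

The English version needs an extra clause ("ties round up") to be equally unambiguous, which is the precision gap in miniature.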
pacman128: In a chat bot coding world, how do we ever progress to new technologies? The AI has been trained on numerous people's previous work. If there is no prior art for, say, a new language or framework, the AI models will struggle. How will the vast amounts of new training data they require ever be generated if there is not a critical mass of developers?
zer00eyz: > We have a term for that already and it is called "comprehension debt".

This isn't any different than the "person who wrote it already doesn't work here any more".

> now requiring engineers to manually review AI changes [2] (which slows them down even with AI).

What does this say about the "code review" process if people can't understand the things they didn't write?

Maybe we have had the wrong hiring criteria. The "leet code", brain-teaser (FAANG-style) write-some-code interview might not have been the best filter for the sorts of people you need working in your org today. Reading code, tooling up (debuggers, profilers), and durable testing (simulation, not unit) are the skill changes that NO ONE is talking about, and that we have not been honing or hiring for.

No one is talking about requirements, problem scoping, or how you rationalize and think about building things. No one is talking about how your choice of dev environment is going to impact all of the above processes.

I see a lot of hype, and a lot of hate, but not a lot of the pragmatic middle.
xienze: > This isn't any different than the "person who wrote it already doesn't work here any more".

Yeah, but that takes years to play out. Now developers are cranking out thousands of lines of “he doesn’t work here anymore” code every day.
justonceokay: Most art forms do not have a wildly changing landscape of materials and mediums. In software we are seeing things slow down in terms of tooling changes because the value provided by computers is becoming more clear and less reliant on specific technologies.

I figure that all this AI coding might free us from NIH syndrome and reinventing relational databases for the 10th time, etc.
danielbln: Inject the prior art into the (ever-increasing) context window, let in-context learning do its thing, and go?
deadbabe: My problem is that while I know “code” isn’t going away, everyone seems to believe it is, and that’s influencing how we work.

I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.
idopmstuff: As a former PM, I will say that if you want to stop something from happening at your company, the best route is to come off very positive about it initially. This is critical because it gives you credibility. After my first few years of PMing, I developed a reflex that any time I heard a deeply stupid proposal, I would enthusiastically ask if I could take the lead on scoping it out.

I would do the initial research/planning/etc. mostly honestly and fairly. I'd find the positives, build a real roadmap and lead meetings where I'd work to get people onboard.

Then I'd find the fatal flaw. "Even though I'm very excited about this, as you know, dear leadership, I have to be realistic that in order to do this, we'd need many more resources than the initial plan because of these devastating unexpected things I have discovered! Drat!"

I would then propose options. Usually three, which are: continue with the full scope but expand the resources (knowing full well that the additional resources required cannot be spared), drastically cut scope and proceed, or shelve it until some specific thing changes. You want to give the specific thing because that makes them feel like there's a good, concrete reason to wait and you're not just punting for vague, hand-wavy reasons.

Then the thing that we were waiting on happens, and I forget to mention it. Leadership's excited about something else by that point anyway, so we never revisit the dumb project again.

Some specific thoughts for you:

1. Treat their arguments seriously. If they're handwaving your arguments away, don't respond by handwaving their arguments away, even if you think they're dumb. Even if they don't fully grasp what they're talking about, you can at least concede that agents and models will improve and that will help with some issues in the future.

2. Having conceded that, they're now more likely to listen to you when you tell them that while it's definitely important to think about a future where agents are better, you've got to deal with the codebase right now.

3. Put the problems in terms they'll understand. They see the agent that wrote this feature really quickly, which is good. You need to pull up the tickets that the senior developers on the team had to spend time on to fix the code that the agent wrote. Give the tradeoff: what new features were those developers not working on because they were spending time here?

4. This all works better if you can position yourself as the AI expert. I'd try to pitch a project of creating internal evals for the stuff that matters in your org to try with new models when they come out. If you've volunteered to take something like that on and can give them the honest take that GPT-5.5 is good at X but terrible at Y, they're probably going to listen to that much more than if they feel like you're reflexively against AI.
kstrauser: That’s factually untrue. I’m using models to work on frameworks with nearly zero preexisting examples to train on, doing things no one’s ever done with them, and I know this because I'm deep in the ecosystem around these young frameworks.

Models can RTFM (and code) and do novel things, demonstrably so.
gedy: When I started my professional life in the 90s, we used Visual J++ (Java), and I remember all this damn code it generated to do UIs... I remember being aghast at all the incomprehensible code and "do not modify" comments - and also at some of the devs who were like "isn't this great?"

I remember bailing out ASAP to another company where we wrote Java Swing, and I was so happy we could write UIs directly, with a lot less code to understand. I'm feeling the same vibe these days with the "isn't it great?". Not really!
cactusplant7374: > But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English.

Unless the defect rate for humans is greater than that of LLMs at some point. A lot of claims are being made about hallucinations that seem to ignore that all software is extremely buggy. I can't use my phone without encountering a few bugs every day.
derrak: Maybe you’re right about modern LLMs. But you seem to be making an unstated assumption: “there is something special about humans that allows them to create new things, and computers don’t have this thing.”

Maybe you can’t teach current LLM-backed systems new tricks. But do we have reason to believe that no AI system can synthesize novel technologies? What reason do you have to believe humans are special in this regard?
soumyaskartha: Every few years something is going to kill code and here we are. The job changes, it does not disappear.
suzzer99: For future greenfield projects, I can see a world where the only jobs are spec-writer and test-writer, with maybe one grumpy expert coder (aka code janitor) who occasionally has to go into the code to figure out super gnarly issues.
drzaiusx11: This is already happening; many days I am that grumpy "code janitor" yelling at the damn kids to improve their slop after shit blows up in prod. I can tell you it's not "fun", but hopefully we'll converge on a scalable review system eventually that doesn't rely on a few "olds" to clean up. GenAI systems produce a lot of "mostly ok" code that has subtle issues you only catch with some experience.

Maybe I should just retire a few years early and go back to fixing cars...
suzzer99: Yeah, I imagine it has to be utterly thankless being the code janitor right now when all the hype around AI is peaking. You're basically just the grumpy troll slowing things down. And God forbid you introduce a regression bug trying to clean up some AI slop code.

Maybe in the future us olds will get more credit when apps fall over and the higher-ups realize they actually need a high-powered cleaner/fixer, like the Wolf in Pulp Fiction.
allthetime: I’ve got a “I haven’t written a line of code in one year” buddy whose startup is gaining traction and contracts. He’s rewritten the whole stack twice already after hitting performance issues and is now hiring cheap juniors to clean up the things he generates. It is all relatively well-defined CRUD that he’s just slapped a bunch of JS libs on top of, and it works well enough to sell, but I’m curious to see the long-term effects of these decisions.

Meanwhile I’m moving at about half the speed with a more hands-on approach (still using the bots, obviously), but my code quality and output are miles ahead of where I was last year, without sacrificing maintainability and performance for dev speed.
jedberg: People are doing this now. It's basically what skills.sh and its ilk are for -- to teach AIs how to do new things.

For example, my company makes a new framework, and we have a skill we can point an agent at. Using that skill, it can one-shot fairly complicated code using our framework. The skill itself is pretty much just the documentation and some code examples.
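A stripped-down sketch of the shape of the idea (Python; the file names and prompt wording are hypothetical, not our actual skill format): the "skill" really is just documentation and examples injected into the context before the task.

    # Minimal sketch: a "skill" as docs + examples stuffed into the prompt.
    # File names here are hypothetical placeholders.
    from pathlib import Path

    def build_skill_prompt(task: str, skill_dir: str) -> str:
        docs = Path(skill_dir, "REFERENCE.md").read_text()
        examples = Path(skill_dir, "examples.py").read_text()
        return (
            "You are writing code for a framework not in your training data.\n\n"
            f"Framework reference:\n{docs}\n\n"
            f"Worked examples:\n{examples}\n\n"
            f"Task: {task}\n"
        )

    # The assembled prompt then goes to whatever model or agent you use.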
NewJazz: A framework is different from a paradigm shift or a new language.
jedberg: Yes and no. How does a human learn a new language? They use their previous experience and the documentation to learn it. Oftentimes the way someone learns a new language is they take something in an old language and rewrite it.

LLMs are really good at doing that. Arguably better than humans at RTFM and then applying what's there.
danaris: That's irrelevant.

The claim being made is not "no computer will ever be able to adapt to and assist us with new technologies as they come out."

The claim being made is "modern LLMs cannot adapt to and assist us with new technologies until there is a large corpus of training data for those technologies."

Today, there exists no AI or similar system that can do what is being described. There is also no credible way forward from what we have to such a system.

Until and unless that changes, either humans are special in this way, or it doesn't matter whether humans are special in this way, depending on how you prefer to look at it.
rishabhaiover: > We're not confused with writing because there's nothing mystical about syntactically correct sentences in the same way there is about running code. Nobody is out there claiming that ChatGPT is putting the great novelists or journalists out of jobs. We all know that's nonsense

If you compare great writing to programming, then basically most of the industry vanishes and a few programming poets survive.
adamiscool8: After thousands of years of research we still don’t fully understand how humans do it, so what reason (besides a sort of naked techno-optimism) is there to believe we will ever be able to replicate the behavior in machines?
CamperBob2: > In a chat bot coding world, how do we ever progress to new technologies?

Funny, I'd say the same thing about traditional programming. Someone from K&R's group at Bell Labs, straight out of 1972, would have no problem recognizing my day-to-day workflow. I fire up a text editor, edit some C code, compile it, and run it. Lather, rinse, repeat, all by hand.

That's not OK. That's not the way this industry was ever supposed to evolve, doing the same old things the same old way for 50+ years. It's time for a real paradigm shift, and that's what we're seeing now.

All of the code that will ever need to be written already has been. It just needs to be refactored, reorganized, and repurposed, and that's a robot's job if there ever was one.
rustystump: While I don't disagree with the larger point here, I do disagree that all the code we'll ever need has been written. There are still so, so many new things to uncover in that domain.
rglover: It's only dead to those who are ignorant of what it takes to build and run real systems that don't tip over all the time (or leak data, embroil you in extortion, etc.). That will piss some people off, but it's worth considering if you don't want to perma-railroad yourself long-term. Many seem to be so blinded by the glitz, glamour, and dollar signs that they don't realize they're actively destroying their future prospects/reputation by getting all emo about a non-deterministic printer.

Valuable? Yep. World-changing? Absolutely. The domain of people who haven't the slightest clue what they're doing? Not unless you enjoy lighting money on fire.
derrak: > non-deterministic printer.

I interpret non-deterministic here as “an LLM will not produce the same output on the same input.” This is a) not true and b) not actually a problem.

a) LLMs are functions, and appearances otherwise are due to how we use them.

b) Lots of traditional technologies which have none of the problems of LLMs are non-deterministic, e.g., symbolic non-deterministic algorithms.

Non-determinism isn’t the problem with LLMs. The problem is that there is no formal relationship between the input and output.
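To illustrate point a) with a toy sketch (Python; the logits are a stand-in for a model's output distribution, not a real API): the apparent non-determinism lives in the sampler we bolt on, not in the function itself.

    # Same "model output", two decoding strategies.
    import numpy as np

    logits = np.array([2.0, 1.9, 0.1])   # stand-in for next-token scores
    tokens = ["foo", "bar", "baz"]

    def greedy(logits):
        # Deterministic: the same input always yields the same token.
        return tokens[int(np.argmax(logits))]

    def sample(logits, temperature=1.0, rng=None):
        # Non-deterministic only because we choose to sample without a fixed seed.
        rng = rng or np.random.default_rng()
        p = np.exp(logits / temperature)
        p /= p.sum()
        return tokens[rng.choice(len(tokens), p=p)]

    print(greedy(logits))                                # always "foo"
    print(sample(logits))                                # varies run to run
    print(sample(logits, rng=np.random.default_rng(0)))  # reproducible with a seed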
badc0ffee: You're probably using an IDE that checks your syntax as you type, highlighting keywords and surfacing compiler warnings and errors in real time. Autocomplete fills out structs for you. You can hover to get the definition of a type or a function prototype, or you can click and dig in to the implementation. You have multiple files open, multiple projects, even.

Not to mention you're probably also using source control, committing code and switching between branches. You have unit tests and CI.

Let's not pretend the C developer experience is what it was 30 years ago, let alone 50.
CamperBob2: I disagree that any of those things are even slightly material to the topic. It's like saying my car is fundamentally different from a 1972 model because it has ABS, airbags, and a satnav.
badc0ffee: You said C developers are doing things the "same old way" as always.
picafrost: So much of society's intellectual talent has been allocated toward software. Many of our smartest are working on ad-tech, surveillance, or squeezing as much attention out of our neighbors as possible.

Maybe the current allocation of technical talent is a market failure, and disruption to coding could be a forcing function for reallocation.
oblio: Those are business goals that don't just go away because tech changes.
derrak: The Church-Turing thesis comes to mind. It would at least suggest that humans aren’t capable of doing anything computationally beyond what can be instantiated in software and hardware.

But sure, instantiating these capabilities in hardware and software is beyond our current abilities. It seems likely that it is possible though, even if we don’t know how to do it yet.
sd9: LLMs are very much NIH machines
ffsm8: I'd go one step further: they're going to turbocharge the NIH syndrome and treat every code file as a separate "here".
sophrosyne42: The Church-Turing thesis is about following well-defined rules. It is not about the system that creates or decides to follow or not follow such rules. Such a system (the human mind) must exist for rules to be followed, yet that system must be outside mere rule-following, since it embodies a function which does not exist in rule-following itself, e.g., the faculty of deciding what rules are to be followed.
oooyay: > I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.

You let them play out. Shift-left was similar to this and ultimately ended in part disaster, part non-accomplishment, and part success. Some percentage of the industry walked away from shift-left greatly more capable than the rest, a larger chunk left the industry entirely, and some people never changed. The same thing will likely happen here. We'll learn a lot of lessons, the Overton window will shift, the world will be different, and it will move on. We'll have new problems and topics to deal with as AI and how to use it shifts away from being a primary topic.
noident: Shift-left was a disaster? A large number of my day-to-day problems at work could be described as failing to shift left, even in the face of overwhelmingly obvious benefits.
sophrosyne42: It's not an assumption; it is a fact about how computers function today. LLMs interpolate, they do not extrapolate. Nobody has shown a method to get them to extrapolate. The insistence to the contrary involves an unstated assumption that technological progress towards human-like intelligence is in principle possible. In reality, we do not know.
lifis: You can have the LLM itself generate it based on the documentation, just like a human early adopter would
stravant: Thousands of years? We've only had the tech to be able to research this in some technical depth for a few decades (both scale of computation and genetics/imaging techniques).
derrak: We can keep our discussion about Church-Turing here if you want.

I will argue that the following capacities: 1. creating rules and 2. deciding to follow rules (or not), are themselves controlled by rules.
woeirua: The argument here seems to be “you need AGI to write good code. Good code is required for… reasons. AGI is far away. Therefore code is not dead.”

First, I disagree that good code is required in any sense. We have decades of experience proving that bad code can be wildly successful.

Second, has the author not seen the METR plot? We went from LLMs that can write a function to agents that can write working compilers in less than a year. Anyone who thinks AGI is far away deserves to be blindsided.
anematode: I agree in principle, but the compiler is a terrible example given the amount of scaffolding afforded to the LLMs: literally hundreds of thousands of test cases covering all kinds of esoteric corners.