Discussion
johsole: This is a great pdf and well worth the read. I've had a lot of the same questions in my head, and I'm glad to see others are facing these concerns as well.
zer00eyz: > Engineering quality doesn't disappear when AI writes code. It migrates to specs, tests, constraints, and risk management.

> Code review is being unbundled. Its four functions (mentorship, consistency, correctness, trust) each need a new home.

> If code changes faster than humans can comprehend it, do we need a new model for maintaining institutional knowledge?

The humans we have in these roles today are going to suffer. The problem starts at hiring, because we rewarded memorization of algorithms and solving inane brain teasers rather than looking for people with skill at systems understanding, code reading (a distinct skill), and the ability to do serious review (rather than bikeshedding à la tabs vs. spaces).

LLMs are just moving us from hammers and handsaws to battery-powered tools. For decades the above hiring practices asked "how fast can you pound in nails?" rather than "are you good at framing a house, or building a chair?"

And we're still talking about LLMs in the abstract. Are you having a bad time because you're cutting and pasting from a browser into your vim instance? Are you having a bad time because you want sub-token performance from the tool? Is your domain a gray box that any new engineer needs to learn (and LLMs are trained, they don't learn)?

Your model, your language, your domain, the size of the task you're asking to be delivered, and the complexity of your current code base are as big a part of the conversation. Simply put, what you code in and how you integrate LLMs really matters to the outcome. And without this context we're going to waste a lot of time talking past each other.

Lastly, we have 50 years of goodwill built up with end users that our systems are reliable and repeatable. LLMs are NOT that; even I have moments where it's a stumbling block, and I know better. It's one of a number of problems we're going to face in the coming decade. This issue, alongside security, is going to erode trust in our domain.

I'm looking forward to moving past the hype, hope, and hate, and getting down to the brass tacks of engineering. Because there is a lot of good to be had, if we can just manage an adult conversation!
bmurphy1976: @dang this is a very interesting and relevant doc. I think it deserves another chance at making it to the front page.

This is a fairly easy-to-read doc discussing some of the challenges of using AI tooling in a forward-thinking and disciplined way. Coming from Thoughtworks, it also carries a bit of gravitas and legitimacy.

There's good stuff in here. It would be a shame for the larger HN community to miss out on this conversation.
kingkongjaffa: > Coming from Thoughtworks it also gives a bit of gravitas

Why? I thought the opposite. Consultancies, of which Thoughtworks is one, publish thought leadership as marketing material.
Rapzid: > "Where does the rigor go?"

> Engineering quality doesn't disappear when AI writes code. It migrates to specs, tests, constraints, and risk management.

These are generic "thoughts" you can get from any agency pushing AI SDLC. The pages I read through left me wondering if there was even a real retreat.
mlinhares: Same. I'm seeing people have a lot of difficulty working with agents and writing prompts that let an agent go end-to-end on the work. They just can't write prose that explains a problem in a way that lets the agent go off, do the work, and come back with a solution; they can only do the "little change with Claude Code" workflow, and that just makes you less productive.

I don't think the industry is ready or has the manpower to deliver on all the promises being made, and a lot of businesses and people will suffer because of that.
echelon: > practitioners are exploring how to make incorrect code unrepresentable.

I'll say it again and again and again: Rust is the best language for ML right now.

You get super-strict, low-defect code. If it compiles, that's already a strong guarantee in its own way.

Rust just needs to grow new annotations and guarantees, like "nopanic" and "nomalloc", and it would be perfect. Well, that and a macro-stripped mode to compile faster. I'd happily swap AOT serde compilation (as amazing as serde is) for permanent codegen that gets checked in and compiles fast.
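The "incorrect code unrepresentable" idea the parent quotes is often illustrated in Rust with the typestate pattern: encode an object's lifecycle in the type system so that invalid operations fail at compile time rather than at runtime. A minimal sketch (all names here are hypothetical, not from the doc):

```rust
use std::marker::PhantomData;

// Marker types representing the connection's lifecycle states.
struct Disconnected;
struct Connected;

// A connection parameterized by its state; zero runtime cost.
struct Conn<State> {
    _state: PhantomData<State>,
}

impl Conn<Disconnected> {
    fn new() -> Self {
        Conn { _state: PhantomData }
    }

    // Consumes the disconnected handle and returns a connected one,
    // so the old state can't be used again.
    fn connect(self) -> Conn<Connected> {
        Conn { _state: PhantomData }
    }
}

impl Conn<Connected> {
    // `query` exists only on Conn<Connected>: calling it on a
    // disconnected connection is a compile error, not a runtime bug.
    fn query(&self, q: &str) -> String {
        format!("ran: {q}")
    }
}

fn main() {
    let conn = Conn::new().connect();
    println!("{}", conn.query("SELECT 1"));
    // Conn::new().query("SELECT 1"); // does not compile: no `query` on Conn<Disconnected>
}
```

This is the mild version of the claim: it rules out a class of misuse, not all incorrect code, which is why annotations like the "nopanic"/"nomalloc" guarantees mentioned above would extend the idea further.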
dang: Ok, let's give it a try. (Btw, @dang doesn't work reliably - for that you need to email hn@ycombinator.com. I only saw this by accident.)
codethief: I think the original title is better than the current one, though: "The future of software engineering – [Thoughtworks] retreat findings and strategic insights"
voxleone: >> Where does engineering go?

Up the abstraction ladder: we conceive axioms and constraints; we define actors and objects; we direct rules, flows, and sequences, and say when and how each of them lives and dies.

May you live in interesting times (some say this is a curse).
Pasanpr: > The product management side of this equation is equally unsettled. If developers are now thinking more about what to build and why, they are doing work that used to belong to product managers

It's not clear to me why this is true. If LLMs are writing code, why aren't developers simply orchestrating the completion of more features instead of moving up the stack to do product development work? Is there some implication that the existence of LLMs also enables developers to run user studies, evaluate business metrics, and decide on strategy?

Additionally, if PMs can use LLMs to increase velocity in their own work, why not focus on all the things that used to be deprioritized? Why, with the freed-up time, is generating code the best outcome?

These questions likely have different answers depending on organization size, but I'm not sure I understand why orgs wouldn't just do more work in this scenario instead of blending responsibilities. It's not like there's infinite mental bandwidth just because an LLM is generating the code.
drivebyhooting: Because feature development speed often wasn't the bottleneck.