Discussion
Probabilistic engineering and the 24-7 employee
tra3: > Agents are opening pull requests, reviewing each other's work, and closing them without a human ever touching the keyboard, with a continuously live log-monitoring loop to rapidly fix issues. I know Gas Town made a splash here a while back, and some colleagues promote software factories, but I haven't seen much real output. Have any of you? I prefer the guided development approach, where it's a pretty detailed dialog with the LLM. The results are good, but it's hardly hands-off. If I squint I can almost see this fully automated development life cycle, so why aren't there real-life examples out there?
jcims: No idea how automated it is, but it's clearly accelerated since last Dec: https://code.claude.com/docs/en/changelog
notpachet: Counterargument. The author is primarily looking at AI trend lines. Let's say our industry continues moving along alternate, equally compelling, trend lines: increasing global volatility, chaos in the energy markets, growing likelihood of great-power conflict this century, climate collapse, mass migration, societal unrest, yada yada. What happens to all of these AI-native companies if the AI bubble is not able to survive in those conditions? If your current development process is built on the metabolic equivalent of 400 kg of leaves per day [0], then when the allegorical asteroid hits, you're going to be outperformed by smaller, nimbler companies with much lower resource requirements. Those companies may be better suited for survival in hostile macro conditions. In other words, I think a lot of companies believe they're trimming their metabolic fat by replacing engineers with AI. Lower salary costs! But at the same time, they're also increasing their reliance on brittle energy infrastructure that may not survive this century. (Not to mention the brittleness of the semiconductor fabrication pipeline, RAM availability, etc.) [0] https://en.wikipedia.org/wiki/Apatosaurus
grebc: The one thing that's true in that article is that the output of bad coders/programmers/developers/engineers is certainly increasing. Good luck to anyone cleaning up the mess.
Flux159: I think the reason we're not seeing many examples yet is that the full loop doesn't work completely autonomously yet. There's still a human in the loop at some critical points, specifically testing against a spec (runtime testing if, say, you're working on a web or mobile app, before shipping to users). LLMs can do compile-time testing and validation, run unit tests, and can write your end-to-end tests, but if you're shipping software to users, there's still a human somewhere involved. This isn't even mentioning marketing and actually getting your software into the hands of users, which, while it can be automated, is still sloppy when done with AI.
jackdoe: God damn metanoia. I feel like the internet is programming me. At this point it is impossible to tell if AI writes like people or people write like AI.
grebc: Tim’s definitely artificial.
jnpnj: I've personally noticed that I'm starting to use some LLM idioms ("it's not just X, it's Y") and I don't like it. I'm actually trying to stop using computers and read books instead, to replenish my mind with more diverse idioms.
jackdoe: Same. I also try not to read Claude's output that much; I have a copy of Gibson's Mona Lisa and just open it while it is thinking. For music, and even for CS stuff, I search with before:2022 on YouTube. But the ship has sailed :) There is no hiding from it. Of course the content we consume modifies us, but now everybody "reads" the same book, whatever they read.