Discussion
Greg Knauss Is Losing Himself
gtowey: LLMs can't be strategic because they do not understand the big picture -- that the real work of good software is balancing a hundred different constraints in a way that produces the optimal result for the humans who use it.

It's not all that different from the state of big-corp software today! Large organizations with layers of management tend to lose all ability to keep a consistent strategy. They tend to go all in on a single dimension, such as ROI for the next quarter, and miss the bigger picture. Good software is about creating longer-term value and takes consistent skill & vision to execute.

Those software engineers who focus on this big-picture thinking are going to be more valuable than ever.
pixl97: > LLMs can't be strategic because they do not understand the big picture

While I do tend to believe you, what evidence-based data do you have to prove this is true?
gtowey: > While I do tend to believe you, what evidence-based data do you have to prove this is true?

IMO the onus is on those claiming they can be strategic to prove it. Otherwise you're asking me to prove a negative.
bee_rider: Why can’t LLMs understand the big picture? I mean, a lot of companies have most of their information available in a digital form at this point, so it could be consumed by the LLM.

I think if anything, we have a better chance in the little picture: you can go to lunch with your engineering coworkers or talk to somebody on the factory floor and get insights that will never touch the computers.

Giant systems of constraints, optimizing many-dimensional user metrics: eventually we will hit the wall where it is easier to add RAM to machines than humans.
troupo: > Why can’t LLMs understand the big picture?

Because LLMs don't understand things to begin with.

Because LLMs only have access to source code and whatever .md files you've given them.

Because they have biases in their training data that overfit them on certain solutions.

Because LLMs have a tiny context window.

Because LLMs largely suck at UI/UX/design, especially when they don't have reference images.

Because...
bee_rider: > Because LLMs don't understand things to begin with.

Ok, that’s fair. But I think the comment was making a distinction between the big picture and other types of “understanding.” I agree that it is incorrect to say LLMs understand anything, but I think that was just an informal turn of phrase. I’m saying I don’t think there’s something special about “big picture” information-processing tasks, compared to in-detail information-processing tasks, that makes them uniquely impossible for LLMs.

The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.
vonneumannstan: Task time horizons are improving exponentially, with doubling times around 4 months per METR. At what timescale would you accept that they "can be strategic"? There's little reason to think they won't be at multi-week or multi-month time horizons very soon. Do you need to be strategic to complete multi-month tasks?
slibhb: > It was a nice little feature that I knew exactly how to do, but I hadn’t prioritized getting done yet because there were a bunch of other things on my plate. But with a little assist, it was quick to implement.

Exactly how I feel. AI has allowed me to work on projects that I've wanted to work on but didn't have the time/energy for.
butILoveLife: > Good software is about creating longer term value and takes consistent skill & vision to execute.

> Those software engineers who focus on this big picture thinking are going to be more valuable than ever.

Not to rain on our hopes, but AI can give us some options and we can pick the best. I think this eliminates all middle-level positions. Newbies are low cost and make decisions that are low stakes. The most senior of seniors can make 30 major decisions per day when AI lays them out.

I own a software shop, and my hires have been interns and people with the specific skill of my industry (mechanical engineers). 2 years ago, I hired experienced programmers. Now I turn my mechanical engineers into programmers.
zaphar: So what you are saying is that you removed the people who can make the decisions that keep your software maintainable, and kept the people who will slowly, over time, cause your software to become less maintainable? I'm not sure that tradeoff is a good one.
gtowey: I would say that it's very germane to my original statement. Understanding is absolutely fundamental to strategy, and it is pretty much why I can say LLMs can't be strategic.

To really strategize you have to have a mental model of, well, everything, and be able to sift through that model to know which elements are critical and which are not. And it includes absolutely everything -- human psychology, to understand how people might feel about certain features or usage models; the future outlook for which popular framework to choose and whether it will be as viable next year as it is today; the geography and geopolitics of which cloud provider to use; the knowledge of human sentiment around ethical or moral concerns; the financial outlook for VC funding and interest rates. The list goes on and on. The scope of what information may be relevant is unlimited in time and space. It needs creativity, imagination, intuition, inventiveness, discernment.

LLMs are fundamentally incapable of this.
cc-d: Are you the wallflower from the legendary fugg efnet rap video? https://www.youtube.com/watch?v=evBderwVKKE
gtowey: Can an LLM give you an upfront estimate that a task will take multiple months?

Can it decide intelligently what it would have to change if you said, "do what you can to have it ready in half the time"?
troupo: > I’m saying I don’t think there’s something special about “big picture” information processing tasks, compared to in-detail information processing tasks, that makes them uniquely impossible for LLM.

LLMs can do neither reliably, because to do that you need understanding, which LLMs don't have. You need to learn from the codebase and the project, which LLMs can't do.

On top of that, to have the big picture LLMs would have to be inside your mind. To know and correlate the various Google Docs and Figma files, the Slack discussions, the various notes scattered across your system, etc.

They can't do that either because, well, they don't understand or learn (and no, clawdbot will not help you with that).

> The other objections seem mostly to be issues with current tooling, or the sort of capacity problems that the LLM developers are constantly overcoming.

These are not limitations of tooling, and no, LLM developers are not even close to overcoming them, especially not "constantly". The only "overcoming" has been the gimmicky "1 million token context", which doesn't really work.
kshri24: > I’m starting to think that’s going to be personality and feel and polish, but turned up a notch. That’s what I used to do when I started writing apps, but in some ways I have really toned it down in favor of OS alignment.

That's all that is really required. I mean, look at the Microslop fiasco. They ruined a perfectly good editor, Notepad, with AI slop. But this is not reflected in their sales. They are still showing record revenues.

Just because a competing product exists does not mean your product is suddenly obsolete. There will always be people who will want to buy (provided the market is not oversaturated). Because that is how humans do things. AI won't change that behavior overnight [1]. Look around you and you will see that every product you hold in your hand has at least 5-10 competitors.

[1] Think about all the things that are still not computerized and which require you to fill out some form of paperwork or other. We have had computers for nearly 6 decades now. We STILL have physical forms that we fill out from time to time. Computerization was touted to revolutionize this, and yet here we are, still short of 100% digitalization. The same will happen with AI as well. There is an initial burst of excitement (which is the phase we are in) until reality sets in, and that's when people will learn how to best use the technology. What you are seeing today (vibe coding et al.) is NOT IT.
MarcelOlsz: This song bangs, looping this all day. What a throwback. Reminds me of ytcracker/digitalgangster days. Also reminds me of Das Racist. Thanks for posting this lol.
butILoveLife: This might have been true pre-agent AI programming, but honestly the code seems better than ever. It finds edge cases better than me.

I know... I know, buddy. The world changed and I don't know if I'm going to have a job.
daveguy: Finding edge cases is completely orthogonal to creating maintainable software. Finding edge cases ~= identifying test suites. Making software maintainable ~= minimizing future cost of effective changes.Ignoring future maintenance cost because test suites are easier to create seems like disjointed logic.
butILoveLife: Im not even sure we will need maintain software. I can basically have AI rewrite entire code bases in an hour, including testing.

Have you used AI agents? Specifically with SOTA models like Opus.

I talked like you 3 weeks ago. But the world changed.
hobs: And the humans downstream of this random reorganization of things at will -- how do they manage it?

If it's AI agents all the way down, it's commoditization all the way down; if humans have to deal with it, there's some sort of cost for change even if it's 0 for code.
brbrodude: So now we're supposed to become the idea guy? Goddamnit.
zaphar: I'm every bit as immersed in this as you are. I've been developing my own custom claude code plugins that allow me to delegate more and more to the agents. But the one thing the agent is not reliably doing for me is making sound architectural choices and maintaining long-term business context and how it intersects with those architectural choices.

I tried teaching all of that in system prompts and documentation, and it blows the context window up to an unusable size. As such, the things that I, as a highly experienced senior engineer, was expected to do pre-agents, I am still expected to do.

If you are eliminating those people from your business, then I don't know that I can ever trust the software your company produces, and thus how I could ever trust you.
aspenmartin: > making sound architectural choices and maintaining long term business context and how it intersects with those architectural choices.

I completely agree with you, but this is rapidly becoming less and less the case, and it would not at all surprise me if even by the end of this year it's just barely relevant anymore.

> If you are eliminating those people from your business then I don't know that I can ever trust the software your company produces and thus how I could ever trust you.

I mean, that's totally fine, but do realize that many common load-bearing enterprise and consumer software products are a tower of legacy tech debt and junior engineers writing terrible abstractions. I don't think this "well, how am I going to trust you" from (probably rightfully) concerned senior SWEs is going to change anything.
switchbak: "Im not even sure we will need maintain software" (sic) - I'm not sure what your specific background is, but with a statement like that you lose all legitimacy to me.
aspenmartin: Writing's on the wall, it is true: tech debt will no longer be a thing to care about.

"But who will maintain it?" A massive, massive question, rapidly becoming completely irrelevant.

"But who will review it?" Humans, sure, with the assistance of AI. The writing is also on the wall there: AI will soon become more adept at code review than any human.

I can understand "losing all legitimacy" being a thing, but to me that is an obvious knee-jerk reaction from someone who is not quite understanding how this trend curve is going.
sp1nningaway: Yes! "Does an AI know how to do that? Does a coding assistant know that an app is really a giant collection of details?"

There are just so many small decisions that add up to a consistent vision for a piece of software. It doesn't seem like LLMs are going to be able to meaningfully contribute to that in the near future.

I tried vibecoding my own workout tracker, but there were so many small details to think through that it was frustrating. I gave up and found an app that is clearly made by a team of experienced, thoughtful people, and AI can't replicate the sheer thoughtfulness of every decision that was made to create this app. The inputs for reps/sets, the algorithms for adjusting effort on the fly, an exercise library with clear videos and explanations; there's just no way to replicate that without people who have been trainers and sport scientists for decades.

LLMs can help increase the speed at which these details turn into something tangible, but you definitely can't "skip all that crap and just jump to the end and get on with it."
pixl97: Saying the tiger has to prove it can eat you is not a great strategy to survive a tiger attack.
bigstrat2003: Well so far the tiger faceplants in an embarrassing fashion every time it tries to eat someone. So I'm not really worried about that.
h3lp: Greg mentions discipline and vision as determinants of successful software, which is correct, but I think he misses another aspect of vision: the ability to attract and crystallize a community around a project. Arguably, the most successful software thrives in the long term because it has a team of people who inspire each other, fill in with complementary talents, and provide continuity.
bigstrat2003: New account... singing the praises of how AI "changed everything" in the past few weeks... my money is on this being a shill.
allreduce: Honestly I feel like something important was yanked from under my feet. Computers were always an escape for me, then a starting career, something that helped me get started in life. Wow, those things I learned while spending nights making things move on the computer are actually worth something. And now a lot of it is just gone, worthless, right when I started having a bit of success. That wasn't supposed to happen.

Not that I can complain much, worse things have happened to better people, bla bla. But it's disorienting. I still have non-automatable skills and enjoy learning, but who says they are not going to come up with Claude Opus 4.9 or something, and it turns out it can do that too, ha ha.

How is a young person supposed to establish themselves in this new world?