Discussion
Why are executives enamored with AI but ICs aren’t?
kerblang: I'm not gonna say "incorrect" like the absolutists. It's an interesting hypothesis, at least. But I will insist that executives are more driven by FOMO than a teenager.
comrade1234: What's an IC?
hoppyhoppy2: It's explained in the first sentence of the article.
exolymph: People who will get paid more if AI eliminates jobs (in theory, anyway — execs aren't necessarily owners) versus people whose jobs will be eliminated.
AnimalMuppet: No, execs aren't owners, but... if an exec can deliver the same or better results with fewer employees, aren't they a better exec? And if so, aren't they worth more money? (Yeah, I know, there's lots of instances of execs who got paid huge amounts of money and delivered abysmal results...)
paxys: You must be living in a different universe if you think ICs aren't enamored by AI. Every developer I know basically can't operate now without Claude Code (or equivalent).
afavour: I hope that’s exaggeration because being unable to operate without it means you’re going to do a terrible job of reviewing the code it’s producing.
budman1: Individual Contributor. Directly creates work, not a supervisor.
pron: As someone who's both an IC and leads other developers, I disagree with the explanation. As a technical lead, I can much better predict the quality of the outcome with people than with LLMs. As a programmer, I am actually more impressed with AI agents, but in an informed and qualified way: their debugging ability wows me; their coding ability disappoints and frustrates me. I think that the simple explanation for why executives are so hyped about AI is simply that they're not familiar with its severe limitations right now. For example, Garry Tan seems to really believe he's generating 10KLOC of working code per day; if he'd been a working developer he would have known he isn't.
sigbottle: What? Doesn't this boil down to "people like people who reliably get results"? I.e., we live in a complicated, nondeterministic world but we try to make it as deterministic as possible - except for some reason you focus on the nondeterministic part for managers and the "deterministic" part for engineers. I'm not even sure determinism is a good axis for analyzing this problem. It also smells strongly of concept creep - do you count "moving up the abstraction stack" as "nondeterminism" too?
withinboredom: The funny thing is that AI can probably replace the exec’s job before it can replace a dev’s job.
internet2000: ICs are too.
SirensOfTitan: I think executives are excited about AI because it confirms their worldview: that the work is a commodity and the real value lies in orchestration and strategy. It doesn't help that the west has a clear bias wherein moving "up" is moving away from the work. Many executives often don't know what good looks like at the detail level, so they can't evaluate AI output quality.
try-working: I'm an IC and I love it. Executives have the wrong concept of AI. For them it's chat + magic, and then it does everything. You can't work with people who have incorrect concepts about how the world works. Best ignore them.
kubb: You're right, execs keep trying to fit the LLM square peg into the "intelligent agent" round hole. Developers use it for grokking a codebase, for implementing boilerplate, for debugging. They don't need juniors to do the grunt work anymore; they can build and throw away, and the language and technology moats get smaller. The value of low-level managers, whose power came from having warm bodies to do the grunt work, diminishes. The bean counters will ask: when does it pay for itself? Will it? IDK, IDC.
fcarraldo: AI allows executives to spend R&D to create a flywheel which builds more, faster, without hiring more. It makes every individual employee able to deliver more. ICs dislike this because it raises expectations and puts the spotlight on delivery velocity. In a manufacturing analogy, it’s the same as adding robots that enable workers to pack twice as many pallets per day. You work the same hours, but you’re more tired, and the company pockets the profits. Software Engineers are experiencing, many for the first time in their careers, what happens when they lose individual bargaining power. Their jobs are being redefined, and they have no say in the matter - especially in the US, where “Union” is a forbidden word.
booleandilemma: Why would ICs be enamored with something quite literally designed to replace them?
ackshuallytho: Devs think it will save time and execs think it will save money. But because time is money, I think all the benefits go to the dev. The exec still needs the dev regardless.
xixixao: What do you think the exec job is? What do they do every day, every working hour? And how will AI replace that?
01100011: What do you work on? I find people tend to omit that on HN, and folks dealing with different roles end up yelling at each other because those details are missing. Being an embedded sw engineer writing straight C/ASM is, for instance, quite different from being a frontend engineer. AI will perform quite differently in each case.
orochimaaru: And one executive talks to other executives, not to their engineers. I think this is more peer pressure than anything else.
kolinko: Since the previous Opus and Claude Code, I've found I don't need to read the code any more. Architecture overview, sure, and testing, but not reading the code directly. The same goes for my friends, from various subfields of programming.
mystraline: IC is a strange relabeling of "worker". When you analyze this as "management loves AI" and "workers hate it", it goes right back to "who owns the means of production?" and can be clearly seen through Marx's critique.
tayo42: IC differentiates a lot more than worker vs. non-worker. Middle management, and even a level or two up, aren't anything special. IC can refer to people leading, without direct reports, making $500k+ in comp.
strange_quark: Reads like an extended slop LinkedIn post. The author poses a question with an obvious answer yet answers with the most galaxy brain take possible while dropping in some academic concepts to make themselves sound like a thought leader despite probably only taking an intro class in college 10+ years ago.
MarkSweep: In addition to the reasons in the article, one thing I’ve noticed among some executives and product managers is that their experience using LLM coding tools causes them to lose respect for human software engineers. I’ve seen managers lose all respect for engineering excellence and assume anything they want can be shat out by an LLM on a short deadline. Or assume that because they were able to vibe-code something trivial like a blog, they don’t need to involve engineers in the design of anything; rather, engineers should just be code monkeys who follow whatever design the product managers vibed up. It is really demoralizing to be talked to as if the speaker is prompting an LLM.
yalogin: The bigger question is: if AI cuts development time by 10x (assume so for this conversation) and products are released immediately, will companies keep pushing products/functionalities out every week/month? They still have to wait and see adoption, feedback, etc. to see if it works or not. Sure, AI speeds up development, but to what end? It’s not like Meta is going to compress 5 years of Instagram features into 1! No one has the pipeline built up. So I'm not sure how it fits into the overall company strategy. It’s only helping to fire people for now, is that it?
bpt3: It's not. Most developers are pretty bad at their job, and already can't review code very effectively. They just create even more slop currently, which will be the case until someone realizes they aren't needed to produce slop at all.
indistinction: AI has freed me from a vicious cycle (that I had been corralled into as an explicit attrition tactic, which almost ended with me being used non-consensually for reproductive purposes) by not simply eliminating my overpaid bullshit job, but putting an end to its pathetic semblance of a premise.I am finally at liberty to do something worthwhile with my life, and while I'm trying to remember what that even was at this point, I do sleep a rich REM sleep knowing society is now capable of digging its own grave without my assistance.I understand that mine is a minority position; if you happen to be the kind of person who believes that making more of you is inherently good, no matter who you actually are, you and your eventual progeny will be quite safe in your new role as AI trainer (or is it AI fodder, let's let the market decide!)Just one request on my part: if possible, do shut up while turning yourself and the world into paperclips, alright? Besides the ones that you recognize as people, a whole bunch of other people do live on this planet too - and I hear they find all this AI blather to be mighty annoying.
01100011: Eh... In my systems programming job, ICs have mostly avoided it because we don't have time to learn a new thing with questionable benefits. A lot of my team are really, really good programmers and like that aspect of the job. They don't want to turn any part of it over to a machine. Now, if a machine could save us from ever dealing with Jira... That said, I have begun using AI for some things and it is starting to be useful. It's still 50/50 though, with many hallucinations that waste time but some cases where it caught very simple bugs (syntax or copy/paste errors). I think the experience of, say, systems programmers is very different vs python/web folks though. AI does a great job for my helper scripts in Python. Management needs to take their own medicine though. They continue to refuse to leverage AI to do things it could actually be good at. I give a duplicate status to management 3x/week now. Why? AI could handle tracking and summarizing it just fine. It could also produce my monthly status for me.
bigstrat2003: If those devs can't operate without an LLM, they weren't worth their salt to begin with. I find that most competent devs are skeptical of the tech, because it doesn't help them. But even among those who embrace it, they would get by just fine if it was gone tomorrow.
tayo42: The industry is filled with people who just want to close their tickets and sign off. And plenty of prolific programmers are writing publicly about their AI use.
yieldcrv: > I think there’s pretty clearly a divide in AI perception between executives and individual contributors (ICs).

Narrator: there is not
dataking: An individual contributor. Someone who delivers technical work without managing people. It is an alternative to being promoted into a management role.
LgWoodenBadger: Because it’s a mythical silver bullet of increased output combined with reduced costs. It makes me think of an executive I once reported to who “increased velocity” by changing the utilization rate on a spreadsheet from 75% to 80%.
garciasn: I lead a team of Data Engineers, DevOps Engineers, and Data Scientists. I write code and have done so literally for my entire life. AI-assisted codegen is incredible, especially over the last 3-4 months. I understand that developers feel their code is an art form and are pissed off that their life’s work is now a commodity; but it’s time to either accept it and move on with what has happened, specialize as an actual artist, or potentially find yourself in a very rough spot.
peacebeard: This is definitely part of it. I think another part of it is that AI tools demo really well, easily hiding how imperfect and limited they are when people see a contrived or cherry-picked example. Not a lot of people have a good intuition for this yet. Many people understand "a functional prototype is not a production app", but far fewer people understand "an AI that can be demonstrated to write functional code is not a software engineer", because this reality is rapidly evolving. In that rapidly evolving reality, people are seeing a lot of conflicting information, especially if you consider that a lot of that information is motivated (e.g., "AI is bad because it's bad to fire engineers", which, frankly, will not be compelling to some executives out there). Whatever the new reality is going to be, we're not going to find out one step at a time. A lot of lessons are going to be learned the hard way.
DGAP: Well I'm not looking forward to being out of a job and health insurance.
pydry: They're oddly credulous of the shovel salesmen in the gold rush, too. E.g., when Jensen Huang said that you need to pair your $250k engineer with $250k of tokens.
staticassertion: I wonder if your background just has you fooled. I worked on a data science team and code was always a commodity. Most data scientists know how to code in a fairly trivial way, just enough to get their models built and served. Even data engineers largely know how to just take that and deploy to Spark. They don't really do much software engineering beyond that. I'm not being precious here or protective of my "art" or whatever. But I do find it sort of hilarious and obvious that someone on a data science team might not understand the aesthetic value of code, and I suspect anyone else who has worked on or with such a team can probably laugh about the same thing - we've uh... we've seen your code. We know you don't value aesthetic code lol. Single variable names, `df1`, `df2`, `df3`. I'm not particularly uncomfortable at the moment, because understanding computers, understanding how to solve problems, understanding how to map between problems and solutions, what will or won't meet a customer's expectations, etc., is still core to the job as it always has been. Code quality is still critical as well - anyone who's vibe-coded >15KLOC projects will know that models simply can not handle that scale unless you're diligent about how it should be structured. My job has barely changed semantically, despite rapid adoption of AI.
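[For readers outside data work, a tiny hypothetical sketch of the naming contrast being teased here - plain Python stands in for a dataframe library, and every name and value is invented for illustration:]

```python
# Hypothetical sketch of the "df1, df2, df3" naming style vs. descriptive
# names; lists of row-dicts stand in for dataframes.
def merge_on_id(left, right):
    """Join two lists of row-dicts on their shared 'id' key."""
    by_id = {row["id"]: row for row in right}
    return [{**row, **by_id[row["id"]]} for row in left if row["id"] in by_id]

# Opaque: the reader must trace every step to learn what each table holds.
df1 = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 20.0}]
df2 = [{"id": 1, "region": "us"}, {"id": 2, "region": "eu"}]
df3 = merge_on_id(df1, df2)

# The same pipeline with names that carry intent.
orders = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 20.0}]
customers = [{"id": 1, "region": "us"}, {"id": 2, "region": "eu"}]
orders_with_region = merge_on_id(orders, customers)

assert df3 == orders_with_region  # identical behavior, very different readability
```

[Both pipelines compute the same thing; the complaint in the thread is purely about the second kind of readability being absent.]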
garciasn: I understand that you’re trying to apply your experience to what we do as a team, and that makes sense; but we’re many, many stddevs beyond the 15K LOC threshold you identified and have no issues, because we do indeed take care to ensure we’re building these things the right way.
gerdesj: MD here, of a really small company (and I'm not a doctor). I'm (mildly) excited by LLMs because I love a new shiny tool that does appear to have quite some utility. My analogy these days is a screwdriver. Let's ignore screw development for now. The first screwdrivers, which we still use, are slotted and have a habit of slipping sideways and jumping (camming out). That's err before LLMs ... something ... something. Fast forward and we have Phillips and Pozi and electric drivers. Yes there were ratchet jobs, and I still have one, but the cordless electric drill driver is nearly as magical as the Dr Who sonic effort! That's your modern LLM, that is. Now a modern drill driver can wrench your wrist if you are not careful and brace properly. A modern LLM will hallucinate like a nineties raver on ecstasy, but if you listen carefully and phrase your prompts carefully and ignore the chomping teeth and keep them hydrated, you may get something remarkable out of the creature 8) Now I only use Chat at the totally free level, but I do run several on-prem models using ollama and llama.cpp (all compiled from source ... obviously). I love a chat with the snappily named "Qwen3.5-35B-A3B-UD-Q4_K_XL", but I'm well aware that it is like an old school Black and Decker off of the noughties and not like my modern DeWalt wrist knackerers. I've still managed to get it to assist me in getting PowerDNS running with DNSSEC and LUA, and configuring LACP and port channel/trunking and that on several switch brands. You?
Esophagus4: > AI tools demo really well

Yes, and they work really well for small side projects that an exec probably used to try out the LLM. But writing code in one clean, discrete repo is (esp. at a large org) only a part of shipping something. Over time, I think tooling will get better at the pieces surrounding writing the code though. But the human coordination / dependency pieces are still tricky to automate.
detourdog: I'm sure an IC is not an integrated circuit or independent contractor.
TheGRS: Validation efforts likely become more necessary, so costs rise in another area. And product managers find they still need someone to translate the requirements well because LLMs are too agreeable. Cost optimization still needs someone to intervene as well.I know there's an attempt to shift the development part from developers to other laypeople, but I think that's just going to frustrate everyone involved and probably settle back down into technical roles again. Well paid? Unclear.
phyzome: So what you're saying is that now the worst devs can produce code faster and their velocity is no longer limited by their incompetence.Why is this supposed to be a good thing?
temp8830: AI is very good at writing C and asm. It even writes good Verilog. Unfortunately.
pigpop: The reasons given in the article are much more compelling.
_dwt: > literally

You were either a very talented baby or we’re justified in questioning your ability to assess the correctness of nitpicky formalisms.
garciasn: Funny.
skillina: Iterating an existing product takes time, but creating a clean room clone of an existing product could be accelerated significantly with AI. We could be moving towards an environment where bigtech falls back on one of its core competencies (scale) and hoards infra while small startups pay them compute and inference costs to undercut existing consumer-facing software on price.
LowLevelKernel: IC’s aren’t? Really?
narag: It's absolutely replacing their jobs, but not their positions. They use it extensively to create all the paperwork, communications, emails, translations... and it works fine for those tasks, so they think it's equally useful for everything. I believe that's pretty close to the article's thesis, just more prosaic. And yes, the AI works great for some programming tasks, just not for everything or completely unsupervised.
TheGRS: Boards aren't exactly dummies either. If they can see their exec isn't necessary, I think they'd make moves to eliminate the position. But that's in a world where reality meets the hype, and I don't think we're there yet. It gets weirder when you consider that anyone with access to the tools and some capital could then reasonably make their own company to battle it out with the big guys, but that future is a lot hazier.
FrustratedMonky: Is it really a mystery? A hot take? Executives see this as a way to replace labor. Labor sees itself being replaced. This is a story as old as the hills.
skeletoncrew2: I’d posit that AI is good at the tasks that managers have to do: their world is composed primarily of processes and procedures set up by humans, about other humans - in other words, just like an AI trained on text. At the worker level you have to interact with the real, outside world in some way. If I could have AI take the wheel for every SharePoint tracker management manages to cook up, I’d be raving about it too.
staticassertion: So you understand and you agree and confirm my experience?
garciasn: I have worked at many places and have seen work from DEs and DSs that is borderline psychotic; but it got the job done, sorta. I have suffered through QA of 10,000 lines that I ended up rewriting in less than 100. So, yes, I understand where you’re coming from. But that’s not what we do.
4d4m: The implementation is harder than watching a few youtube videos on it
bdangubic: > We know you don't value aesthetic code lol. Single variable names, `df1`, `df2`, `df3`.

https://degoes.net/articles/insufficiently-polymorphic

> My job has barely changed semantically, despite rapid adoption of AI.

it's coming... some places move slower than others but it's coming
staticassertion: > https://degoes.net/articles/insufficiently-polymorphic

lol this is not why people do "df1", "df2", etc., nor are those polymorphic names, but okay.

> it's coming... some places move slower than other but it's coming

What is coming, exactly? Again, as said, I work at a company that has rapidly adopted AI, and I have been a long-time user. My job was never about rapidly producing code, so the ability to rapidly produce code is strictly just a boon.
lunar_mycroft: I've seen the code these models produce without a human programmer going over the results with care. It's still slop. Better slop than in the past, but slop none the less. If you aren't at minimum reading the code yourself and you're shipping a significant amount of it, you're either effectively the first person to figure out the magic prompt to get the models to produce better code, or you're shipping slop. Personally, I wouldn't bet on the former.
seba_dos1: Yeah, these models have definitely become more useful in the last months, but statements like "I don't need to read the code any more" still say more about the person writing that than about agents.
keeda: Look at any of the large developer surveys out there: AI adoption is up to 80-90%; ICs absolutely are enamored with AI too. HN, and social media in general, is largely an echo chamber of the loudest voices that tend to skew negative, but does not reflect the broader reality. If HN were to be believed, most of Big Tech would be dead instead of thriving more than ever. That said, the central point of TFA is spot-on, though it could be made more generally, as it applies to engineering as well as management: uncertainty rises sharply the higher you climb the corporate and/or seniority ladder. In fact, the most important responsibility at higher levels is to take increasing ambiguity and transform it into much more deterministic roles and tasks that can be farmed out to many more people lower on the ladder. The biggest impact of AI is that most deterministic tasks (and even some surprisingly ambiguous ones) are now spoken for. This happens to be the bread and butter of the junior levels, and is where most of the job displacement will happen. I would say the most essential skill now is critical thinking, and the most essential personality trait is being comfortable with uncertainty (or as the LinkedInfluencers call it, "having a growth mindset"). Unfortunately, most of our current educational and training processes fail to adequately prepare us for this (see: "grade inflation"), so at a minimum the fix needs to start there.
bad_username: I do not think most executives are particularly enamored with AI. They are being mostly driven by the fear of missing out. More precisely, their thought process is: if they bet on AI and fail, they can plausibly claim that it was the technology's fault (not good enough, poorly suited for the business, etc). But if they skip on AI by choice, and their competition succeeds, they will be blamed personally. The more hyped a technology is, the stronger this calculus is for the managers. It's like Pascal's wager in a way.
staticassertion: Yes, but then you said that you do what I'm suggesting is still critical to do, which is maintain the codebase even if you heavily leverage models: "we do indeed take care to ensure we’re building these things the right way."
hiq: > their life’s work is now a commodity

Which parts of it exactly? I've considered for loops and if branches "commodities" for a while. The way you organize code, the design, is still pretty much open and not a solved problem, including by AI-based tools. Yes we can now deal with it at a higher level (e.g. in prompts, in English), but it's not something I can fully delegate to an agent and expect good results (although I keep trying, as tools improve). LLM-based codegen in the hands of good engineers is a multiplier, but you still need a good engineer to begin with.
hiq: I'd understand not reading the code of the system under test, but you don't even read the tests? I'd do that if my architecture and design were very precise, but at this point I'd have spent too much time designing rather than implementing (and possibly uncovering unknown unknowns in the process).

> Me (and my friends similarly) inspect code indirectly now - telling agents to write reports about certain aspects of the code and architecture etc.

Doesn't this take longer than reading the code?

I can see how some of this is part of the future (I remember an article talking about python modules having a big docstring at the top fully describing the public functions, where the author just updates this doc, then regenerates the code fully, never reading it, and I find this quite convincing), but in the end I just want the most concise language for what I'm trying to express. If I need an edge case covered, I'd rather have a very simple test making that explicit than more verbose forms. Until we have formal specifications everywhere, I guess.

But maybe I'm just not picturing what you mean exactly by "reports".
pron: My problem with the code the agents produce has nothing to do with style or art. The clearest example of how bad it is was shown by Anthropic's experiments, where agents failed to write a C compiler - not a very hard programming job to begin with if you know compilers, as the models do - even with an almost too-good-to-be-true level of assistance (a complete spec, thousands of human-written tests, and a reference implementation used as an oracle, not to mention that the models were trained on both the spec and the reference implementation). If you look at the evolution of agent-written code, you see that it may start out fine, but as you add more and more features, things go horribly wrong. Let's say the model runs into a wall. Sometimes the right thing to do is go back into the architecture and put a door in that spot; other times the right thing to do is ask why you hit that wall in the first place - maybe you've taken a wrong turn. The models seem to pick one or the other almost at random, and sometimes they just blast a hole through the wall. After enough features, it's clear there's no convergence, just like what happened in Anthropic's experiment. The agents ultimately can't fix one problem without breaking something else. You can also see how they shoot themselves in the foot by adding layers upon layers of defensive coding that get so thick they themselves can't think through them. I once asked an agent to write a data structure that maintains an invariant in subroutine A and uses it in subroutine B. It wrote A fine, but B ignored the invariant and did a brute-force search over the data, the very thing the data structure was meant to avoid. As it was writing it, the agent explained that it doesn't want to trust the invariant established in A because it might be buggy... Another thing you frequently see is that the code they write is so intent on success that it has a plan A, plan B, and plan C for everything.
It tries to do something one way and adds contingencies for failure. And so the code and the complexity compound until nothing and no one can save you. If you're lucky, your program is "finished" before that happens. I wish they could write working code; they just don't.[1] But man, can they debug (mostly because they're tenacious and tireless).

[1]: By which I don't mean they never do, but you really can't trust them to do it as you can a programmer. Knowing how to code, like knowing how to fly a plane, doesn't mean sometimes getting the right result. It means always getting the right result (within your capabilities, which in the case of humans are usually known in advance).
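[The invariant anecdote above can be sketched in a few lines - a hypothetical reconstruction in Python, not the actual code from that session; `SortedBag` and its method names are invented:]

```python
import bisect

class SortedBag:
    """Toy structure whose whole point is a sorted-order invariant."""

    def __init__(self):
        self._items = []  # invariant: always kept sorted by insert()

    def insert(self, x):
        # "Subroutine A": maintains the sorted-order invariant correctly.
        bisect.insort(self._items, x)

    def contains(self, x):
        # What the agent reportedly wrote for "subroutine B": it distrusts
        # the invariant ("insert() might be buggy") and scans in O(n),
        # defeating the reason the structure exists.
        return any(item == x for item in self._items)

    def contains_fast(self, x):
        # What trusting the invariant allows: O(log n) binary search.
        i = bisect.bisect_left(self._items, x)
        return i < len(self._items) and self._items[i] == x
```

[Both lookups return the same answers, so tests pass either way; only the asymptotic behavior reveals that the invariant was ignored.]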
LogicFailsMe: I think ICs are threatened because they're told from day one that they are at-will employees who can be terminated at any time with or without cause. On top of that, places like Amazon extol the virtues of only working on projects that can be completed with entirely fungible staffing, and Google tries ever so hard to electroplate this steaming turd of an ideology with iron pyrite, calling fungibles "generalists." So along come AI coding agents, which I love as an IC because they excel at tedious work I'd rather not have to do in the first place, but I can get why others see them as a threat. But I really think it's no more of a threat than any other empty promise to cut costs with the silver bullet of the month, and we just have to let the loudmouths insist otherwise until the industry figures out this isn't a magic black box.
01100011: My experience is that it gets the syntax right but constantly hallucinates APIs and functions that don't exist but sound like they should. It also seems to be tricked by variable names that don't line up with their usage.
adxl: Admitting to having drunk the Kool-Aid is the first step. I wrote an entire system with tech I barely understand (DuckDB, Next.js, etc.), made 7 to 10 iterations per day, and shipped multiple new functions and integrations in hours, all while doing my main job. What does the code look like? It works; I do not care. Can the AI modify it in under 5 minutes? Yes. New features that would take a week minimum got done in 2 to 3 minutes. Did the AI ever complain? No, it did not. Anyone who thinks they will be hand-coding going forward is completely fooling themselves. The AI tests better than most engineers. When asked, it builds flawless test harnesses and even suggests better solutions. Never going back.