Discussion
AlotOfReading: I'm not surprised by the outages, but I am surprised that they're leaning into human code review as a solution rather than a neverending succession of LLM PR reviewers.

I wonder if it's an early step towards an apprenticeship system.
monarchwadia: Interesting. How would it be an early step towards an apprenticeship system?
bilbo0s: You shouldn't be surprised.

How else would they train the LLM PR reviewers to their standards?

I've never personally been in the position, because my entire career has been in startups, but I've had many friends be in the unenviable position of training their replacements. Here's the thing though: at least they knew they were training their replacements. We could be looking at a potential future where an employee or contractor doesn't realize s/he is actually just hired to generate training data and then be cut.
dude250711: I knew this would happen.

Take a perfectly productive senior developer and instead make him responsible for the output of a bunch of AI juniors, with the expectation of 10x output.
guessmyname: dupe: https://news.ycombinator.com/item?id=47319273 (10 hrs ago)
Lalabadie: I'm not sure the sustainable solution is to treat an excess of lower-quality code output as a fixed thing to work with and operationalize around, but sure.
gtowey: It's the same as the offshoring episode of the early 2000s. There is such a massive financial incentive to somehow make the low-quality code work. And they will try to resist the reality that it's a huge net negative for as long as they can.
cobolcomesback: This “mandatory meeting” is just the usual weekly company-wide meeting where recent operational issues are discussed. There was a big operational issue last week, so of course this week will have more attendance and discussion.

This meeting happens literally every week, and has for years. Feels like the media is making a mountain out of a molehill here.
dragonelite: Expect a shitload of AI-powered code review products in the next 18 months.
AlexeyBrin: You mean like what Anthropic announced yesterday? Code Review can review your code for $15 - $25 per review.

So now you can speed up using Claude Code, and use Code Review to keep it in check.
sethops1: > The response for now? Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off.

So basically, kill the productivity of senior engineers, kill the ability for junior engineers to learn anything, and ensure those senior engineers hate their jobs.

Bold move, we'll see how that goes.
almostdeadguy: I'm sorry, what? Junior engineers can't learn anything without using AI assistants, and senior engineers would hate their jobs reviewing more code from their teammates? What reality do people live in now?
bigbuppo: Ugh. The Great Oops has never been closer.
lokar: If this is true, it misunderstands the primary goals of code review.

Code review should not be (primarily) about catching serious errors. If there are always a lot of errors, you can't catch most of them with review. If there are few, it's not the best use of time.

The goal is to ensure the team is in sync on design, standards, etc.; to train and educate junior engineers; to spread understanding of the system; to bring more points of view to complex and important decisions.

These goals help you reduce the number of errors going into the review process; that should be the actual goal.
happytoexplain: > Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off

Review by a senior is one of the biggest "silver bullet" illusions managers suffer from. For a person (senior or otherwise) to examine code or configuration with the granularity required to verify that it even approximates the result of their own level of experience, even only in terms of security/stability/correctness, requires an amount of time approaching what they would have spent just doing it themselves.

I.e. senior review is valuable, but it does not make bad code good.

This is one major facet of probably the single biggest problem of the last couple decades in system management: the misunderstanding by management that making something idiot-proof means you can now hire idiots (not intended as an insult, just using the terminology of the phrase "idiot-proof").
steveBK123: Right, code reviews should already have been happening with human-written junior code.

If AI is a productivity boost and juniors are going to generate 10x the PRs, do you need 10x the seniors (expensive) or 1/10th the juniors (cost savings)?

A reminder that in many situations, pure code velocity was never the limiting factor.

Re: idiot-proofing, I think this is a natural evolution: as companies get larger they try to limit their downside and manage for the median, rather than having a growth mindset in hiring/firing/performance.
belval: The unwritten thing is that if you need seniors to review every single change from junior and mid-level engineers, and those engineers are mostly using Kiro to write their CRs, then what stops the senior from just writing the CRs with Kiro themselves?
ChrisArchitect: [dupe] Source: https://news.ycombinator.com/item?id=47319273
burkaman: Source is https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f77..., archived at https://archive.is/hLd8X
Someone1234: Thanks for the links. Strangely, I cannot get past the archive.is "I am not a robot" wall. I click it, then it refreshes, I click it again, and then it asks me to find traffic lights, and then "I am not a robot," repeat.

Maybe I need a bot to do this for me...
skeledrew: Archive link took me right in; always has. Could be because I use NoScript.
jetrink: It could create the right sort of incentives though. If I'm a junior and I suddenly have to take my work to a senior every time I use AI, I'm going to be much more selective about how I use it and much more careful when I do use it. AI is dangerous because it is so frictionless, and this is a way to add friction.

Maybe I don't have the correct mental model for how the typical junior engineer thinks, though. I never wanted to bug senior people and make demands on their time if I could help it.
ritlo: > senior engineer would hate their jobs reviewing more code from their teammates

Jesus, yes. Maybe I'm an oddball, but there's a limit to how much PR reviewing I could do per week and stay sane. It's not terribly high, either. I'd say like 5 hours per week max, and no more than one hour per half-workday, before my eyes glaze over and my reviews become useless.

Reviewing code is important and is part of the job, but if you're asking me to spend far more of my time on it, and across (presumably) a wider set of projects or sections of projects so I've got more context-switching to figure out WTF I'm even looking at, yes, I would hate my job by the end of day 1 of that.
RamblingCTO: Who said PR reviews need to solve everything and be proof against idiots?

So you're saying that peer reviews are a waste of time and only idiots would use/propose them?
ritlo: The only way to see the kinds of speed-ups companies want from these things, right now, is to do way too little review. I think we're going to see a lot of failures in a lot of sectors where companies set goals for reduced hours on various things they do, based on what they expected from LLM speed-ups, and it will have turned out the only way to hit those goals was by spending way too little time reviewing LLM output.

They're torn between "we want to fire 80% of you" and "... but if we don't give up quality/reliability, LLMs only save a little time, not a ton, so we can only fire like 5% of you max".

(It's the same in writing: these things are only a huge speed-up if it's OK for the output to be low-quality, but good output using LLMs only saves a little time versus writing entirely by hand. So far, anyway; of course these systems are changing by the day, but this specific limitation has remained true for about four years now, without much improvement.)
hard24: My prediction is a Concorde-like incident is going to shatter trust and make people re-think their expectations of what LLMs are actually capable of today.

Essentially, something big has to happen that affects the revenue/trust of a large provider of goods, stemming from LLM use.

They won't go away entirely. But this idea that they can displace engineers at a high rate will.
whateveracct: Juniors could just code things the old-fashioned way. It isn't hard. And if they do find it too hard, they aren't cut out for this job.
marginalia_nu: Expert reviews are just about the only thing that makes AI-generated code viable, though doing them after the fact is a bit sketchy; to be efficient you kinda need to keep an eye on what the model is doing as it's working.

Unchecked, AI models output code that is as buggy as it is inefficient. In smaller greenfield contexts it's not so bad, but in a large code base it performs much worse, as it will not have access to the bigger picture.

In my experience, you should be spending something like 5-15x the time the model takes to implement a feature on reviewing and making it fix its errors and inefficiencies. If you do that (with an expert's eye), the changes will usually be high quality and correct.

If you do not do that due diligence, the model will produce a staggering amount of low-quality code, at a rate that is probably something like 100x what a human could output in a similar timespan. Unchecked, it's like having a small army of the most eager junior devs you can find going completely fucking ape in the codebase.
locusofself: If you spend 5-15x the time reviewing what the LLM is doing, are you saving any time by using it?
mattschaller: Anyone work with Kiro before? As I understood it, it was kept as an INTERNAL USE ONLY tool for much longer than expected.
oxqbldpxo: Not fun to work at amazon.com it seems.
hard24: This is incredibly circular lol...
js8: > requires an amount of time approaching the time spent if they had just done it themselves

It's actually often harder to fix something sloppy than to write it from scratch. To fix it, you need to hold in your head both the original and the new solution, and work out the difference, which can be very confusing. The original solution can also anchor your thinking to a particular approach to the problem, which you wouldn't have if you solved it from scratch.
bluGill: Sloppy code that has been around for a while works. It likely has support for edge cases you forgot about. Often the sloppiness is because of those edge cases.
daheza: Create the problem and then create the solution.
happytoexplain: No, but that's the crux of the AI problem in software. Time to write code was never the bottleneck. AI is most useful for learning, either via conversation or by seeing examples. It makes writing code faster too, but only a little once you take review into account. The cases where it shines are high-profile and exciting to managers, but not common enough to make a big difference in practice. E.g. AI can one-shot a script to get logs from a paginated API, convert them to ndjson, and save them to files grouped by week, with minimal code review, but only if I'm already experienced enough to describe those requirements, and, most importantly, that's not what I'm doing every day anyway.
sdevonoes: Reviewing AI-generated code at PR time is a bottleneck. It cancels most of the benefits senior leadership thinks AI offers (delivery speed).

There's also this implicit imbalance engineers typically don't like: it takes me 10 min to submit a complete feature thanks to Claude, but for the human reviewing my PR manually it will take 10-20 times that.

Edit: at the end of the day, real engineers know that what takes effort is a) knowing what to build and why, and b) verifying that what was built is correct. The in-betweens are needed, but they are a byproduct. Senior leadership doesn't know this, though.
beardedetim: This is what I don't understand about this policy. There's no way a senior has enough spare capacity to be the gatekeeper on every PR made by AI below them. So now we are just making it so the senior people use more AI to keep up, but now they're to blame for letting it happen.

It sounds like a piss-poor deal for seniors, unless senior engineer now means professional code reviewer.
bink: I've had the same problem for the last few days, just repeated CAPTCHAs.
LogicFailsMe: For the good of the company's future, all code should be reviewed by L10s going forward before it is accepted. They're the only ones with enough skin in the game to know what really matters, after all.
throwaw12: If seniors are going to review every piece of GenAI-generated code, how do they keep up with the volume of changes?

So you have two tiers of engineers, Sr- and Sr+:

1. Both should write code to justify their work and impact

2. Sr- code must be reviewed by Sr+

What happens:

a. Sr+ output drops because review takes more and more of their time

b. Sr+ just blindly accepts because the volume is too high, and they still have their own work to do

c. Sr+ asks Sr- to slow down, and then Sr- can get bad reviews for their output, because on average Sr+ will produce more code

I think (b) will happen
bs7280: This is also why I think we will enter a world without juniors. The time it takes for a senior to review a junior's AI code is more expensive than if the senior produced their own AI code from scratch. Factor in the lack of meetings on a Sr-only team, and the productivity gains will appear to be massive.

Whether or not these productivity gains are realized is another question, but spreadsheet-based decision makers are going to try.
czscout: In this scenario, how might one become a senior without first being a junior? Seniors just pop into existence?
happytoexplain: None of that, sorry if I wasn't clear.
rectang: > Expert reviews are just about the only thing that makes AI generated code viable

I disagree, in the sense that an engineer who knows how to work with LLMs can produce code which only needs light review:

* Work in small increments
* Explicitly instruct the LLM to make minimal changes
* Think through possible failure modes
* Build in error-checking and validation for those failure modes
* Write tests which exercise all paths

This is a means to produce "viable" code using an LLM without close review. However, to your point, engineers able to execute this plan are likely to be pretty experienced, so it may not be economically viable.
marginalia_nu: By the time you're working in increments small enough that it doesn't introduce significant issues, you really might as well write the code yourself.
throw_m239339: Aren't these companies mandating the use of these tools in the first place? Juniors aren't the problem.
throw_m239339: Yet another example of vibe coding at scale. You'll have to hire a lot of seniors out of retirement to fix that mess of gigantic proportions... and don't blame "the juniors" for it; they didn't make the decision to allow those tools in the first place.
devonbleak: What you're actually going to see is seniors inundated by slop, burning out, and quitting, because what used to be enjoyable problem-solving has become wading through slop that took 10 minutes to generate and submit but 30+ minutes to understand and write up a critique for.
smy20011: An outage could cost Amazon millions to tens of millions. Most of the time, we want the junior to learn from the outage and fix the process. With an AI agent, we can only update the agent.md and hope it never happens again.
raw_anon_1111: In my experience, inefficient code is rarely the issue outside of data-engineering-type ETL jobs. It's mostly architectural. Inefficient code isn't the reason your login is taking 30 seconds. Yes, I know at Amazon/AWS scale (former employee) every efficiency matters. But even at Salesforce scale, wringing out every bit of efficiency doesn't matter.

No one cares about handcrafted artisanal code as long as it meets both functional and non-functional requirements. The minute geeks get over thinking of themselves as some type of artists, the happier they will be.

I've had a job that requires coding for 30 years, and before that I was a hobbyist. I've worked for everything from 60-person startups to BigTech.

For my last two projects (consulting) and my current project, I led the project, got the requirements, designed the architecture from an empty AWS account (yes, using IaC), and delivered it. I didn't look at a line of code. I verified the functional and non-functional requirements, wrote the hand-off documentation, etc.

The customer is happy, my company is happy, and I bet you not a single person will ever look at a line of code I wrote. If they do get a developer to take it over, the developer will be grateful for my detailed AGENTS.md file.
YCpedohaven: You are the reason software is so shitty today. Congrats code monkey.
hard24: "No one cares about handcrafted artisanal code as long as it meets both functional and non functional requirements"Speak for yourself. I don't hire people like you.
raw_anon_1111: And guess what? You probably don’t pay as much as I make now either…
qnleigh: Surely they know all this. They're worried about AI code degrading codebase quality, so they're putting on the brakes.
Clent: Who is the media you're accusing here? This is a Twitter post. As far as I can tell, they do not work for a media company.

What is worth pointing out is how quickly people blame "The Media" for how people use, consume, and spread information on social networks.
rectang: That's not my experience — I'm significantly faster while guiding an LLM using this methodology.

The gains are especially significant when working in unfamiliar domains. I can glance over code and know "if this compiles and the tests succeed, it will work", even if I didn't have the knowledge to write it myself.
marginalia_nu: That's where the Gell-Mann amnesia will get you, though. As much as it trips up on the domains you're familiar with, it also trips up in unfamiliar domains. You just don't see it.
rectang: You're not telling me anything I don't know already. Only a person who accepts that they're fallible can execute this methodology anyway, because that's the kind of mentality that it takes to think through potential failure modes.

Yes, code produced this way will have bugs, especially of the "unknown unknown" variety — but so would the code that I would have written by hand.

I think a bigger factor contributing to unforeseen bugs is whether the LLM's code is likely to be correct:

* Is this a domain that the LLM has trained on a lot? (i.e. lots of React code out there, not much in your home-grown DSL)
* Is the codebase itself easy to understand and written with best practices? Code which is hard for humans to understand is also hard for an LLM to understand.
belval: I am not in that specific meeting but it made me chuckle that a weekly ops meeting will somehow get media attention. It's been an Amazon thing forever. Wait until the public learns about CoEs!
8note: I'd expect COEs to be coming up with AI code action items though, not to have more thorough human checks.
ardeaver: When I was really early in my career, a mentor told me that code review is not about catching bugs but spreading context (i.e. increasing the bus factor). Catching bugs is a side effect, but unless you have a lot of people review each pull request, it's basically just gambling.

The more expensive and less sexy option is to actually make testing easier (both programmatically and manually), write more tests and more levels of tests, and spend time reducing code complexity. The problem, I think, is people don't get promoted for preventing issues.
bluGill: > people don't get promoted for preventing issues.

They do, but only after a company has been burned hard. They can also be promoted for their area being enough better that everyone notices.

Still, the best way to a promotion is to write a major bug that you can come in at the last moment and be the hero for fixing.
tartoran: That could work, but plenty of quiet heroes weren't promoted for fixing critical bugs.
recursive: They fixed it too soon. You have to wait until the effect is visible on someone's dashboard somewhere.
bluGill: You have to make sure it doesn't arrive at you before it is on the dashboard. Otherwise you are why it is blowing up the time-to-fix-a-bug metric. Unless you can make the problem so obscure that other smart people asked to help you can't figure it out either; otherwise they make you look bad.
davidclark: The article claims:

> He asked staff to attend the meeting, which is normally optional.

Is that false? It also discusses a new policy:

> Junior and mid-level engineers will now require more senior engineers to sign off any AI-assisted changes, Treadwell added.

Is that inaccurate? It is good context that this is a regularly scheduled meeting. But regularly scheduled meetings can have newsworthy things happen at them.
cobolcomesback: It's not false. But it's also weaselly worded.

Note that the article doesn't say that he told staff they have to attend the meeting. It says he "asked" staff to attend the meeting. Which again, it's really really normal for there to be an encouragement of "hey, since we just had an operational event, it would be good to prioritize attending this meeting where we discuss how to avoid operational events".

As for the second quote: senior engineers have always been required to sign off on changes from junior engineers. There's nothing new there. And there is nothing specific to AI that was announced.

This entire meeting and message is basically just saying "hey, we've been getting a little sloppy at following our operational best practices, this is a reminder to be less sloppy". It's a massive nothingburger.
8note: > senior engineers have always been required to sign off on changes from junior engineers.

Definitely a team-by-team question. If it was required, it would be a crux rule that the code review isn't approved without an L6 approver.
bluGill: Some, but not very much. Writing code is hard. AI will do a lot of the tedious code that you procrastinate writing.
hard24: Also, when you are writing code yourself, you are implicitly checking it while retaining, in the back of your mind, some model of the entire system as a whole.

People seem to gloss over this... As a CEO, if my people didn't function like this, I'd be awake at night sweating.
bluGill: Sort of. I work on a system too large for anyone to know the whole thing. Often people who don't know each other do something that will break the other's work. (Often because of the number of different people; most individuals go years between this happening to them.)
onion2k: > I.e. senior review is valuable, but it does not make bad code good.

I suspect that isn't the goal.

Review by more senior people shifts accountability from the junior to a senior, and reframes the problem from "Oh dear, the junior broke everything because they didn't know any better" to "Ah, that senior is underperforming because they approved code that broke everything."
AgentOrange1234: Seniors are going to need to hold juniors to a high bar for understanding and explaining what they are committing. Otherwise it will become totally soul-destroying to have a bunch of juniors submitting piles of nonsense and claiming they are blocked on you all the time.
julienchastang: > best practices and safeguards are not yet fully established

The way I am working with AI agents (codex) these days is to have the AI generate a spec as a series of MD documents, where the AI implementation of each document is a bite-sized chunk that can be tested and evaluated by the human before moving to the next step, and roughly matches a commit in version control. The version control history then reflects the logical progression of the code. In this manner, I have a decent knowledge of the code, and one that I am more comfortable with than one-shotting.
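For concreteness, here's a rough sketch of what one of those per-step spec documents might look like (the file layout, headings, and contents are my own illustration of the approach, not a codex convention):

    # specs/03-retry-logic.md

    ## Goal
    Add bounded exponential backoff to the HTTP fetch layer.

    ## Scope
    - Only touch the fetch module; the caching layer is covered in specs/04.

    ## Acceptance criteria
    - Existing tests still pass.
    - New test: a transient 503 is retried (max 3 attempts); a 404 is not.

    ## Commit
    One commit: "fetch: add bounded exponential backoff"

Each document then maps to one reviewable, testable commit, so the human checkpoint happens at every step instead of at the end.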
almostdeadguy: If we can't spend that much time reviewing code, what exactly are we doing with this AI stuff?

I don't disagree; I think reviewing is laborious. I just don't see how this causes any unintended consequences that aren't effectively baked into using an AI assistant.
sdevonoes: But aren't companies enforcing AI usage? If not, wait for it.
ritlo: Mine's tracking it, complete with a leaderboard (LOL), and it's been suggested to me that it'd be in my best interest not to be too low on that list. So I suspect in the back half of the year some sterner conversations and/or pink slips are going to be coming the way of those who've not caught on that they need to at least be sending some make-work crap to their LLMs every day, even if they immediately throw the output in the metaphorical garbage bin.

It's basically an even-more-ridiculous version of ranking programmers by lines-of-code/week.

What's especially comical is I've seen enormous gains in my (longish, at this point) career from learning other tools (e.g. expanding my familiarity with Unix or otherwise fairly common command-line tools), and never, ever has anyone measured how much I'm using them, and never, ever has management become in any way involved in pushing them on me. It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or counting how many breakpoints they're setting in their debuggers per week. WTF? That kind of thing should be leads' and seniors' business, to spread and encourage knowledge and appropriate tool use among themselves and with juniors, to the degree it should be anyone's business. Seems like yet another smell indicating that this whole LLM boom is built on shaky ground.
to11mtm: > even if they immediately throw the output in the metaphorical garbage bin.

Gotta be careful if you do that tho; e.g. Copilot can monitor 'accept' rate, so at bare minimum you'd have to accept the changes, then immediately back them out...
ourmandave: I wonder if Copilot can write a commit and backout routine for them.
remarkEon: Other than “don’t hire idiots”, what is the solution to this problem? I agree with you, and this particular systems management issue is not constrained to software.
tcbrah: The funniest part is Amazon literally started tying AI usage to performance reviews like 6 months ago, and now they're doing damage control. You can't simultaneously pressure every engineer to use more AI AND be shocked when AI-assisted code breaks prod. Pick one lol
ritlo: A related Dirty Secret that's going to become clear from all this is that a very large proportion of code in the wild (yes, even in 2026—maybe not in FAANG and friends, IDK, but across all code that is written for pay in the entire economy) has limited or no automated test coverage, and is often being written against only a limited recorded spec that's usually fleshed out only to the degree needed (very partial) as a given feature is being worked on.

What do the relatively hands-off "it can do whole features at a time" coding systems need to function without taking up a shitload of time in reviews? Great automated test coverage, and extensive specs.

I think we're going to find there's very little time-savings to be had for most real-world software projects from heavy application of LLMs, because the time will just go into tests that wouldn't otherwise have been written, and much more detailed specs that otherwise never would have been generated. I guess the bright-side take of this is that we may end up with better-tested and better-specified software? Though so much of the industry is used to skipping those parts (especially the less-capable, so far as software goes, orgs that really need the help, and the relative amateurs and non-software-professionals that some hope will become extremely productive with these tools) that I'm not sure we'll manage to drag processes & practices to where they need to be to get the most out of LLM coding tools anyway. Especially if the benefit to companies is "you will have better tests for... about the same amount of software as you'd have written without LLMs".

We may end up stuck at "it's very-aggressive autocomplete" as far as LLMs' useful role goes, for most projects, indefinitely.

On the plus side for "AI" companies, low-code solutions are still big business even though they usually fail to deliver the benefits the buyer hopes for, so there's likely a good deal of money to be made selling companies LLM solutions that end up not really being all that great.
slopinthebag: Re: productivity, if LLMs are a genuine boost for 1/3 of the work, neutral 1/3 of the time, and actually worse 1/3 of the time, it's likely we aren't really seeing performance improvements, as 1) people are using them for everything and 2) we're still learning how to best use them.

So I expect over time we will see genuine performance improvements, but Amdahl's law dictates it won't be as much as some people and CEOs are expecting.
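To put rough numbers on that Amdahl's law point (the 2x and 25%-slower multipliers below are my own illustrative assumptions, not figures from the parent comment):

    # Amdahl-style estimate: speeding up only part of the work yields a
    # much smaller overall gain than the headline speedup suggests.
    def overall_speedup(parts):
        # parts: (share of total work, speed multiplier) pairs
        new_time = sum(share / factor for share, factor in parts)
        return 1 / new_time

    # One third genuinely 2x faster, one third neutral, one third 25% slower:
    print(overall_speedup([(1/3, 2.0), (1/3, 1.0), (1/3, 0.8)]))  # ~1.09

Under those assumptions the net effect is roughly a 9% improvement, nowhere near the headline 2x.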
marcta: Goodhart's Law strikes again... "When a measure becomes a target, it ceases to be a good measure."
radiator: > Senior leadership doesn’t know this, though.Well, you'd think senior leadership should know how their business and their people work.
tavavex: > It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or they'll be counting how many breakpoints they're setting in their debuggers per week.

That's because they weren't sold regex as a service by a massive company, while also being reassured by everyone that any person not using at least one regular expression per line of code is effectively worthless and exposes their business to a threat of immediate obsolescence and destruction. They finally found a way to sell the same kind of FOMO to a majority of execs in the software industry.
10xDev: With AI it makes sense to have leaner teams. Being able to go faster requires greater responsibility.
raw_anon_1111: Besides building web apps for internal use, I'm never going to let AI architect something I'm not familiar with. I couldn't care less whether it uses "clean code" or what design pattern it uses. Meaning I will go from an empty AWS account to a fully fledged app + architecture, because I've been coding for 30 years and dealing with every nook and cranny of AWS for a decade.

But I would never do the same for Azure.
10xDev: A lot of juniors only graduated using these tools. Good luck taking it away from them.Also, you are responsible no matter what tool you use.
radiator: Deming's point 3 (of 14): Cease dependence on inspection to achieve quality. Eliminate the need for massive inspection by building quality into the product in the first place.
mhogers: An .agentignore/.agentnotallowed file.

Force agents to not touch mission-critical things; fail in CI otherwise.

Let them work on frontends and things at the frontier of the dependency tree, where it is worth the risk.
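A minimal sketch of how the CI side of that might work, assuming a plain-text .agentnotallowed file of protected path prefixes and some marker for agent-authored changes (the file name, the AGENT_AUTHORED variable, and the overall shape here are hypothetical, not an existing tool):

    #!/usr/bin/env python3
    """Fail CI when an agent-authored change touches protected paths."""
    import os
    import subprocess
    import sys

    def protected_prefixes(path=".agentnotallowed"):
        # One protected path prefix per line; '#' starts a comment.
        with open(path) as f:
            return [ln.strip() for ln in f
                    if ln.strip() and not ln.lstrip().startswith("#")]

    def changed_files(base="origin/main"):
        # Files modified on this branch relative to the base branch.
        out = subprocess.run(
            ["git", "diff", "--name-only", f"{base}...HEAD"],
            capture_output=True, text=True, check=True)
        return [p for p in out.stdout.splitlines() if p]

    def main():
        # Only enforce for changes flagged as agent-authored, e.g. via
        # a CI variable or commit trailer (hypothetical convention).
        if os.environ.get("AGENT_AUTHORED") != "1":
            return 0
        prefixes = protected_prefixes()
        bad = [f for f in changed_files()
               if any(f.startswith(p) for p in prefixes)]
        if bad:
            print("Agent-authored change touches protected paths:")
            for f in bad:
                print(f"  {f}")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The hard part in practice is reliably setting the agent-authored marker; anything self-reported can be forgotten or stripped.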
sethops1: This was challenging enough pre-AI. Now that everybody has an AI slop button, the life of an effective code reviewer just got so much more miserable.
_wire_: Like bombing a building full of little kids? Oops too late...
marginalia_nu: To be honest, sometimes it's still beneficial.

For fairly straightforward changes it's probably a wash, but ironically enough it's often the trickier jobs where they can be beneficial, as it will provide an ansatz that can be refined. It's also very good at tedious chores.
misnome: And spotting stuff in review! Sometimes it's false positives, but on several occasions I've spent ~15-30 minutes teaching-reviewing a PR in person, checked afterwards, and it matched every one of the points.
raw_anon_1111: No, I'm keeping up with the system as a whole, because I'm always working at the system level when I'm using AI, instead of worrying about the "how".
i_cannot_hack: Your characterization of the event as a simple reminder to follow established best practices is directly contradicted by the briefing note of the meeting, which specifically mentions a lack of best practices related to AI. Which makes me skeptical of your assessment of the situation in general.

> Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established”.
simplyluke: The bet from various industry leaders appears to be that the current generation of engineers will be the last who will ever need to think about complex systems and engineering, as the AI will just get good enough to do all of that by the time they retire.
lovich: I think it's deeper than that, because it's affected more industries than software and it already started pre-AI.

American corporate culture has decided that training costs are someone else's problem. Since every corporation acts this way, all training costs have been pushed onto the labor market. Combine that with the past few decades of "oops, looks like you picked the wrong career that took years of learning and/or tens to hundreds of thousands of dollars to acquire, but we've obsoleted that field", and new entrants into the labor market are just choosing not to join.

Take trucking, for example. For the past decade I've heard logistics companies bemoan the lack of CDL holders, while simultaneously gleefully talking about how, the moment self-driving is figured out, they are going to replace all of them.

We're going to be outpaced by countries like China at some point, because we're doing the industrial equivalent of eating our seed corn and there is seemingly no will to slow that trend down, much less reverse it.
ansibsha: > better-specified softwareCode is the most precise specification we have for interfacing with computers.