Discussion
rsynnott: > boilerplate

Ruby on Rails and its imitators blew away tons of boilerplate. Despite some hype at the time about a productivity revolution, it didn't _really_ change that much.

> , libraries, build-tools,

Unsure what you mean by this; what bearing do our friends the magic robots have on these?

> and refactoring

Again, IntelliJ did not really cause a productivity revolution by making refactoring trivial about 20 years ago. Also, refactoring is kind of a solved problem, due to IntelliJ et al; what's an LLM getting you there that decent deterministic tooling doesn't?
rileymichael: couldn't have said it better. all of the people clamoring on about eliminating the boilerplate they've been writing + enabling refactoring have had their heads in the sand for the past two decades. so yeah, i'm sure it does seem revolutionary to them!
gamblor956: AI tooling does not provide productivity gains unless you consider it productive to skip the boilerplate portion of software development, which you can already do by using a framework, or you never plan to get past the MVP stage of a product, as refactoring the AI spaghetti would take several orders of magnitude more work than doing it with humans from the beginning.

Amazon has demonstrated that it takes just as long, or longer, to have senior devs review LLM output than it would to just have the senior devs do the programming in the first place. But now your senior devs are wasted on reviewing instead of developing or engineering. Amazon, Microsoft, Google, Salesforce, and Palantir have all suffered multiple losses in the tens of millions (or more) due to AI output issues. Now that Microsoft has finally realized how bad LLMs really are at generating useful output, they've begun removing AI functionality from Windows.

Product quality matters more than time to market. Especially in tech, the first-to-market is almost never the company that dominates, so it's truly bizarre that VCs are always so focused on their investments trying to be first to market instead of best to market.

If Competitor Y just fired 90% of their developers, I would have a toast with my entire human team. And a few months later, we'd own the market with our superior product.
ashwinnair99: The companies quietly doing the firing will say they're doing the building. The answer you get depends entirely on who you ask and what they're trying to justify.
lateforwork: You still need humans to manage the whole lifecycle, including monitoring the live site, being on-call, handling incidents, triaging bugs, deploying fixes, supporting users and so on.

For greenfield development you don't need as many software engineers. Some developers (the top 10%) are still needed to guide AI and make architectural decisions, but the remaining 90% will work on the lifecycle management tasks mentioned above.

The productivity gains can be used to produce more software, and if you are able to sell the software you produce, that should result in a revenue boost. But if you produce more than you can sell, then some people will be laid off.
hirako2000: Assuming you are primarily selling software.

Situation a/ LLMs increase developer productivity: you hire more developers as you cash profit. If you don't, your competitor will.

b/ LLMs don't increase productivity: you keep cruising. You rejoice seeing some competitors lay off.

Reality shows dissonance with these only possible scenarios. Absurd decision making, a mistake? No mistake. Many tech companies are facing difficulties; they need to lose weight to remain profitable, and appease the shareholders' demand for bigger margins.

How to do this without a backlash? AI is replacing developers; Anthropic's CEO said engineers don't write code anymore, the role will be obsolete in 6 months. It naturally makes sense that we have to let some of them go. And if the prophecy doesn't come true, well, nobody ever got fired for buying IBM.
maccard: There have been a handful of leaps - copilot was able to look at open files and stub out a new service in my custom framework, including adding tests. It’s not a multiplier but it certainly helps
rileymichael: most frameworks have CLIs / IDE plugins that do that same deterministically. i've built many in house versions for internal frameworks over the years. if you were writing boilerplate, it's not because that was the only option until now
animal531: I use it near daily and there is definitely a positive there, BUT it's nothing like what the OP statement would make it out to be.

If it is writing both the code and the tests then you're going to find that its tests are remarkable, they just work. At least until you deploy to a live state and start testing for yourself; then you'll notice that it's mostly only testing the exact code that it wrote. It's not confrontational or trying to find errors, and it already assumes that it's going to work. It won't ever come up with the majority of breaking cases that a developer will by itself; you will need to guide it. Also, while fixing those, the odds of introducing other breaking changes are decent, and after enough prompts you are going to lose coherency no matter what you do.

It definitely makes a lot of boilerplate code easier, but what you don't notice is that it's just moving the difficult-to-find problems into hidden new areas. That fancy code that it wrote maybe doesn't take any building blocks, lower levels such as database optimization etc., into account. Even for a simple application, a half-decent developer can create something that will run quite a bit faster. If you start bringing these problems to it then it might be able to optimize them, but the amount of time that's going to take is non-negligible.

It takes developers time to sit on code, learn it along with the problem space, and figure out how to tie them together effectively. If you take that away there is no learning; you're just the monkey copy-pasting the produced output from the black box and hoping that you get a result that works. Even worse, every step you take doesn't bring you any closer to the solution; it's pretty much random.

So what is it good for? It can read, "understand", translate, write and explain things to a sufficient degree much faster than us humans.

But if you are (at the moment) trusting it with anything past the method level for code, then you're just shooting yourself in the foot; you're just not feeling the pain until later. In a day you can have it generate, for example, a whole website, backend, db etc. for your new business idea, but that's not a "product"; it might as well be a promotional video that you throw away once you've used it to impress the investors. For now that might still work, but people are already catching on and beginning to wise up.
anon7725: This is the most insightful comment in the thread.
KellyCriterion: Third option: Not hiring someone?
epicureanideal: And for a lot of AI transformation tasks, for a long time I've been using even clever regex search/replace, and with a few minutes of small adjustment afterward I have a 100% deterministic (or 95% deterministic and 5% manually human reviewed and edited) process for transforming code. Although of course I haven't tried that cross-language, etc.

And of course, we didn't see a massive layoff after the introduction of, say, StackOverflow, or DreamWeaver, or jQuery vs raw JS, Twitter Bootstrap, etc.
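For readers unfamiliar with the kind of deterministic rewrite being described, here is a minimal sketch. The migration itself (an old `log.warn` call renamed to `log.warning`) is a hypothetical example, not from the comment; the point is that the transform is repeatable and its diff is reviewable.

```python
import re

# Hypothetical migration: rewrite calls like log.warn("msg") to a newer
# log.warning("msg") API across a codebase, deterministically.
# \b prevents false matches on names like logger.warn_count.
PATTERN = re.compile(r"\blog\.warn\(")

def migrate(source: str) -> str:
    """Apply the rewrite; running it twice gives the same result."""
    return PATTERN.sub("log.warning(", source)

before = 'log.warn("disk full")\nlogger.warn_count += 1\n'
print(migrate(before))
```

Run over a tree of files, the 5% needing human review is whatever the pattern could not safely express, which is exactly the split the comment describes.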
rileymichael: structural search and replace in intellij is a superpower (within a single repo). for polyrepo setups, openrewrite is great. add in an orchestrator (simple enough to build one like sourcegraph's batch changes) and you can manage hundreds of repositories in a deterministic, testable way.
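The orchestrator idea above can be sketched in a few lines. This is a toy, not Sourcegraph's or OpenRewrite's actual API: it applies one deterministic transform to every Python file in a list of checkouts and reports what changed per repo, so each repo's own tests can gate the change.

```python
import pathlib

def apply_batch_change(repos, transform):
    """Apply `transform` (str -> str) to every .py file in each checkout.

    Returns {repo: [changed filenames]}; a real orchestrator would feed
    this into per-repo commits, PRs, and CI.
    """
    results = {}
    for repo in repos:
        changed = []
        for path in sorted(pathlib.Path(repo).rglob("*.py")):
            old = path.read_text()
            new = transform(old)
            if new != old:
                path.write_text(new)
                changed.append(path.name)
        results[repo] = changed
    return results
```

Because the transform is a plain function, the whole batch change is testable offline before it touches hundreds of repositories.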
marcyb5st: If I were to run the company, I'd potentially mostly focus on better products, with the exception of firing those that don't adopt the technology.

If it is a big company, the answer is and will always be: whatever makes the stock price rise the most.
elvis10ten: > with the exception of firing those that don't adopt the technology.This is a crazy take. Even if said people are matching or exceeding the outcome of those using the technology?I’m not in this group. But the closest analog to what you are saying is firing people for not using a specific IDE.
bendmorris: It's disappointing that this is clearly being downvoted due to disagreement - it's a valid perspective. We have very little evidence of the overall impact of aggressively generating code "in the wild" and plenty of bad examples. No one knows what this ends up looking like as it continues to meet reality but plenty are taking a large productivity improvement as a given.
lordkrandel: Why do people keep talking about AI as if it actually worked? I still don't see ANY proof that it doesn't generate a totally unmaintainable, insecure mess that, since you didn't develop it, you don't know how to fix. Like running an F1 Ferrari on a countryside road: useless and dangerous.
tyleo: Because it's working for a lot of people. There are people getting value from these products right now. I'm getting value myself and I know several other folks at work who are getting value.

I'm not sure what your circumstances are, but even if it's not true for you, it's true for many other people.
pydry: It's interesting that the people IRL I encounter who "get the most value" tend to be the devs who couldn't distinguish well written code from slop in the first place.

People online with identical views to them all assure me that they're all highly skilled, though.

Meanwhile I've been experimenting using AI for shopping, and all of them so far are horrendous. Can't handle basic queries without tripping over themselves.
hermannj314: If you are a 2600 chess player, a bot that plays 1800 chess is a horrendous chess player.

But you can understand why all the 1700-and-below chess players say it is good and that it is making them better when they use it for eval?

Don't worry, AI will replace you one day; you are just smarter than most of us, so you don't see it yet.
HoyaSaxa: I think most public companies will take the short term profits, and startups will be given a huge opportunity to take market share as a result.

At my company, we are maintaining our hiring plan (I'm the decision maker). We have never been more excited at our permission to win against the incumbents in our market. At the same time, I've never been more concerned about other startups giving us a real run. I think we will see a bit of an arms race for the best talent as a result.

Productivity without clear vision, strategy and user feedback loops is meaningless. But those startups that are able to harness the productivity gains to deliver more complete and polished solutions that solve real problems for their users will be unstoppable.

We've always seen big gains by taking a team of say 8 and splitting it into 2 teams of 4. I think the major difference is that now we will probably split teams of 4 into 2 teams of 2 with clearer remits. I don't want them to necessarily deliver more features. But I do want them to deliver features with far fewer caveats at a higher quality and then iterate more on those.

Humans that consume the software will become the bottlenecks of change!
garciasn: My team has been all-in on AI-assisted development for the last ~9 months. We are flooding the top of the product funnel with TONS of new MVPs that are being deployed local > test > dev > staging > prod. We have shipped two entirely new products at a level we've never before been able to do, with more enjoyment and no additional staff.

But the most important thing is that we're now rolling CC out across the entire company. We want EVERYONE building tooling and sharing it for potential inclusion into the product funnel. What used to be a laborious process with SIGNIFICANT human capital limitations is now open to all, and we're rapidly developing features to roll out across the company and our clients that have direct and meaningful impact.

It's fascinating to watch as non-developers get EXCITED to build. At all of the companies I have worked at over my 25y career, the biggest barriers were always translating what was explained by an SME into something usable. Now the SMEs have the ability to play, build, and deliver something that is closer to what they had in their mind but weren't able to explain to others. This allows us to deliver outstanding things in a new way, faster than ever before.
jjmarr: It depends on the relative value of experience/skill to your team.If your team is "throw juniors into the enterprise boilerplate coal mine" and you expect talent to eventually quit, then laying people off might be the right move.If your team is "highly skilled devs try to invent new products", then you should focus on shipping more.
maccard: Have they? I've used tools that mostly do it, but they require manually writing templates for the frameworks. In internal apps my experience has been these get left behind as the service implementations change, and it ends up with "copy your favourite service that you know works".
rileymichael: > they require manually writing templates for the frameworks

the ones i've used come with defaults that you can then customize. here are some of the better ones:

- https://guides.rubyonrails.org/v3.2/getting_started.html#cre...
- https://hexdocs.pm/phoenix/Mix.Tasks.Phx.Gen.html
- https://laravel.com/docs/13.x/artisan#stub-customization
- https://learn.microsoft.com/en-us/aspnet/core/fundamentals/t...

> my experience has been these get left behind as the service implementations change

yeah i've definitely seen this, ultimately it comes down to your culture / ensuring time is invested in devex. an approach that helps avoid drift is generating directly from an _actual_ project instead of using something like yeoman, but that's quite involved
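The "in-house generator for an internal framework" the thread keeps mentioning can be tiny. Here is a toy sketch in the spirit of `rails generate` or `mix phx.gen`: a default stub that a team could customize, filled in from a name. The `Service` template is invented for illustration, not any real framework's output.

```python
from string import Template

# A customizable default stub, in the spirit of framework generators.
SERVICE_STUB = Template('''\
class ${name}Service:
    """Auto-generated service stub for ${name}."""

    def handle(self, request):
        raise NotImplementedError
''')

def generate_service(name: str) -> str:
    """Render the boilerplate for a new service, deterministically."""
    return SERVICE_STUB.substitute(name=name)

print(generate_service("Billing"))
```

Generating from a template like this (or, better, from a real reference project) is what keeps the output deterministic, which is the contrast being drawn with LLM-generated boilerplate.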
roncesvalles: The analogy of thinking about coding AI like it's chess AI is terrible. If chess AI was at the level of coding AI, it wouldn't win a single game.This is actually a big reason why execs are being misinformed into overestimating LLM abilities.LLM coding agents alone are not good enough to replace any single developer. They only make some dev x% faster. That dev who is now x% faster may then allow you to lay off another dev. That is a subtle yet important difference.
kakacik: Don't break the gravy train! These discussions here are beautiful echo chambers.

I feel like most folks commenting uncritically here about the second coming of Jesus must work in some code sweatshops, churning out eshops or whatever is in vogue today quickly and moving on, never looking back, never maintaining and working on their codebases for a decade+. Where I existed my whole career, speed of delivery was never the primary concern; quality of delivery (which everybody unanimously complains about one way or another with LLMs) was much more critical, and that's where the money went. Maybe I just picked the right businesses, but then again I worked for an energy company, insurance, telco, government, army, municipality, 2 banks and so on.

If I were more junior, I would feel massive FOMO from reading all this (since I use it so far just as a quicker google/stackoverflow and for some simpler refactoring, but oh boy is it sometimes hilariously wrong). I am not, thus I couldn't care less. Node.js craze, 10x stronger, not seeing the forest for the trees.
maccard: > try it, you might actually be amazed.

I keep being told this and the tools keep falling at the first hurdle. This morning I asked Claude to use a library to load a toml file in .net and print a value. It immediately explained how it was an easy file format to parse and didn't need a library. I undid, went back to plan mode and it picked a library, added it and claimed it was done. Except the code didn't compile.

Three iterations later of trying to get Claude to make it compile (it changed random lines around the clear problematic line), I fixed it by following the example in the readme, and told Claude.

I then asked Claude to parse the rest of the toml file, whereby it blew away the compile fix I had made.

This isn't an isolated experience - I hit these fundamental blocking issues with pretty much every attempt to use these tools that isn't "implement a web page", and even when it does that it's not long before it gets tangled up in something or other…
wrs: I'm honestly baffled by this. I don't want to tell you "you're holding it wrong", but if this is your normal experience there's something weird happening.

Friday afternoon I made a new directory and told Claude Code I wanted to make a Go proxy so I could have a request/callback HTTP API for a 3rd party service whose official API is only persistent websocket connections. I had it read the service's API docs, engage in some back and forth to establish the architecture and library choices, and save out a phased implementation plan in plan mode. It implemented it in four phases with passing tests for each, then did live tests against the service in which it debugged its protocol mistakes using curl. Finally I had it do two rounds of code review with fresh context, and it fixed a race condition and made a few things cleaner. Total time, two hours.

I have noticed some people I work with have more trouble, and my vague intuition is it happens when they give Claude too much autonomy. It works better when you tell it what to do, rather than letting it decide. That can be at a pretty high level, though. Basically reduce the problem to a set of well-established subproblems that it's familiar with. Same as you'd do with a junior developer, really.
shireboy: Similar. I regularly use Github copilot (with claude models sometimes) and it works amazingly. But I see some who struggle with them. I have sort of learned to talk to it, understand what it is generating, and routinely use to generate fixes, whole features, etc. much much faster than I could before.
JakeStone: If Claude ends up grabbing my C# TOML library, in my defense, I wrote it when the TOML format first came out over a dozen years ago, and never did anything more with it. Sorry.
krastanov: This is fascinating to me. I completely believe you and I will not bother you with all the common "but did you try to tell it this or that" responses, but this is such a different experience from mine. I did the exact same task with claude in the Julia language last week, and everything worked perfectly. I am now in the habit of adding "keep it simple, use only public interfaces, do not use internals, be elegant and extremely minimal in your changes" to all my requests or SKILL.md or AGENTS.md files (because of the occasional failure like the one you described). But generally speaking, such complete failures have been so very rare for me, that it is amazing to see that others have had such a completely different experience.
jayd16: You say it doesn't fail, but you also mention all these workarounds you know and try... sounds like it fails a lot, but your tolerance is different.
zorak8me: Providing instruction and context doesn’t seem like a “workaround”.
ritzaco: How much software do you need, and how many computers are there to run it on?

After the combine harvester, we produced the same food with fewer people.

At the moment, it seems like hardware is the constraint. Companies don't have access to enough machines or tokens to keep all their devs occupied, so they let some go. Maybe that changes, maybe we already have too much software?

Personally I think we already had too much software before LLMs, and even without them many devs would have found themselves jobless as startups selling to startups selling to startups failed and we realized (again) that food, shelter, security, education etc. are 'real' industries, but software isn't one if it's not actively helping one of those.
maccard: Sorry - I'm aware that rails/dotnet have these built into visual studio and co, but my point was about our custom internal things that are definitely not IDE integrated.

> it comes down to ensuring time is invested in devex

That's actually my point - the orgs haven't invested in devex, but that didn't matter because copilot could figure out what to do!
roncesvalles: Did you use the best model available to you (Opus 4.6)? There is a world of difference between using the highest model vs the fast one. The fast ones are basically useless and it's a shame that all these tools default to it.
fweimer: To make this more confusing, there is a fast mode of Opus 4.6, which (as far as I understand it) is supposed to deliver the same results as the standard mode. It's much more expensive, but advertised as more interactive.
hexer303: I find that arguments about whether or not AI boosts productivity are not very productive.

The more grounded reality is that AI coding can be a productivity multiplier in the right hands, and a significant hindrance in the wrong hands.

Somewhere there exists a happy medium between vibe coding without ever looking at the code, and hand-writing every single line.
maccard: > I have noticed some people I work with have more trouble, and my vague intuition is it happens when they give Claude too much autonomy

What's giving too much autonomy about "Please load settings.toml using a library and print out the name key from the application table"? Even if it's under-specified, surely it should at least leave it _compiling_?

I've been posting comments like this monthly here; my experience has been consistently this with Claude, opencode, antigravity, cursor, and using gpt/opus/sonnet/gemini models (latest at time of testing). This morning was opus 4.6.
jmull: > or the boilerplate, libraries, build-tools, and refactoring

If your dev group is spending 90% of their time on these... well, you'd probably be right to fire someone. Not most of the developers, but whoever put in place a system where so much time is spent on overhead/retrograde activities.

Something that's getting lost in the new, low cost of generating code is that code is a burden, not an asset. There's an ongoing maintenance and complexity cost. LLMs lower maintenance cost, but if you're generating 10x the code you aren't getting ahead. Meanwhile, the cost of unmanaged complexity goes up exponentially. LLMs or no, you hit a wall if you don't manage it well.
ekkeke: There definitely isn't enough _specialised_ software. If you look at engineering tools, the open source/free stuff is just not good, and the professional software packages run in the tens of thousands per user. I'm eagerly awaiting the day we get competitive open source alternatives to simulink, HFSS, autocad, solidworks, LT Spice, etc.

Unfortunately this kind of software needs specialised domain knowledge to produce that AI doesn't have yet, but when (if) it arrives I hope we see strides forward in hardware engineering productivity.
scuderiaseb: I have found it very useful in pointing me in the right direction, exploring codebases and writing up boilerplate or other scaffolding. However I do need to review and test, and at the end of the day it's me who's responsible for the code. So I only make it write parts of the code at a time, not one-shotting a new GitHub company.
ppqqrr: false dichotomy. neither company will make it. the winner will be some solo dev with a singular vision and ruthless dedication to quality.

"teams" do not have visions, individuals do. the more people are involved in a product, the blurrier the vision becomes, and the more insidious the vested interest of the organization to profit from the user rather than align themselves with the user.

solo devs do not have such problems, because they're no different from users, aside from the fact that they were users before the software existed. they make the software because they want to use it, and there is no amount of payroll employees that can replicate the quality and innovation that results from this simple, genuine self-interest.