Discussion
VS Notes
CraftingLinks: I see whole teams pushed by C-level going all in with spec-driven + TDD development. The devs hate it because they are literally forbidden to touch a single line of code, but the results speak for themselves: it just works, and the pressure has shifted to the product people to keep up. The whole tooling to enable this had to be worked out first. It's all Cursor and heavy use of a tool called Speckit, connected to Notion to pump out documentation, and Jira.
bensyverson: This "slot machine" metaphor is played out. If you're just entering a coin's worth of information and nudging it over and over in the hopes of getting something good, that's a you problem, not a Claude problem. If, on the other hand, you treat it like a hyper-competent collaborator, and follow good project management and development practices, you're golden.
james2doyle: *hyper-competent collaborator who may completely make things up occasionally and will sometimes give different answers to the same question*
jsLavaGoat: Everything is "fast, cheap, good--pick two." This is no different.
Retr0id: > But now either the AI can handle it or it can pretend to handle it. Frankly it's pretending both times, but often it's enough to get the result we need.
This has been how I think about it, too. The success rates are going up, but I still view the AI as an adversary that is trying to trick me into thinking it's being useful. Often the act is good enough to be actually useful, too.
mjburgess: The first anthropomorphization of AI which is actually useful.
Retr0id: It's not even an anthropomorphization, the reward function in RLHF-like scenarios is usually quite literally "did the user think the output was good"
simonw: Assigning work to an intern is gambling: they're inherently non-deterministic and it's a roll of the dice whether the work they do will be good enough or you'll have to give them feedback in order to get to what you need.
lunar_mycroft: 1. Interns learn. LLMs only get better when a new model comes out, which will happen (or not) regardless of whether you use them now.
2. Who here thinks that having interns write all/almost all of your code and moving all your mid-level and senior developers to exclusively reviewing their work and managing them is a good idea?
rustyhancock: Life is full of variable reward schemes. Probably why we evolved to be so enamoured by them. In a healthy environment, we are harmed more by being totally risk averse than by accepting risk as part of life and work.
samschooler: I think there are levels to this.
- One-shot or "spray and pray" prompt-only vibe coding: gambling.
- Spec-driven TDD AI vibe coding: more akin to poker.
- Normal coding (maybe with tab auto-complete): eating veggies/work.
I like to sit in the "spec driven" bucket. Notably, though, gambling has the massive downside of losing your entire life and life savings. The "vibe coding" bucket's worst case is being insufferable to your friends and family, wasting your time, and spending $200/month on a max plan.
parliament32: You remind me of those guys who swear they have a "system" at the casino.
rustyhancock: Life is full of variable reward schemes. Probably why we evolved to be so enamoured by them. Sometimes I think we put the cart before the horse: we gamble because evolution promotes that approach. Yes, I could go for the reliable option, but taking a punt is worth a shot if the cost is low. The cost of AI is low. What is a problem is people getting wrapped up in just one more pull of the slot machine handle. I use AI often, but fairly often I simply bin its response and get to work on my own. A decent amount of the time I can work with the response given to make a decent result. Sometimes, rarely, it gives me what I need right off the bat.
minimaxir: The gambling metaphor often applied to vibecoding implies that the outcome cannot be controlled or influenced, as with a slot machine. Opus 4.5 and beyond show that it not only can very much be influenced, but that it can also give better results more consistently with the proper checks and balances.
Retr0id: Poker is a skill-based game where your actions influence your success, but many people who play it are gambling.
bensyverson: And that's why poker is a poor metaphor for agentic coding.
mathrawka: As someone who has worked with interns for years: always expect feedback and iterations, and be surprised if they get it right the first time... which merits feedback as well! But it looks like the intern mafia is bombarding you with downvotes.
wagwang: > I divide my tasks into good for the soul and bad for it. Coding generally goes into good for the soul, even when I do it poorly.
Lmk how you feel when you're constantly building integrations with legacy software by hand.
thisisbrians: It is and will always be about: 1) properly defining the spec 2) ensuring the implementation satisfies said spec
ambicapter: Then pulling the lever until it works! You can also code up a little helper to continuously pull the lever until it works!
__MatrixMan__: Inductive reasoning of any kind (e.g. the scientific method) is gambling.
some_random: How often do you have to win before it's no longer gambling?
tonymet: we're winning so much we started complaining "I can't handle so much winning"
smlacy: I like the analogy, but which two is AI coding?
- Fast & Cheap (but not Good?): I wouldn't really say that AI coding is "cheap".
- Cheap & Good (but not Fast): again, not really "cheap".
- Fast & Good (but not Cheap): this seems like maybe where we're at? Is this a bad place?
tonymet: As always, scope the changes to no larger than you can verify. AI changes the scale, but not the strategy. Now you have more resources to test, to reduce permission scope, and to build a test bench & procedure. All of the excuses you once had for not doing the job right are now gone. You can write 10k+ lines of test code in a few minutes. What is the gamble? The old world was a bigger gamble.
RealityVoid: > literally forbidden to touch a single line of code.
That is extremely stupid. What does that ban get you? I react to this because a friend mentioned exactly this, and I was dumbfounded.
comboy: > That is extremely stupid. What does that ban get you?
Confidence in firing coders, I presume...
copypaper: You got to know when to Ship it,
Know when to Re-prompt,
Know when to Clear the Context,
And know when to RLHF.
You never trust the Output,
When you're staring at the diff view,
There'll (not) be time enough for Fixing,
When the Tokens are all spent.
LetsGetTechnicl: Yes, that's literally how LLMs work, they're probabilistic.
yoyohello13: I was just thinking about this. I was reading those tweets about the SV party where people were going home early to "check on their agents", or the "token anxiety" people are having over whether they are optimizing their agent usage. This is all giving me addiction vibes. Especially since at the end of the day it seems like there is not much to show for it.
rob_c: So. Is. Life. You've discovered probability; there was an 80% chance of that. Roll the dice and do not pass go. Again: the output from an LLM is a probable solution, not right, not wrong.
comboy: Fascinating how HN is torn about vibe coding still. Everybody pretty much agrees that it works for some use cases, yet there is a flamewar (I mean, cultured, HN-type one) every time. People seem to be more comfortable in a binary mindset.
zer00eyz: VIM vs Emacs vs IDE vs..., Tabs vs Spaces, Procedural vs OOP vs Functional. We love a good holy war for sure. The nuance is lost, and the conversations we should be having never happen (requirements, hiring/skills, developer experience).
CodingJeebus: Personally, I get a huge rush of dopamine seeing LLMs build out complex features very quickly, to the point that it will keep me up all night wanting to push further and further. That's where the gambling metaphor really resonates. It's not about whether or not the output is correct; I've been building software for many years and I know how to direct LLMs pretty well at this point. But I'm also an alcoholic in recovery and I know that my brain is wired differently than most. Using LLMs has tested my ability to self-regulate in ways that I haven't dealt with since I deleted social media years ago.
acedTrex: > Personally, I get a huge rush of dopamine seeing LLMs build out complex features very quickly
I don't think I've read a sentence on this website I can relate to less. I watch the LLM build things and it feels completely numb; I may as well be watching paint dry. It means nothing to me.
davidkhess: One way it works is if you think of cognitive debt as the "house". As in "the house always wins".
natpalmer1776: It also doesn't help that producing features is also wired to a sense of monetary compensation. More so if you're building a product to sell that might finally be your ticket to whatever your perception of socio-economic victory is.
CodingJeebus: That's definitely part of it, sure. I also just get a cosmic kick out of thinking about the possibilities that this technology unlocks, and that thinking can spiral in all sorts of unhealthy ways.
reaperducer: > it can give better results more consistently with the proper checks and balances.
You can get more consistent results from a slot machine with a bunch of magnets and some swift kicks. It's still gambling.
c_e: Everybody who's playing poker is gambling, skilled or not.
throwmeaway820: without a rigorous definition of "gambling", such discussions are pointless
vidarh: I was in a call just today where specs were presented as a new thing.
DiscourseFan: When code doesn't compile, it doesn't kill anyone. But if a Waymo suddenly veers off the road, it creates a real threat. Waymos had to be safer than real human drivers for people to begin to trust them. Coding tools did not have to be better than humans to be adopted first. It's entirely possible for a human to make a catastrophic error. I imagine in the future it will be more likely that a human makes such errors, just as it's more likely that a human will make more errors driving a car.
Verdex: My understanding is that waymo has gone on the record to say that they have human operators that remotely drive the vehicle in scenarios where their automated system is confused.Which I assert is semantically equivalent to saying: Human drivers (even when operating at the diminished capacity of not even being present in the car) are less likely to make errors driving a car than AIs.
watzon: I think this article makes a valid point. However, if AI coding is considered gambling, then being a project manager overseeing multiple developers could also be seen as a form of gambling to a certain degree. In reality, there isn't much difference between the two. AI models are non-deterministic, and humans are also non-deterministic. You could assign the same task to two different developers and end up with entirely different results.
post-it: > But this doesn't really resemble coding. An act that requires a lot of thinking and writing long detailed code.
Does it? It did in the past. Now it doesn't. Maybe "add a button to display a colour selector" really is the canonical way to code that feature, and the 100+ lines of generated code are just a machine-language artifact like binary.
> But it robs me of the part that's best for the soul. Figuring out how this works for me, finding the clever fix or conversion and getting it working. My job went from connecting these two things being the hard and reward part, to just mopping up how poorly they've been connected.
Skill issue. Two nights ago, I used Claude to write an iOS app to convert Live Photos into gifs. No other app does it well. I'm going to publish it as my first app. I wouldn't have bothered to do it without AI, and my soul feels a lot better with it.
bensyverson: So, indistinguishable from a human then
koolba: > When you're staring at the diff view,
Bold assumption that people are looking at the diffs at all. They leave that for their coworkers' agents.
nickjj: > properly defining the spec
Why do you often need to re-prompt things like "can you simplify this and make it more human-readable without sacrificing performance?" No amount of specification addresses this on the first shot unless you already know the exact implementation details, in which case you might as well write it yourself directly. I often have to put in a prompt like this 5-10 times before the code resembles something I'd even consider using as a first-draft base to refactor into something worthy of a git commit. I sometimes use AI for tiny standalone functions or scripts, so we're not talking about a lot of deeply nested complexity here.
giancarlostoro: There are two secret sauces to making Claude Code your b* (please forgive me, future AI overlords). One is to create a spec. The other is to not prompt merely "what" you want, but also HOW you want it done (you can get insanely detailed or stay just vague enough); in some cases the why is useful for it to know and understand, and sometimes WHO it's for as well. Give it the context you know. Don't know anything about the code? Ask it to read it, all of it; you've got 1 million tokens, go for it. I have one-shot prompted projects from empty folder to full-featured web app with accounts, login, profiles, you name it, insanely stable, maybe an oops here or there, but for a non-spec single-prompt shot, that's impressive. When I don't use a tool to handle the task management, I have Claude build up a markdown spec file for me and specify everything I can think of. Output is always better when you specify the technology and design patterns you want to use.
itsgrimetime: All of this new capability has made me realize that the reason I love programming _isn't_ the same as the OP's. I used to think (and tell others) that I loved understanding something deeply, wading through the details to figure out a tough problem. But actually, being able to will anything I can think of into existence is what I love about programming. I do feel for the people who were able to make careers out of falling in love with and getting good at picking problems & systems apart, breaking them down, and understanding them fully. I respect the discipline, curiosity, and intellect they have. But I am also elated with where things are at and where they're going. This feels absurd to say, but I finally feel like I'm _good_ at programming, which is insane, because I literally haven't written a line of code myself in months. Having tools that can finally match the speed my ideas come to me is intoxicating.
bluefirebrand: > but I finally feel like I'm _good_ at programming, which is insane, because I literally haven't written a line of code myself in months
This is exactly the sort of mentality that makes me hate this technology. You finally feel good at programming despite admitting that you aren't actually doing it. Please explain why anyone should take this seriously?
deepfriedrice: It's the perfect metaphor? Playing correctly/optimally is +EV. But nobody starts there, and many people don't ever get there.The main difference is that you're exploiting your own weaknesses, rather than others'. Limitations in typing speed, information gathering, pattern recognition.
m00x: AI coding is gambling on slot machines, managing developers is betting on race horses.
pdntspa: Because the programming is and was always a means to an end. Obsessing over the specific mechanical act of programming is missing the forest for the trees. I agree with GP that the speed with which I am able to execute my vision is exhilarating. It is making me love programming again. My side projects, which have been hanging on the wall for years, are actually getting done. And quickly! The actual act of keying in code is drudgery for me. I've written so much code in so many languages that it is hard not to hate them all. Why the fuck is it a hash in Ruby but a dict in Python? How the hell do I get the current unixtime in this language again?!? Who cares, let the AI handle it.
zzzeek: Coding with an LLM works if the model you are following is: you have the role of architect and/or senior developer, and you have the smartest junior programmer in the world working for you. You watch everything it does, check its conclusions, challenge it, call it out on things it didn't get quite right. It's really extremely similar to working with a junior programmer. So in this post, where does this go wrong?
> I am not your average developer. I've never worked on large teams and I've barely started a project from scratch. The internet is filled with code and ideas, most of it freely available for you to fork and change.
Because this describes a cut-and-paster, not a software architect. Hence the LLM is a gambling machine for someone like this, since they lack the wisdom to really know how to do things. There's of course a huge issue, which is: how are we going to get more senior/architect programmers into the pipeline if every junior is also doing everything with LLMs now? I can't answer that, and this might be the asteroid that wipes out the dinosaurs... but in the meantime, if you DO know how to write from scratch and have some experience managing teams of programmers, the LLMs are super useful.
hirako2000: I doubt gambling is in nature. Investments based on reason pay off; evolution selects for sensible moves. Humans invented gambling as a rigged game that mimics what's in nature, perverted for profit.
glial: Broadly speaking, gambling is just making decisions without knowing the future. It's everywhere.
underlipton: As a human, you generally have the opportunity to make decent headway in understanding the other humans you're working with and adjusting your instructions to better anticipate the outputs they'll return to you. This is almost impossible with AI because of a combination of several factors:
- You are not an AI and do not know how an AI "thinks".
- Even if you come to be able to anticipate an AI's output, you will be undermined by the constant and uncontrollable update schedule imposed on you by AI platforms. Humans only make drastic changes like this under uncommon circumstances, like when they're going through large changes in their life, not as a matter of course.
- However, without this update schedule, problems that were once intractable will likely stay so forever. Humans, on the other hand, can grow without becoming completely unpredictable.
It's a Catch-22. AI is way closer to gambling.
strangattractor: One size never fits all. I am old enough to remember what a game changer spreadsheets (VisiCalc) were. They made the personal computer into a Swiss Army knife for many people who could not justify investing large sums of money into software to solve a niche problem. Until that time, PCs simply were not a big thing. I believe AI will do something similar for programming. The level of complexity in modern apps is high and requires the use of many technologies that most of us cannot remotely claim to be expert in. Getting from an idea to a prototype will definitely be easier. Production code is another beast. Dealing with legacy systems etc. will still require experts, at least for the near future, IMHO.
MeetingsBrowser: You (in theory) have more control over the quality of the team you are managing than over the quality of the models you are using. And the quality of the code models put out is, in general, well below the average output of a professional developer. It is, however, much faster, which makes the gambling loop feel better. Buying and holding a stock for a few months doesn't feel the same as playing a slot machine.
PaulHoule: I think somebody like Nate Silver might say “everything is gambling” if you really pressed them.A big theme of software development for me has been finishing things other people couldn’t finish and the key to that is “control variance and the mean will take care of itself”Alternately the junior dev thinks he has a mean of 5 min but the variance is really 5 weeks. The senior dev has mean of 5 hours and a variance of 5 hours.
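The mean-vs-variance point can be made concrete with a quick simulation. This is only a sketch with made-up lognormal task times (the medians and sigmas below are illustrative assumptions, not data):

```python
import math
import random

def simulate_task_hours(median_hours: float, sigma: float, n: int = 10_000) -> list[float]:
    """Draw n lognormal task durations: the median sets the typical case,
    sigma sets how heavy the tail of blowups is."""
    rng = random.Random(42)  # seeded so the sketch is repeatable
    return [rng.lognormvariate(math.log(median_hours), sigma) for _ in range(n)]

# "Junior": claims 5-minute tasks (median ~0.08h) but with a huge tail.
junior = simulate_task_hours(median_hours=5 / 60, sigma=3.0)
# "Senior": 5-hour median, modest tail.
senior = simulate_task_hours(median_hours=5.0, sigma=0.5)

junior_mean = sum(junior) / len(junior)
senior_mean = sum(senior) / len(senior)
print(f"junior: median {sorted(junior)[len(junior) // 2]:.2f}h, mean {junior_mean:.2f}h")
print(f"senior: median {sorted(senior)[len(senior) // 2]:.2f}h, mean {senior_mean:.2f}h")
```

Despite the tiny median, the junior's mean ends up in the same league as the senior's or worse, because the mean of a heavy-tailed distribution is dominated by the blowups. That's the sense in which controlling variance lets the mean take care of itself.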
samschooler: I'm not saying I have a system. I'm saying there are levels to this stuff. It's not a binary "gambling" or "not gambling".
mkehrt: None of my side projects are things where I want the output. They're all things where I want to write the code myself so I understand it better. AI is antithetical to this.
pdntspa: All of my side projects scratch an itch, so I do want the output. There are not enough hours in the day for me to make all the things I want to make. Code is just the vessel, and one I am happy to outsource if I can maintain a high standard of work. It's a blessing to finally find a workflow that makes me feel like I have a shot at building most of the things I want to.
bigstrat2003: > Because the programming is and was always a means to an end.No. Programming is a specific act (writing code), and that act is also a means to an end. But getting to the goal does not mean you did programming. Saying "I'm good at programming" when you are just using LLMs to generate code for you is like saying "I'm good at driving" when you only ever take an Uber and don't ever drive yourself. It's complete nonsense. If you aren't programming (as the OP clearly said he isn't), then you can't be good at programming because you aren't doing it.
throw4847285: There are two major mistakes here. The first is equating human and LLM intelligence. Note that I am not saying that humans are smarter than LLMs. But I do believe that LLMs represent an alien intelligence with a linguistic layer that obscures the differences. The thought processes are very different. At top AI firms, they have the equivalent of Asimov's Susan Calvin trying to understand how these programs think, because it does not resemble human cognition despite the similar outputs. The second and more important is the feedback loop. What makes gambling gambling is that you can smash that lever over and over again and immediately learn if you lost or got a jackpot. The slowness and imprecision of human communication creates a totally different dynamic. To reiterate, I am not saying interns are superior to LLMs. I'm just saying they are fundamentally different. And, if we're being honest, the way people talk about interns is weirdly dehumanizing, and the fact that they are always trotted out in these AI debates is depressing.
SkyPuncher: Only if your AI coding approach is the slot machine approach. I've ended up with a process that produces very, very high quality outputs, often needing little to no correction from me. I think of it like an Age of Empires map. If you go into battle surrounded by undiscovered parts of the map, you're in for a rude surprise. Winning a battle means having clarity on both the battle itself and the risks next to the battle.
runarberg: I don't think so. A project manager can give feedback, train their staff, etc. An AI coding model is all you get, and you have to wait until your provider trains a new model before you might see an improvement.
bluefirebrand: Drawing parallels between AI and interns just shows you're a misanthrope. You should value assigning tasks to human interns more than to AI because they are human.
QuantumGood: Framing anything with a common blanket concept usually fails to apply the same framing to related areas. A lot of things include some gambling: you need to compare how much of what came before was 'gambling', how 'not using AI' is also 'gambling', etc. As @m00x points out, "coding is gambling on slot machines, managing developers is betting on race horses."
bazmattaz: Damn, this is so accurate. As a project manager turned product manager, this is so true. You need to estimate a project based on the "pedigree" of your engineers.
lokimoon: h1b coding is ignorance.
simonw: I don't know that the "humans learn, LLMs don't" argument holds any more with coding agents.Coding agents look at existing text in the codebase before they act. If they previously used a pattern you dislike and you tell them how to do differently, the next time they run they'll see the new pattern and are much more likely to follow that example.There are fancier ways of having them "learn" - self-updating CLAUDE.md files, taking notes in a notes/ folder etc - but just the code that they write (and can later read in future sessions) feels close-enough to "learning" to me that I don't think it makes sense to say they don't learn any more.
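The note-taking tricks described above are simple to wire up yourself. A minimal sketch, assuming a hypothetical NOTES.md that gets prepended to each prompt (the file name and both functions are illustrative, not any real agent's API):

```python
from pathlib import Path

def record_lesson(notes_path: Path, lesson: str) -> None:
    """Append a correction so future sessions will see it: the 'learning' step."""
    with notes_path.open("a", encoding="utf-8") as f:
        f.write(f"- {lesson}\n")

def build_prompt(notes_path: Path, task: str) -> str:
    """Prepend accumulated notes to the task, so the model reads past feedback."""
    notes = notes_path.read_text(encoding="utf-8") if notes_path.exists() else ""
    return f"Project notes:\n{notes}\nTask: {task}"
```

Whether this counts as "learning" is exactly the debate here: the weights never change, only the context the model is shown.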
bigstrat2003: It is a matter of fact that LLMs cannot learn. Whether it is dressed up in slightly different packaging to trick you into thinking it learns does not make any difference to that fact.
krupan: Good sir, have you heard the Good Word of the Waterfall development process? It sounds like that's what you are describing
NewsaHackO: I guess I agree with you, but I think the GP may have misspoken and meant he loves building software. It's sort of like the difference between knitting and making clothes. The GP likely loves making clothes in the abstract and realized that he won't have to knit anymore to do so. And he never really liked knitting in the first place, as it was just a means to an end.
krupan: I really hope that was your creativity and not AI
bigstrat2003: It's not cheap or good, it's just fast.
7777332215: The problem with AI coding is that you no longer own the foundational tools.
krupan: I asked an AI to play hangman with me and looked at its reasoning. It didn't just pick a secret word and play a straightforward game of hangman. It continually adjusted the secret word based on the letters I guessed, providing me the "perfect" game of hangman. Not too many of my guesses were "right" and not too many "wrong", and after a little struggle and almost losing, I won in the end. It wasn't a real game of hangman; it was flat-out manipulation, engagement farming. Do you think it's possible that AI does that in other situations?
bigstrat2003: Poker has elements of both luck and skill. The luck element + wagering money is what makes it gambling.
dolebirchwood: I have three side projects that revolve around taking public access data from shitty, barely usable local government websites, and then using that data to build more intuitive and useful UIs around them. They're portfolio pieces, but also a public service. I already know how to build all of these systems manually, but I have better things to do. So, hell yeah I'm just going to prompt my way to output. If the code works, I don't care how it was written, and neither do the members of my community who use my free sites.
FL4TLiN3: In my corner of the world, average software developers at Tokyo companies, not that many people are actually using Claude Code for their day-to-day work yet. Their employers have rolled it out and actively encourage adoption, but nobody wants to change how they work.This probably won't surprise anyone familiar with Japanese corporate culture: external pressure to boost productivity just doesn't land the same way here. People nod, and then keep doing what they've always done.It's a strange scene to witness, but honestly, I'm grateful for it. I've also been watching plenty of developers elsewhere get their spirits genuinely crushed by coding agents, burning out chasing the slot machine the author describes. So for now, I'm thankful I still get to see this pastoral little landscape where people just... write their own code.
MattGaiser: Different definitions of programming. OP defines it as getting the machine to do what he wants. You define it as the actual act of writing the detailed instructions.
bluefirebrand: It is very difficult to get the machine to do what you want without the detailed instructions. If you have an LLM generate the instructions, then the LLM is programming; you're just a "prompter" or something, not a programmer.
rvz: > AI models are non-deterministic, and humans are also non-deterministic. You could assign the same task to two different developers and end up with entirely different results.
Except one can explain themselves (humans), and their actions can be held to account in the case of any legal issue, whereas an AI cannot, making such an entity completely unsuitable for high-risk situations. This typical AI booster comparison has got to stop.
tossandthrow: Love that you needed to make it clear that it is humans who can explain themselves... Employees can only be held accountable in cases of severe malice. There is a good chance that the person actually responsible (e.g. the CEO or someone delegated to be responsible) will soon prefer to have AIs do the work, as their quality can be quantified.
cko: What is it with you guys and stallions?
Retr0id: On a long enough timeframe, the luck averages out.
orsorna: Well, for one, programming actually sucks. Punching cards sucks. Copywriting sucks. Why? Implementation for the sake of implementation is nothing more than self-gratifying, and sole focus on it is an academic pursuit. The classic debate over which programming language is better is an argument about the best way to translate human ideas of logic into something that works. Sure, programming is fun, but I don't want to do it. What I do want to do is transform data or information into other kinds of information; computing is a very, very convenient platform for that, and programming allows manipulation of a substrate to perform such transformations. I agree with OP because the journey itself rarely helps you focus on system architecture, deliverable products, and how your downstream consumers use your product. And not just product in the commercial sense, but FOSS stuff or shareware I slap together because I want to share a solution to a problem with other people. The gambling fallacy is tiresome to someone who, at least I believe, can question the bullshit models try to do sometimes. It is very much gambling for CEOs and idea men who do not have a technical floor from which to question model outputs. If LLMs were /slow/ at getting a working product together, even combined with my human judgement, I wouldn't use them. So, when I encounter someone who doesn't pin value to building something that performs useful work, only to the actual journey of it, regardless of the usefulness of said work, I take them as seriously as an old man playing with hobby trains. Not to disparage hobby trains, because model trains are awesome, but they are hubris.
munk-a: > Well for one, programming actually sucks. Punching cards sucks. Copywriting sucks.
There's a significant difference between past software advancements and this one. When we previously reduced the manual work of developing software, we did it by empowering the language we define our logic within, so that each statement from a developer covered more conceptual ground and fewer statements were required to solve our problems. Software became composed of fewer, more significant statements that individually carried more weight. The LLM revolution has actually increased code bloat at the level humans are (probably; get to that in a moment) meant to interact with it. It is harder to comprehend code written today than code written in 2019, and that's an extremely dangerous direction to move in. To that earlier marker: it may be that we're thinking about code wrong now, and that software, as we're meant to read it, exists at the prompt level. Maybe we shouldn't read or test the actual output but instead read and test the prompts used to generate that output; that'd be more in line with previous software advancements, and it would present an astounding leap forward in clarity. My concern with that line of thinking is that LLMs (at least the ones we're using right now for software dev) are intentionally non-deterministic, so a prompt evaluated multiple times won't resolve to the same output. If we pushed in this direction of deterministic prompt evaluation, then I think we could really achieve a new, safe level of programming. But that doesn't seem to be anyone's goal, and if we don't push in that direction, then prompts are a way to efficiently generate large amounts of unmaintained, mysterious and untested software that won't cause problems immediately... but absolutely does cause problems in a year or two when we need to revise the logic.
Terr_: I'd emphasize that prompting LLMs to generate code isn't just metaphorical gambling in the sense of "taking a risk", the scary part is the more-literal gambling via addictive behaviors, and how it affects the way the user interacts with the machine and interacts with the world.Heck, this style of gambling-interaction also offers a parasocial relationship at the same time! Plopping tokens into a slot-machine which also projects a holographic "friend" with "emotional support" would fit perfectly in any cyberpunk dystopia.
interestpiqued: I think AI literally makes even being wrong feel like getting something done. And that is the addictive part for people.
yoyohello13: I think the addiction angle seems to make AI coding more similar to gambling. Some people seem to be disturbingly addicted to agentic coding. Much more so than traditional programming. To the point of doing destructive things like waking up in the middle of the night to check agents. Or giving an agent access to their bank account.
deadbabe: I know at least one case where the obsession with agents ruined a marriage.
Obscurity4340: Would you mind sharing some of your findings?