Discussion
The Future of Everything is Lies, I Guess: Work
hoppp: Unavailable Due to the UK Online Safety Act
basilikum: https://web.archive.org/web/20260414151754/https://aphyr.com...
greatpost: Thank you for this, aphyr. My one ask: people seem to put “CEOs” on a pedestal any time things come up, like they’re an alien life form and oh no, they’re going to do something terrible. There are good company executives and shitty ones. You should try to start a company and see if you can be one of the better ones.
raddan: Cue very tiny violin.
Papazsazsa: previously: https://news.ycombinator.com/item?id=47754379
coldtea: > There are good company executives and shitty ones.

For the company's bottom line, maybe. For humanity, there are no good ones.
nancyminusone: When companies do something terrible (and they do, all the time) who are you going to blame for it? It's not at all surprising that CEOs have earned the reputation they have.
nothinkjustai: Thank you Aphyr, your writing really is great. Comparing vibe coding to witchcraft is hilarious and also quite apt. I’d love it if we collectively decided to shift to those terms. We aren’t “vibe coding” anymore, we’re “conjuring”, with our spell books and incantations.
moussore: Can the mods please remove a lot of these comments? Too many bot replies
micromacrofoot: how can you tell?
bbg2401: The author appears to be under the misapprehension that a personal blog with a comment section is impacted by the act.
MarkusQ: Misapprehension? If so, they aren't the only one: https://www.theregister.com/2025/02/06/uk_online_safety_act_...
vegancap: How come this is blocked in the UK? :S
Jtarii: I think he is trying to make some misguided political statement.
simianwords: No, you don’t have to review every single line of code produced by AI out of fear for security. This is quite exaggerated, and I think the author is biased by his own field.
Devasta: Why wouldn't it be?
aphyr: I am, oddly enough, the chief executive officer of two (trivially small) tech companies.
theredleft: Cheers. I think you're doing a good job and ruffling some feathers here! Your content has been great.

I highly recommend reading Marx. Your writing touches on related Marxist topics like the 'Fetishism of Commodities' (Software as Witchcraft) and the Labor Theory of Value.
kentm: His reasoning doesn't seem like a political statement: https://news.ycombinator.com/item?id=47754379#47757803

That seems very practical and well-reasoned to me.
DonaldPShimoda: > people seem to put “CEOs” on a pedestal any time things come up, like they’re an alien life form

Might I suggest a viewing of the 2025 film "Bugonia"?
evan_a_a: spoilers
mitthrowaway2: Well for one, the commenter literally called themselves "no think just AI".
dlev_pika: I think I’ve seen this article posted every day for the past week or so
barbazoo: > I continue to write all of my words and software by hand, for the reasons I’ve discussed in this piece—but I am not confident I will hold out forever.

There it is, an actual em-dash in the wild, written by hand.
buildbot: I love the analogy of AI coding as witchcraft! It’s very accurate to how working with these tools feels. At one point I was forced to invoke a “litany against stubbing” in a loop to make Claude Code actually implement a Renode setup for some firmware. That worked really well.

It feels like Hexing the Technical Interview come to real life ;)
Aurornis: Class warfare generalizations have become the safe outlet for internet rage because going after CEOs and billionaires is the most “punching up” construction that is generally relatable.

An unintended side effect I’ve noticed is that it normalizes the bad behavior of CEOs for those who consume a lot of “CEOs bad” grist (Reddit, Threads, even Hacker News). When someone, usually early career, takes a job with a bad CEO after years of reading “CEOs bad” content online, they can go into a learned-helplessness mode because they think the behavior they’re seeing is normal. They don’t believe changing jobs would help, because social media taught them that their CEO’s bad behavior is the norm.

This has become a frequent topic in a rotational mentorship program where I volunteer: early-career folks join some toxic startup and stay because the internet told them all CEOs are like this. We have to shake them free from those ideas and get them to realize that there are good and bad companies out there and they have options.
dlev_pika: “No war but class war” rings as true in 2026 as it did 40 years ago
jerf: The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve. You can find evidence for both. LLMs probably can't get another 10 times better. But then, almost literally at any minute, someone could come up with a new architecture that can be 10 times better with the same or fewer resources. LLMs strike me as still leaving a lot on the table.

If we're nearing the top of a sigmoid curve and are given 10-ish years at least to adapt, we probably can. Advancements in applying the AI will continue, but we'll also grow a clearer understanding of what current AI can't do.

If we're still at the bottom of the curve and it doesn't slow down, then we're looking at the singularity. Which, I would remind people, in its original and generally better formulation is simply an observation that there comes a point past which you can't predict at all. ("Rapture of the Nerds" is a very particular possible instance of the unpredictable future; it is not the concept of the "singularity" itself.) Who knows what will happen.
nostrademons: Somewhere around 2005-2007, when people were wondering if the Internet was done, PG was fond of saying "It has decades to run. Social changes take longer than technical changes."

I think we're at a similar point with LLMs. The technical stuff is largely "done": LLMs have closer to 10% than 10x headroom in how much they will technologically improve, we'll find ways to make them more efficient and burn fewer GPU cycles, and the cost will come down as more entrants mature.

But the social changes are going to be vast. Expect huge amounts of AI slop and propaganda. Expect white-collar unemployment as execs realize that all their expensive employees can be replaced by an LLM, followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off. Expect the Internet as we loved it to disappear, if it hasn't already. Expect new products or networks to arise that are less open and so less vulnerable to the propagation of AI slop. Expect changes in the structure of governments. Mass media was a key element in the formation of the modern nation state; mass cheap fake media will likely lead to its fragmentation, as any old Joe with a ChatGPT account can put out mass quantities of bullshit. Probably expect war as people compete to own the discourse.
faangguyindia: We are at the bottom. It's just the start. In AI terms, we are in the pre-Pentium 4 era.
fnimick: And you have evidence as basis for this very confident statement... where?
faangguyindia: Intuition. It comes from the spiritual awakening and being aware of your consciousness. Only time will prove what turns out to be right.
sophacles: You worship the AI?
echelon: > The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve.

Even using the models we have today, we have revolutionized VFX, video production, and graphic design. Similarly, many senior software engineers are reporting 2-10x productivity increases.

These tools are some of the most useful tools of my career. I don't even think the general consumer public needs "AI" in their products. If we just create control surfaces for experts to leverage, harnessing the speed-up to shape and control the outcomes, we're going to be in a very good spot.

These alone will have ripple effects throughout the economy and innovation. We've barely begun to tap the benefits we already have. We don't even need new models.
monooso: Yes, misapprehension.According to the Ofcom regulation checker [1] (linked to by The Register article), the Online Safety Act does not apply to this content.Here's the most pertinent section (emphasis mine):> Your online service will be exempt if... Users can only interact with content generated by your business/the provider of the online service. Such interactions include: comments, likes/dislikes, ratings/reviews of your content including using emojis or symbols. For example, this exemption would cover online services where the only content users can upload or share is comments on media articles you have published...[1]: https://ofcomlive.my.salesforce-sites.com/formentry/Regulati...
john_strinlai: Is this legal advice you are offering, as someone practicing law in the UK? Because you are all over this thread stating your opinion very confidently.

(Conveniently, there is no risk to yourself if you happen to be wrong or misinformed.)
monooso: No, I'm not offering legal advice, and neither am I stating an opinion. I'm simply quoting Ofcom, the regulatory body responsible for overseeing this law.
itissid: For anyone who has not read the cockpit recording of Air France 447, I would encourage them to [1]. It is simply a jaw-dropping study in how things go wrong so fast — a risk with AI we have barely begun to acknowledge, let alone regulate, as a community.

[1] https://tailstrike.com/database/01-june-2009-air-france-447/
tencentshill: > My

And who are you? An account created for one post? There is a pattern of green accounts with usernames vaguely related to the subject matter of their comments.
john_strinlai: > And who are you?

Are you expecting them to reply with their full government name or something? If you think they are a bot, flag it. If you don't like the comment, downvote it.
tossandthrow: You are very strong on the "slop" bias. Why?

In managing a large to enterprise-sized code base, I experience the opposite: I can guarantee a much more homogeneous quality across the code base. It is the opposite of slop I am seeing, and at a lower cost.

Today, I literally made a large and complex migration of all of our endpoints. It took AI 30 minutes, including all frontends using these endpoints. Works flawlessly; debt principal down.
chaps: Which company do you work at so we can avoid your migrated endpoints?
tossandthrow: Wtf. You don't even know what the migration was about?
hn_throwaway_99: > Somewhere around 2005-2007, when people were wondering if the Internet was done

Literally who wondered that? Drives me nuts when people start off an argument with an obvious strawman. I remember the period of 2005-2007 very well, and I don't remember a single person, at least in tech, thinking the Internet was done. I don't know, maybe some ragebait articles were written about it, but being knee-deep in web tech at that time, I remember the general feeling was that it was pretty obvious there was tons to do. E.g., we didn't necessarily know what form mobile would take, but it was obvious to most folks that the tech was extremely immature and that it would have a huge impact on the Internet as it progressed. That's just one example; social media was still in its nascent stages then, so it was obvious there would be a ton of work around that as well.
chaps: I mean, I'm always down for learning something new. But I hope what I learn includes the name of the company I'd like to avoid.
tossandthrow: Your tone is in conflict with the statement that you are curious.
chaps: It's because you're deflecting. :)
apsurd: One point: yes, you're speaking from the power position. God-mode over a fleet of minions has always been an engineer's wet dream. That's not even bad per se. It's the collateral damage downstream that's at issue. Maybe you don't see any damage, but that's largely the point. Is it really up to you to say?
tossandthrow: What is the collateral damage? In ensuring that a bunch of endpoints use the same structure using LLMs?
apsurd: Let's not debate whether it's possible to make very large, very safe changes. It is possible that you did that.

This is about "slop bias". I'd wager that empowering everyone, especially power positions, to ship 50x more code will produce more code that is slop than not. You strongly oppose this because it's possible for you to update an API?

I'm stuck on the power-position thing because I'm living it. I'm pro-AI, but there are AI-transformation waves coming in, mandated top down, and from a green-field position it all makes sense and everyone is rocking. Maintenance of all kinds is separate, and the leaders and implementors don't pay this cost. Maybe AI will address everything at every level. But those imposing this world assume that to be true, while it's the line engineers and sales and customer service reps who will bear the reality.
vharuck: I agree with the gist of your points, but not much with these two:

> followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off.

These will be rare boutique affairs. Based on how mass production and cheap shipping played out, most people value price over quality. The economy will rearrange itself around those savings, making boutique products and services expensive.

> mass cheap fake media will likely lead to its fragmentation as any old Joe with a ChatGPT account can put out mass quantities of bullshit.

We have this today. And that's not a "same as it ever was" dismissal. Today, there are a lot of terminally online people posting the equivalent of propaganda (and actual propaganda). Social media pushes hot takes in audiences' faces, a portion of them reshare it, and it spreads exponentially. The only limitation on propaganda today is how much time the audience spends staring at the "correct" content provider.
intended: Does Aphyr give himself a limit of 6 semicolons? If their editor returns, will this count drop to 0?

(And before anyone brings pitchforks out, this is what they wrote in a previous article:

> “Cool it already with the semicolons, Kyle.” No. I cut my teeth on Samuel Johnson and you can pry the chandelierious intricacy of nested lists from my phthisic, mouldering hands. I have a professional editor, and she is not here right now, and I am taking this opportunity to revel in unhinged grammatical squalor.)

My life was made poorer for knowing that semicolons are apparently a sin, but richer for the rebellion.
nostrademons: If you were in tech in 2005-2007, you were part of a small minority of the general population. It often didn't feel like a small minority because, well, you knew all those other people on the Internet, but that's a pretty strong selection bias.

There is, of course, the Paul Krugman quote from 1998 that by 2005 the Internet would be no more important than a fax machine. [1] Here's Wired in 2007 saying, in reference to Facebook, "no company in its right mind would give it a $15 billion valuation". [2]

I remember, being at Google in ~2011, we used to laugh at the Wall Street analysts because they would focus on CPC numbers to forecast a valuation, which is important only if the number of clicks is remaining constant. We knew, of course, that total Internet usage was still growing quite rapidly and that queries had increased roughly 4x over the 2009-2013 timeframe.

And a lot of people will say "If you're so smart, why aren't you rich?", and I'll point out that many people who assumed the Internet had lots of room to grow in 2005-2007 did end up very rich. Google stock has increased roughly 20x since 2007 (and 40x from its 2009 lows). Meta is now worth $1.6T, a 100x increase over the $15B valuation that everyone thought was insane in 2007. Amazon is also up about 100x. It would not be possible to take the other side of the trade and make these kinds of profits if the majority of people did not think the Internet was largely over.

[1] https://www.snopes.com/fact-check/paul-krugman-internets-eff...

[2] https://www.wired.com/2007/10/facebook-future/
drivebyhooting: In the case of UBI, how would we differentiate between a previously highly paid professional (SWE, lawyer, author) and a pauper (janitor, car washer, unemployed)?

It’s only fair that they would receive the same amount. But then how can the former category continue to fulfill their obligations?
stevenally: > But then how can the former category continue to fulfill their obligations?

They can't. Just like the steel workers who lost their jobs in the 1970s.
intended: So I’ve been conducting an unscientific series of interviews to understand what is actually going on with AI and productivity. I’ve spoken to everyone from coders, journalists, analysts, and real estate people to finance folk, policy folk, and media people.

The responses range from singularity and economic revolution, through 2x productivity and 15% improvements, to “wouldn’t matter if it disappeared,” negative productivity, and worse recall and development.

The most interesting pattern comes from the one person I can plausibly believe has 2x or more productivity. That person is a journalist; they write copiously, are highly tech-friendly, and have dumped serious time into building familiarity with their tools.

However, it wasn’t tool familiarity that mattered. They use it to take notes on everything they are thinking about, and then shape that output into articles and thought processes. From what I can tell, it’s more like having the skill to create a bunch of rough sketches, and then to tell at a glance whether the composition is balanced, or whether the raw material can go somewhere. If it isn’t balanced, you know fast enough it won’t work, and you spin up a new version.

There are several ironies here. The best-case economic scenarios for AI are essentially automation by another name, while the highest productivity gain comes from experts who know how to use it. LLMs end up affecting skill acquisition the same way using forklifts to lift weights would hamper muscle formation. “The Ironies of Automation” came out in 1983, and tech is hellbent on unleashing a fabrication-prone general-purpose automation tech.
peterbell_nyc: Seeing plenty of this. The quality of agentic code is a function of the quantity and quality of adversarial quality gates. I have seen no proof that an agentic system is incapable of delivering code that is as functional, performant and maintainable as code from a great team of developers, and enough anecdotes in the other direction to suggest that AI "slop" is going to be a problem that teams with great harnesses will be solving fairly soon if they haven't already.
tossandthrow: Deflecting from what? Telling the company name so you can avoid it due to your incredibly curious nature?
MagicMoonlight: We aren’t anywhere near AGI. They’ve consumed the entirety of human knowledge and poisoned the well, and it still can’t help but tell you to walk to the car wash.A peasant villager was sentient without a single book, film or song. You don’t need this much data to be sentient. They’re using a stupid method, and a better one will be discovered some day.
pixl97: Sentience isn't intelligence.
fny: Why is everyone so damn obsessed with the singularity? You don't need superintelligence to disrupt humanity. We easily have enough advancement to change the economy dramatically as is. The adoption isn't there yet.
tossandthrow: > Maybe AI will address everything at every level.

I think this is the idea you need to entertain / ponder more.

I largely agree with you; what I don't agree with is the weighting of the individual elements. My point was that I could do a 30-minute cleanup to streamline hundreds of endpoints. Without AI, I would not have been able to justify this migration for business reasons.

We get to move faster, also because we can shorten deprecation tails and generally keep code bases fit more easily. In particular, we have dropped the external backoffice tool, so we have a single monorepo. An AI does tasks all the way from the infrastructure (setting policies on resources) up to the frontends. Equally, if resources are not addressed in our codebase, we know with 100% certainty that they are not in use and can be cleaned up.

Unused code audits are done on a weekly schedule, like our sec audits, robustness audits, etc.
miyoji: I think it's true that there are more bad CEOs than good CEOs. I've seen good CEOs turn into bad CEOs, but I've never seen a bad CEO turn into a good CEO. I assume it does happen, but there's a strong cultural pressure (and many hundreds of millions of dollars) pushing bad CEO behavior and very little other than personal ethics pushing good CEO behavior, and when the incentives look like that, swimming upstream is hard.

> We have to shake them free from those ideas and get them to realize that there are good and bad companies out there and they have options.

Not everyone does have options, though. This is why, instead of telling people to just avoid the bad CEOs, workers should unionize and collectively bargain against the bad CEOs. I'm sure I'll be seeing a lot of class warfare generalizations about "unions bad" in response to this suggestion.
apsurd: I take your point, but then it makes me think: is there no more value in diversity?

[Philosophy disclaimer] So in a code base, diversity is probably a bad idea; OK, that makes sense. But in an agentic world, if everything is run through the perfect harness, then humans are intentionally just triggers? Not even that. Like, what are humans even needed for? Everything can be orchestrated. I'm not against this world; this is an ideal outcome for many, and it's not my place to say whether it's inevitable.

What I'm conflicted on is whether it even "works" in terms of outcomes. Like, have we lost the plot? Why have any humans at all? The one-person billion-dollar company is incoming. Software aside, is the premise even valid? One person's inputs multiplied by N thousand agents -> ??? -> profit.
Quarrelsome: Btw why am i as a brit, blocked via my traditional routing because of the OSA? What possible features do you have on that site to make that relevant?
jcalvinowens: Anybody who is interested should read the full report: https://www.faa.gov/sites/faa.gov/files/AirFrance447_BEA.pdf
jerf: Even after I explained the exact usage I was invoking, the attractive nuisance of all the science fiction that has gotten attached to the term still prevented you and Quarrelsome from reading my post as written.I really wish the term hadn't been mangled so much. Though the originator of the term bears a non-trivial amount of the responsibility for it, having written some rather good science fiction on the topic himself. The original meaning from the paper is quite useful and nothing has stepped up to replace it.All the singularity means as I explicitly used it here is you entirely lose the ability to predict the future. It is relative to who is using it... we are all well past the Caveman Singularity, where no (metaphorical) caveman could possibly predict anything about our world. If we stabilize where we are now I feel like I have at least a grasp on the next ten years. If we continue at this pace I don't. That doesn't mean I believe AI will inevitably do this or that... it means I can't predict anymore, which is really the exact opposite. AI doesn't have to get to "superintelligence" to wreck up predictions.
rambambram: The comparison with sociopaths is a good one. On the surface it's all human behavior, but if you lift the veil even a little bit, it becomes clear there's no substance, no conscience, etc.

Read up on Cluster B personality disorders (borderline, narcissism, sociopathy/psychopathy) and you see the similarities. Love bombing, gaslighting, a shared fantasy, etc. It's very interesting and scary at the same time.
wslh: I wonder if vibe coding is partly what happens when software engineering fails to converge on reusable abstractions. Instead, we got fragmented tools and endless reinvention of the same components, and LLMs arrived as an ad hoc abstraction layer on top.
keeda: I've said it before, but it would be a mistake to focus just on the models and ignore everything else that is changing in the ecosystem -- tools, harnesses, agents, skills, availability of compute, etc. -- things are changing very quickly overall. The thing that is changing most rapidly, however, is our understanding of how to harness this insanely powerful, versatile, and unpredictable new technology.

Like, those who experimented deeply with LLMs could tell that even if all model development had completely frozen in 2024, humanity had decades' worth of unrealized applications and optimizations to explore, even with AI recursively accelerating that process of exploration. As a trivial example, way back in 2023, anyone who got broken code from ChatGPT, fed it the error message, and got back working code knew agents were going to wreck things up very quickly. It wasn't clear that this would look like MD files, Claude Code, skills, GasTown, and YOLO vibe-coding, but those were "mere implementation details."

I'm half-convinced an ulterior goal of these AI companies in giving away so many cheap tokens (other than the lack of a better business model) is to encourage experimentation and overcome this "capability overhang."

Given all this, it's very hard to judge where we are on the curve, because there isn't just one curve; there are actually multiple interplaying curves.
tim333: > Why is everyone so damn obsessed with the singularity?

I don't think most are - it tends to be regarded as rather cranky stuff, and a lot of people who use the term are a bit cranky. Even so, AI maybe overtaking human intelligence is an interesting thing in human history.
afthonos: An interesting thing in AI history. For human history, it’s epochal.
bluecheese452: Ironically the post saying it is not slop sounds exactly like ai slop.
tossandthrow: Too many spelling errors for that to be slop...
enraged_camel: >> Imagine a co-worker who generated reams of code with security hazards, forcing you to review every line with a fine-toothed comb. One who enthusiastically agreed with your suggestions, then did the exact opposite. A colleague who sabotaged your work, deleted your home directory, and then issued a detailed, polite apology for it. One who promised over and over again that they had delivered key objectives when they had, in fact, done nothing useful. An intern who cheerfully agreed to run the tests before committing, then kept committing failing garbage anyway. A senior engineer who quietly deleted the test suite, then happily reported that all tests passed.

>> You would fire these people, right?

Okay, now imagine a different colleague. One who writes a solid first draft of any boilerplate task in seconds, freeing you to focus on architecture instead of plumbing. A dev who never gets defensive when you rewrite their code, never pushes back out of ego, and never says "that's not my job." A pair programmer who's available at 3 AM on a Sunday when prod is down and you need to think out loud. One who remembers every API you've forgotten, every flag in every CLI tool, every syntax quirk in a language you use twice a year, or even every day.

You'd want that person on your team, right? In fact, you would probably give them a promotion.

Here's the thing: the original argument describes real failure modes, but then commits a subtle sleight of hand. It personifies the tool as a colleague with agency, then condemns it for lacking the judgment that agency implies. But you don't fire a table saw because it doesn't know when to stop cutting, right? You learn where to put your hands.

Every flaw in that list is, at the end of the day, a flaw in the workflow, not the tool. Code with security hazards? That's what reviews are for. And AI-generated code gets reviewed at far higher rates than the human code people have been quietly rubber-stamping for decades.
Commits failing tests? Then your CI pipeline should be the gate, not a promise. Deleted your home directory? Then it shouldn't have had the permissions to do that in the first place. In fact, the whole "deleted my home directory" shit is the same thing as "our intern deleted the prod database". We all know that the response to the latter is "why did they have permission to prod in the first place??" AI is the same way, but for some god damn reason people apply totally different standards to it.
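The permissions point can be made concrete in a few lines. Here is a minimal sketch (the function, paths, and sandbox layout are hypothetical, not any particular agent's API): instead of trusting an agent not to touch your home directory, you check every destructive path against an explicit work area before allowing it.

```python
from pathlib import Path

def is_allowed(target: str, sandbox: str) -> bool:
    """Permit a destructive operation only inside an explicit work area.

    Resolving both paths first means tricks like "/work/repo/../../home"
    are normalized before the containment check.
    """
    t = Path(target).resolve()
    root = Path(sandbox).resolve()
    # Allowed iff the target IS the sandbox or lies strictly inside it.
    return t == root or root in t.parents

# The agent may touch the repo checkout, and nothing else.
print(is_allowed("/work/repo/src/main.py", "/work/repo"))   # True
print(is_allowed("/home/alice/.ssh/id_rsa", "/work/repo"))  # False
print(is_allowed("/work/repo/../secrets", "/work/repo"))    # False
```

Same idea as the intern/prod-database rule: the guard lives in the harness, not in the agent's good intentions.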
simoncion: > But you don't fire a table saw because it doesn't know when to stop cutting, right?If I purchased a table saw and that table saw irregularly and unpredictably jumped past its safeties -as we've plenty of evidence that LLMs [0] do-, then I would [1] immediately stop using that saw, return it for a refund, alert the store that they're selling wildly unsafe equipment, and the relevant regulators that a manufacturer is producing and selling wildly unsafe equipment.[0] ...whether "agentic" or not...[1] ...after discovering that yes, this is not a defective unit, but this model of saw working as designed...
enraged_camel: But that's the thing: the table saw has safeties. Someone put them there. Without those safeties, it, too, would jump unpredictably.Scary scenarios like AIs deleting home directories are the result of the developers explicitly bypassing those safeties.
simoncion: > But that's the thing: the table saw has safeties. Someone put them there.You noticed that I mentioned that this hypothetical table saw has poorly-designed, entirely inadequate safeties? Things like Opus treating the questions it asks the user as commands that it should execute [0] is definitely [1] a sign of solid, well-designed safety mechanisms.You might choose to retort "Well, that's because the user isn't running the tool in the mode that makes it wait for confirmation before doing anything of consequence!". In reply, I would point in the general direction of the half-squillion studies indicating that a system whose safety requires an operator to remain vigilant when presented with a large volume of irregularly-presented decision points (nearly all of which can be safely answered with a "Yes, do it.") does not make for a safe system. [2] It -in fact- makes for a system that's designed [3] to be unsafe.You might also choose to retort "That's never happened to me, or anyone that I know about.". Intermittent failures of built-in safeties that happen under unpredictable circumstances are far, far worse than predictable failures that happen under known ones. I hope you understand why.[0] <https://old.reddit.com/r/ClaudeCode/comments/1sex28q/opus_46...>[1] ...not...[2] I would also -somewhat wryly- note that "An AI Agent that does all of your scutwork, but whose every decision you have to carefully scrutinize, because it will irregularly plan to do something irreversibly destructive to something you care about." is not at all the picture that "AI" boosters paint of these tools.[3] ...whether intentionally or not...