Discussion
The Future of Everything is Lies, I Guess: Safety
jazzpush2: Every one of these posts is immediately pushed to the front page, this one within 4 minutes.
acdha: That’s unsurprising given the author’s long history in the tech community. A ton of people see that domain and upvote.
macintux: Previous discussions from earlier posts on the topic:
* https://news.ycombinator.com/item?id=47703528
* https://news.ycombinator.com/item?id=47730981
Cynddl: > "Unavailable Due to the UK Online Safety Act"
Can anyone outside the UK share what this is about?
jazzpush2: The Future of Everything is Lies, I Guess: Safety (2026-04-13)

New machine learning systems endanger our psychological and physical safety. The idea that ML companies will ensure “AI” is broadly aligned with human interests is naïve: allowing the production of “friendly” models has necessarily enabled the production of “evil” ones. Even “friendly” LLMs are security nightmares. The “lethal trifecta” is in fact a unifecta: LLMs simply cannot safely be given the power to fuck things up. LLMs change the cost balance for malicious attackers, enabling new scales of sophisticated, targeted security attacks, fraud, and harassment. Models can produce text and imagery that is difficult for humans to bear; I expect an increased burden to fall on moderators. Semi-autonomous weapons are already here, and their capabilities will only expand.

Alignment is a Joke

Well-meaning people are trying very hard to ensure LLMs are friendly to humans. This undertaking is called alignment. I don’t think it’s going to work.

First, ML models are a giant pile of linear algebra. Unlike human brains, which are biologically predisposed to acquire prosocial behavior, there is nothing intrinsic in the mathematics or hardware that ensures models are nice. Instead, alignment is purely a product of the corpus and training process: OpenAI has enormous teams of people who spend time talking to LLMs, evaluating what they say, and adjusting weights to make them nice. They also build secondary LLMs which double-check that the core LLM is not telling people how to build pipe bombs. Both of these things are optional and expensive. All it takes to get an unaligned model is for an unscrupulous entity to train one and not do that work—or to do it poorly.

I see four moats that could prevent this from happening.

First, training and inference hardware could be difficult to access. This clearly won’t last. The entire tech industry is gearing up to produce ML hardware and building datacenters at an incredible clip. Microsoft, Oracle, and Amazon are tripping over themselves to rent training clusters to anyone who asks, and economies of scale are rapidly lowering costs.

Second, the mathematics and software that go into the training and inference process could be kept secret. The math is all published, so that’s not going to stop anyone. The software generally remains secret sauce, but I don’t think that will hold for long. There are a lot of people working at frontier labs; those people will move to other jobs and their expertise will gradually become common knowledge. I would be shocked if state actors were not trying to exfiltrate data from OpenAI et al. like Saudi Arabia did to Twitter, or China has been doing to a good chunk of the US tech industry for the last twenty years.

Third, training corpuses could be difficult to acquire. This cat has never seen the inside of a bag. Meta trained their LLM by torrenting pirated books and scraping the Internet. Both of these things are easy to do. There are whole companies which offer web scraping as a service; they spread requests across vast arrays of residential proxies to make it difficult to identify and block.

Fourth, there’s the small armies of contractors who do the work of judging LLM responses during the reinforcement learning process; as the quip goes, “AI” stands for African Intelligence. This takes money to do yourself, but it is possible to piggyback off the work of others by training your model off another model’s outputs. OpenAI thinks Deepseek did exactly that.

In short, the ML industry is creating the conditions under which anyone with sufficient funds can train an unaligned model. Rather than raise the bar against malicious AI, ML companies have lowered it.

To make matters worse, the current efforts at alignment don’t seem to be working all that well. LLMs are complex chaotic systems, and we don’t really understand how they work or how to make them safe. Even after shoveling piles of money and gobstoppingly smart engineers at the problem for years, supposedly aligned LLMs keep sexting kids, obliteration attacks can convince models to generate images of violence, and anyone can go and download “uncensored” versions of models. Of course alignment prevents many terrible things from happening, but models are run many times, so there are many chances for the safeguards to fail. Alignment which prevents 99% of hate speech still generates an awful lot of hate speech. The LLM only has to give usable instructions for making a bioweapon once.

We should assume that any “friendly” model built will have an equivalently powerful “evil” version in a few years. If you do not want the evil version to exist, you should not build the friendly one! You should definitely not reorient a good chunk of the US economy toward making evil models easier to train. ...
jazzpush2: To be clear, that's not the full article, just the intro (though the whole thing isn't too long)
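(A back-of-the-envelope illustration of the article's "models are run many times" point; this is a sketch with made-up numbers, not anything from the post. A safeguard that fails on 1% of runs is nearly certain to fail at least once given enough runs.)

    # Sketch: how a per-run failure rate compounds over many independent runs.
    # The 1% figure is purely illustrative.

    def p_at_least_one_failure(p_fail_per_run: float, runs: int) -> float:
        """Probability that at least one of `runs` independent runs fails."""
        return 1.0 - (1.0 - p_fail_per_run) ** runs

    for runs in (1, 100, 10_000, 1_000_000):
        p = p_at_least_one_failure(0.01, runs)
        print(f"{runs:>9} runs -> P(at least one failure) = {p:.6f}")

With a 1% per-run failure rate, there is roughly a 63% chance of at least one failure within 100 runs, and by 10,000 runs it is effectively guaranteed.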
throwway120385: At scale I think our society is slowly inching closer and closer to building HM.
nine_k: What is HM here?
zackmorris: Hacker Mews
jazzpush2: Sure, but 4 front-page posts from the same URL in 4 days surely sit at the tail of the distribution. (I guess they all capitalize on the same 'LLM-is-bad' sentiment).
zdragnar: It's also aphyr, who is incredibly popular. Take one very popular author, have him write a series of posts on the zeitgeist everyone can't help but talk about, and yes, the outcome is that his posts are extremely popular.
I still remember his takedown of MongoDB's claims in the "Call Me Maybe" post years and years ago; it filled me with a good bit of awe.
aphyr: It's been weirdly uneven. Sections 1, 3, and 5 did well on HN; 2, 4, and 6 sank with essentially no trace. The distribution of views is presently:
1. Introduction: 33,088 (https://news.ycombinator.com/item?id=47689648)
2. Dynamics: 3,659 (https://news.ycombinator.com/item?id=47693678)
3. Culture: 5,914 (https://news.ycombinator.com/item?id=47703528)
4. Information Ecology: 777 (https://news.ycombinator.com/item?id=47718502)
5. Annoyances: 7,020 (https://news.ycombinator.com/item?id=47730981)
6. Psychological Hazards: 199 (https://news.ycombinator.com/item?id=47747936)
Feedback from early readers was that the work was too large to digest in a single reading, so I split it up into a series of posts. I'm not entirely sure this was the right call; the sections I thought were the most interesting seem to have gotten much less attention than the introductory preliminaries.
simoncion: I'm not sure that HN vote count is a good indicator of interest? HN alerted me to the existence of the intro post. I read the intro, noticed that it was one in an ongoing series, and have been checking your blog for new installments every few days.
I suspect that if you'd not broken up the post into a series of smaller ones, the sorts of folks who are unwilling to read the whole thing as you post it section by section would have fed the entire post to an LLM to "summarize".
Imnimo: > Unlike human brains, which are biologically predisposed to acquire prosocial behavior, there is nothing intrinsic in the mathematics or hardware that ensures models are nice.
How did brains acquire this predisposition if there is nothing intrinsic in the mathematics or hardware? The answer is "through evolution", which is just an alternative optimization procedure.
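(A toy sketch of "evolution is just an alternative optimization procedure", not anything from the thread: selection plus random mutation climbs a fitness function with no gradients and nothing "intrinsic" in the substrate. The target genome here is arbitrary.)

    import random

    TARGET = [1] * 20                        # arbitrary trait profile to select for

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                                          # selection
        population = [mutate(random.choice(parents)) for _ in range(50)]   # reproduce + mutate

    print("best fitness after selection:", max(fitness(g) for g in population))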
pants2: This Veritasium video is excellent, and makes the argument that there is something intrinsic in mathematics (game theory) that encourages prosocial behavior.
https://www.youtube.com/watch?v=mScpHTIi-kM
Sardtok: Hennes & Mauritz is a Swedish clothing retailer.
On a serious note, I think they meant TN, as in Torment Nexus, but I could be wrong.
throwaway27448: Looksmaxxing really has gone mainstream huh
bitwize: Thought it was all the Rust catgirls.
cowpig: > I think it’s likely (at least in the short term) that we all pay the burden of increased fraud: higher credit card fees, higher insurance premiums, a less accurate court system, more dangerous roads, lower wages, and so on.
I think the author is brushing up against some larger systemic issues that are already in motion, and that the way AI is being rolled out is exacerbating them rather than being their root cause.
There's a felony fraudster running the executive branch of the US, and it takes a lot of political resources to get someone elected president.
order-matters: Natural selection. Cooperation is a dominant strategy in indefinitely repeated games of the prisoner's dilemma, for example. We also have to mate and care for our young for a very long time, and while it may be true that individuals can get away with not being nice about this, we have had to be largely nice about it as a whole to get to where we are.
While it all falls under the umbrella of evolution, if you really want to boil it down to an optimization procedure then at the very least you need to accurately model human emotion, which is wildly inconsistent, and our selection bias for mating. If you can do that, then you might as well go take over the online dating market.
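(For anyone who wants to poke at the iterated-prisoner's-dilemma claim, here is a minimal simulation using the textbook payoffs T=5, R=3, P=1, S=0; it's a sketch, not a rigorous argument. Tit-for-tat playing itself ends up far ahead of mutual defection, which is the usual intuition for why cooperation can persist under repetition.)

    # Minimal iterated prisoner's dilemma; 'C' = cooperate, 'D' = defect.
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    def tit_for_tat(my_history, their_history):
        return their_history[-1] if their_history else 'C'

    def always_defect(my_history, their_history):
        return 'D'

    def play(strategy_a, strategy_b, rounds=200):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a = strategy_a(hist_a, hist_b)
            move_b = strategy_b(hist_b, hist_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print("TFT vs TFT:   ", play(tit_for_tat, tit_for_tat))     # mutual cooperation: (600, 600)
    print("TFT vs DEFECT:", play(tit_for_tat, always_defect))   # exploited once, then mutual defection
    print("DEF vs DEFECT:", play(always_defect, always_defect)) # mutual defection: (200, 200)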
jbreckmckye: Optimists would argue that the answer to bad actors using AI is good actors using AI, but I think another option is de-digitalisation in certain domains.
An example is interviewing and jobs. A fully digitised recruitment pipeline - Zoom calls, CVs, GitHub profiles - is too easy to defraud. I remember that even before the pandemic, any kind of remote role would attract hundreds of applications of dubious quality.
The most likely outcome, as I see it, is that companies will simply demand in-person interviews. Probably only at the final stage, but they will want that in-person verification.
Another example is education. Universities teach using AI-generated scripts and grade AI-generated essays. Students are under too much financial pressure to pay for slop, and institutions that are subject to fraud will end up trashing their reputation.
So the outcome becomes a separate class of qualification that is conferred on people with some verified quality. If it's not that they pass in-person paper exams, it will be something opaque like social class or personal connections instead.
Ultimately I suspect AI will to some extent corrode the value of digital information, just by generally producing distrust. In some ways "slop" is not actually a new problem: we have had people generating spam, scams, and algorithm-bait for many years. But accelerating that could lead to a collapse of trust more generally, which would, ironically, be damaging to the companies that sell LLMs themselves.
dgfl: The issue with most of these articles is that they seem to demonize the technology, and systematically use demeaning language about all of its facets. This one raises a lot of important points about LLMs, but the only real conclusion it seems to make is "LLMs are bad! We should never build them!". This is obviously unrealistic. The cat is out of the bag. And we're not _actually_ talking about nuclear weapons here. This technology is useful, and coding agents are just the first example of it. I can easily see a near future where everyone has a Jarvis-like secretary always available; it's only a cost and harness problem. And since this vision is very clear to most who have spent enough time with the latest agents, millions of people across the globe are trying to work towards this.
I do think that safety is important. I'm particularly concerned about vulnerable people and sycophantic behavior. But I think it's better not to be a luddite. I will give a positively biased view because the article already presents a strongly negative stance. Two remarks:
> Alignment is a Joke
True, but for a different reason. Modern LLMs clearly don't have a strong sense of direction or intrinsic goals. That's perfect for what we need to do with them! But when a group of people aligns one to their own interest, they may imprint a stance which other groups may not like (which this article confusingly calls an "unaligned model", even though it's perfectly aligned with its creators' intent). People unaligned with your values have always existed and will always exist. This is just another tool they can use. If they're truly against you, they'll develop it whether you want it or not. I guess I'm in the camp of people who have decided that those harmful capabilities are inevitable, which the article directly addresses.
> LLMs change the cost balance for malicious attackers, enabling new scales of sophisticated, targeted security attacks, fraud, and harassment. Models can produce text and imagery that is difficult for humans to bear; I expect an increased burden to fall on moderators.
What about the new scales of sophisticated defenses that they will enable? And for a simple solution to avoid the produced text and imagery: don't go online so much? We already all sort of agree that social media is bad for society. If we make it completely unusable, I think we will all have something to gain from it. If digital stops having any value, perhaps we'll finally go back to valuing local communities and offline hobbies for children. What if this is our wake-up call?
throw4847285: Thanks LLM!
dgfl: lol. I did use a lot of short sentences, that’s my bad. But please read through [1] and compare my text against it; it may enlighten you on how to actually spot LLM writing.
[1] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
conquera_ai: Feels like we’re repeating classic distributed systems lessons: assume failure, constrain blast radius, and never trust components that can’t explain themselves reliably.
ibrahimhossain: Exactly. Assuming failure and constraining the blast radius feels like the only reliable path when the models themselves are black boxes. Patch-based alignment starts looking fragile pretty quickly.
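(One concrete reading of "constrain the blast radius" for LLM agents: put a hard allow-list between the model and anything with side effects. The sketch below is hypothetical; the tool names and the dispatch function are not any real framework's API.)

    ALLOWED_TOOLS = {"read_file", "search_docs"}  # read-only tools; nothing with side effects

    class ToolPolicyError(Exception):
        """Raised when the model proposes a tool call outside the allow-list."""

    def dispatch(tool_name: str, args: dict, tools: dict):
        """Run a model-proposed tool call only if policy allows it (fail closed)."""
        if tool_name not in ALLOWED_TOOLS or tool_name not in tools:
            raise ToolPolicyError(f"blocked tool call: {tool_name}({args})")
        return tools[tool_name](**args)

    # Even if the model is prompt-injected into asking for "run_shell", it is rejected,
    # because the check lives outside the model.
    tools = {"read_file": lambda path: open(path).read()}
    try:
        dispatch("run_shell", {"cmd": "rm -rf /"}, tools)
    except ToolPolicyError as err:
        print(err)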
satvikpendem: Ironic.
jagged-chisel: "Alignment"In what world would I ever expect a commercial (or governmental) entity to have precise alignment with me personally, or even with my own business? I argue those relationships are necessarily adversarial, and trusting anyone else to align their "AI" tool to my goals, needs, and/or desires is a recipe for having my livelihood completely reassigned into someone else's wallet.
atleastoptimal: There really are only 3 options that don't involve human destruction:
1. AI becomes a highly protected technology, a totalitarian world government retains a monopoly on its powers and enforces use, and offers it to those with preexisting connections: permanent underclass outcome
2. Somehow the world agrees to stop building AI and keep tech in many fields at a permanent pre-2026 level: soft butlerian jihad
3. Futurama: somehow we get ASI and a magical balance of weirdness and dance of continual disruption keeps apocalypse in check and we accept a constant steady-state transformation without paperclipocalypse
raincole: In other words, only one option.
operatingthetan: Or we keep building AI and no apocalypse happens.
throw4847285: Oh no, I'm sorry to hear that.
For the future, try to avoid prevaricating when you actually have a clear sense of what you want to argue. Instead of convincing me that you've weighed both options and found luddism wanting, you just come off as dishonest. If you think stridently, write stridently.
__MatrixMan__: You could expect such a thing in a world where consent was currency, rather than scarcity.
starik36: What specifically is unsafe in this article?
amarant: There's really only one thing we need to do to avoid the apocalypse, and that is to not hand over the launch codes to an LLM.
Seems easy enough; I'm actually pretty confident that even the most incompetent of current world leaders can manage this particular task.