Discussion
Sam Altman
LunaSea: Unserious answer about a very serious event. I don't believe a word of Sam's "I believe" section.
mixtureoftakes: unpopular opinion but i think it's written quite well
bedroom_jabroni: Did Claude Mythos escape containment?
loloquwowndueo: “I couldn’t find vulnerabilities in Sam’s devices so I contracted a rando over the internet to Molotov his house” sounds fairly implausible :)
verdverm: This is actually happening without the AI: https://www.lemonde.fr/en/france/article/2026/04/07/the-stra...
zb3: So there's one photo. Of one family. Now what about millions of photos of all the other families possibly affected by him? That doesn't have power?

It's like "hey, you can say mean things about me, but don't attack my family while I attack yours". Not that this is directed at him personally, but it's just this mindset of wealthy people.
angoragoats: To be clear, I don’t want anyone’s house to get firebombed by any means. But the “I’m just a humble guy making mistakes and trying the best I can” attitude of this article strikes me as extremely inauthentic based on everything I know about the guy.
coldtea: "Our product can destroy humanity, and it's not some crank telling you this, it's the company and CEO making it themselves, but we'll continue to make it anyway, so suck it up" but also "I'm just a humble guy, why can't we all live in peace?"
surround: > There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me.

For context, I think he's referring to this New Yorker article: "Sam Altman May Control Our Future—Can He Be Trusted?"

https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...

https://news.ycombinator.com/item?id=47659135
krapp: No more bombs? OK. Let's break out the guillotines, then.
hyeonwho5: Firebombing homes is completely uncivilized, but I'm not going to believe a single public word from Altman about anything. He's a lying sociopath and will say whatever gets himself ahead.
pesus: > The world deserves huge amounts of AI and we must figure out how to make it happen.

> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.

Boy, he really just encouraged the world to keep turning against him. This is so transparently disingenuous. I guess he has no choice if he doesn't want to give up his wealth and power, but putting out statements like these is only going to further fuel anti-AI sentiment.

I do think it's funny he opened this with an allegedly real picture of a baby, though. It may very well be real, but why would anyone take his word for that, especially those who already don't trust him?
verdverm: The Epstein regime all seem really manic and probably fearing the French bourgeoisie treatment. They tried to get Luigi on "terrorism" charges
psiisim: What a tone deaf response. Sounds like he learned nothing at all from this.
rootusrootus: > They tried to get Luigi on "terrorism" charges

That's about the least controversial thing I've heard recently. Luigi murdered a guy specifically because he was a health insurance CEO. Not because of something he did in particular, but because of the role he assumed. Terrorizing other CEOs is precisely what he intended to do. It is why there are so many Luigi fans; it is what they want too.
verdverm: Worth noting that the legal system did not find it to reach the requirements for terrorism:

https://www.pbs.org/newshour/nation/luigi-mangione-due-in-co...

My understanding is that it was personal.
ben_w: So all these things he's saying are going to leave people scared and afraid; on that we agree. What's the disingenuous part here?

Don't get me wrong: others talk of a pattern of dishonesty, or say that he's too eager to please*, and I'm willing to trust them on this because I found out with Musk that I don't spot this soon enough.

But what, specifically, do you see? What am I blind to?

* Given how ChatGPT is a people-pleaser and has him around, Claude philosophically muses about whether its subjective experience is or is not like a human's and has Amanda Askell, and Grok is like it is and has Musk, I think the default personalities of these AI models are influenced by their owners' leadership teams.
mattsoldo: It's never OK to physically attack someone like this. Full stop.

Separately: Sam's belief that "AI has to be democratized; power cannot be too concentrated" rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.
gnuvince: > Sam's belief that "AI has to be democratized; power cannot be too concentrated" rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.

We should call it what it really is: the oligopolization of intellectual work. The capital barrier to enter this market is too high, and there can be no credible open source option to prevent a handful of companies from controlling a monster share of intellectual work in the short and medium term. Yet our profession just keeps rushing head first into this one-way door.
probably_wrong: 10 hours ago a post made the frontpage here [0] about how OpenAI is backing a law that "would limit liability for AI-enabled mass deaths or financial disasters". Now he's here saying he believes that "working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for [him]".

I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.

[0] https://news.ycombinator.com/item?id=47717587
SpicyLemonZest: [delayed]
raslah: The FOBO here smells.
SOLAR_FIELDS: Ha, I was giving an AI bootcamp to a room full of people and someone asked me my opinion of Altman. I hesitated for a second and replied that I would not trust Altman further than I could throw a rock about anything.If Graham says this guy will always stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of a person like that’s mouth?
dakolli: Who tf is dumb enough to pay for an AI bootcamp? Genuinely curious. If you're selling AI bootcamps, or whoever is, they're just as much scam artists as Sam.
moralestapia: Who tf is dumb enough to not do it, though?

If I was non-tech and owned a business, and someone (reputable) offers to teach me everything I need to get up to date with the most revolutionary technology of the decade (perhaps century?) for, like, 500 dollars? Why not?
tyre: The post itself is authentic in that it's a set narrative for this moment. When you see the world as Sam does, this event is a specific opportunity to humanize him. Through that lens, the humility is both performative (it is!) and necessary. To be truthful would be inauthentic.The sympathy is meant to give time and slack to accumulate power. One of the largest impediments to OpenAI right now is that people don't trust them, more and more people don't trust Sam, and their commitments are starting to not pan out (e.g. cancelling of Stargate UK, dropped product lines, etc.)People should not read a post like this as, "how does this make me feel? how might I respond in his situation?", but rather, as he does, "how can I use this?"
zinodaur: Is it okay to profit off of a machine that kills innocent people? Would it be immoral to attack the builder of that machine, if it stopped the operation of the machine?
hungryhobbit: I categorically reject that assertion. Two simple examples: 1) when you see someone assaulting someone else, it's absolutely OK to attack them, and 2) the American revolution!

It's like that old joke:

A man offers a young woman $1,000,000 to sleep with him for one night. She looks at him: he's a little older, a few pounds too heavy, but not too bad looking. "For a million dollars? Sure, I'll sleep with you." He smiles at her. "How about $50, then?" "How dare you! I'm not a whore!" "Look, lady, we've already agreed what you are, now we're just negotiating the price."

Similarly in this case, you can't make up absolutes and assert they're true, while ignoring that the real world is more complicated. And once you do realize the world is complicated, you realize there aren't absolutes: everyone is a prostitute, terrorist, or whatever other bad label you want to throw at them... it's just a matter of degree.

So no, it's not always wrong to physically attack someone like this. You can debate specifically whether Altman has committed enough violence himself to justify violence against him: that's something two people can reasonably disagree on. But you can't just say "violence bad" like it's some great pearl of wisdom, while ignoring that violence has in fact been good many times throughout history.
smallmancontrov: If only it were reciprocal!

When the job losses hit in earnest and the vague handwaving about making it right all inevitably turns out to be hollow, those on top will be exceedingly comfortable using violence to keep the underclass in line. It has happened before and it will happen again.
happytoexplain: Historically, was it always so common for powerful or famous people to seem to purposefully garner hatred like he, and others, have been for the past decade? To speak in a petty, self-important, "trolling" manner, to a very broad audience? To embrace traits that are intrinsically negative? Or are we living in a rare time?
adestefan: New England colonists had a habit of ransacking and burning down the houses of government officials throughout the 1760s and during the Revolutionary War. Got bad enough that most did not sleep in their government housing.
amarant: What the hell is up with this thread? It seems half the people here are saying they get Molotoved on a weekly basis and Sam is a such-and-such for not taking it like a man, while the other half appears to mourn the lack of casualties.

Wtf is wrong with you people? Get off my lawn and go back to Reddit where you belong!
nslsm: > It's never OK to physically attack someone like this. Full stop.

I agree. The French Revolution was really, really wrong.
tempestn: Are you familiar with the details of the French Revolution? Some of the eventual outcomes were indeed positive, but a lot of what actually went on was pretty horrific.
burnte: Agreed. Sam's full of crap, and the way we tackle that is with conversations, not violence. He deserves to grow old like anyone else; violence isn't an answer.
Arodex: Everyone else deserves to grow old, too...
stego-tech: Sam. My guy. My dude.

You still don't really get it. You cannot possibly get it. Your wealth and status removes you from society and its ills in ways the working class can only imagine. You will never be able to relate to the struggles of the worker ever again because of said wealth and your personal vision of technology as a replacer rather than a companion.

Your staunch refusal to heed the critiques of those you harm means that these outcomes were inevitable. In a society where two full-time working adults still cannot afford a home, or children, or healthcare, or education, your insistence upon robbing them of their ability to survive at all is tantamount to a direct threat of violence against them. Your insistence that the pain is necessary, that others must clean up the messes that you and your peers are willfully creating, is the sort of behavior expected from toddlers rather than statesmen.

Your words are hollow; your actions speak volumes. You lobby governments to block regulations you claim are so desperately needed. You demand the masses pay your bills while you retain all the rewards, then turn around and say this is for our own good. You plead humanity while actively pursuing its extinction - which, let's be clear, building gluts of wholly unnecessary datacenters running predominantly on fossil fuels in the midst of a climate catastrophe already in progress, is exactly that. You build survival compounds for yourself and your family, expecting society to fall apart by your own designs.

To paraphrase SNL UK's Weekend Update segment on Zuckerberg's own survival bunkers: I hope you get as much use out of yours as Adolf Hitler did from his.

You can take your pathetic whinging and shove it right back up your billionaire ass. Your technology kills people - actively, via government contracts and weapons systems; and passively, via denial of healthcare claims by insurers and the elimination of jobs necessary for survival.
You are not the victim here, but the perpetrator pleading for mercy from those you harm every single day.

OpenAI is not building a better future. It is destroying what little is left of it for your personal pleasure.

Fuck you, Sam.
cuuupid: Comments here are so concerning. Why are people so giddy about violence and terrorism taking the place of discourse?

The EAs have done such insane mental damage to the population that it seems people think AI is going to leave everybody except the 1% in a permanent underclass. This is a meme, not a possible reality. This is simply not how the economy works: if everyone is poor, who do you think is paying for products/services leveraging AI? If everyone is broke because all the jobs got automated, who is buying the products to supply revenue to the companies? At the root of it all, AI alarmism is just incredibly first-order thinking.

1) Killing Sam and his family is not going to, at all, stop the wheel from turning. Even if OpenAI was somehow incapacitated without Sam (they won't be), there are 3-4 other major AI labs in the US alone.

2) China is also developing at the Pareto frontier. If we burn down every data center and stop all AI research, the only change is that your end-of-the-world AI superweapon will be in the hands of a very hostile and proudly authoritarian power with no regard for your wellbeing and the worst, most well documented history of human rights violations in human history. Resolving to continue AI research is simple game theory, but basic logic and reasoning are unfortunately wholly incompatible with EA ideology.

3) Opting to stop AI because you believe we will hit ASI and it will do everything and replace humans etc. also means you are opting into homelessness, famine, cancer, climate change, etc. - pretty much everything that we could solve with ASI. You believe you are stopping the trolley, but you are just changing it to the other track.
tailscaler2026: Sam eagerly pursued DoD contracts to weaponize AI. And then lobbied for legislation to ensure OpenAI cannot be held accountable if people are killed due to their systems.
Arodex: Ah, the Elon manoeuvre: trying to make would-be assassins hesitate by using your own child as a shield.
TurdF3rguson: It's like a baby on board bumper sticker. But for your house.
xdennis: > So there's one photo. Of one family.

Not his family. The child was trafficked through surrogacy. The child belongs to someone, and it's not the people who purchased him.
hahahacorn: Can you explain the petty, self-important, trolling manner? Which traits are intrinsically negative?

Genuine Q
happytoexplain: Of Altman, Trump et al, Elon, the Nvidia guy, etc? Or am I not understanding the question?
llbbdd: Responses in this thread are embarrassing. Cat's out of the bag and needs a steward. People acting like Altman can just turn the machines off and this all stops are deluded.
klik99: Genuinely surprised at the extreme comments against sama here. I don’t think he’s a good steward of the technology, but I don’t think violence is funny or justified. I also don’t think it’s justified for him to use it to say that a negative article about him is correlated to this event. Seems to imply that an “incendiary article” led to this and that criticism is tantamount to calls to violence. He drives the conversation with apocalyptic terms, and both investors and crazy people buy into it.
0x3f: From someone Molotoving his house? What do you think he should have learned from that?
TurdF3rguson: That his security is inadequate.
dakolli: AGI will be democratized when it's discovered... just right after AWS, Microsoft, and Oracle finish their 6-month beta test.
Tyrubias: Violence like this is not the answer. However, this post feels like a thinly veiled attempt at using this alarming attack to reclaim public goodwill after the New Yorker article the other day.

> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.

Yeah, the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.

Altman wants to seem relatable and personable even though he's one of the wealthiest and most powerful people in the world. You don't get that option when you control a technology that has the potential to alter so many lives, especially when you just sold said technology to the US military. All the talk around democratizing AI rings hollow.

The implication of Altman's blog seems to be "stop writing critical articles about me because it will cause more violence." However, the rich and powerful cannot use this excuse to escape objective scrutiny.
happytoexplain: A lot of what happened during the French revolution was horrific... This is such a bewildering sentence in this context. Yes, killing the rulers is horrific. Revolutions are horrific. Wars are horrific. It seems irrelevant to what the parent is (sarcastically) saying.
mjamesaustin: It was horrific. Revolutions tend to be. Yet our institutions continue consolidating money and power in fewer and fewer hands. If that doesn't stop, we'll be headed there again. It will probably be even worse this time.
kcatskcolbdi: Yes, clearly not written with his own product.
pesus: If that's the case, why doesn't he trust his own product enough to write this?
teachrdan: > the way we tackle that is with conversations, not violence

I think the breakdown here is that conversation seems to have no power. To be only a bit hyperbolic, the only language with power is money - or violence. To the extent that ordinary people cannot make change with "conversation" (which I interpret here to mean dialog within society, including with lawmakers), they feel compelled to use violence instead.

A non-rhetorical question: What recourse do non-billionaires have when conversation has less and less power, while money has more and more, and those with money are making much more money?
m4x: There's still a meaningful difference between violence wielded by a single individual who feels angry or unheard, and violence wielded by a large representative group who has invested genuine effort in conversation before collectively deciding violence is required.
d_silin: Violence is language that needs no translation. Everyone across the world, every culture, every country, every social group - from elites to homeless can converse in it using the same vocabulary.It is useful to have some degree of mastery in this discipline. Sometimes it is the only language that can deliver the important message to an unwilling listener.
Waterluvian: The thing about the rich is that they have access to sufficient levels of abstraction that they can commit terrible, disproportionate violence without it looking that way. And then fools who crave the simplistic safe comfort of moral absolutes come to their aid.Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.
tyre: The reason he's saying that is because he doesn't want you to create that structure. He wants you to not create the laws or checks & balances on him because you "trust that he doesn't really want the power".It has worked for him, repeatedly.
pesus: He isn't going to suddenly grow a conscience from a riveting, intellectually stimulating conversation.
weedhopper: If the billionaire is “awake in the middle of the night and pissed”, it means you’re doing it right.
Vaslo: Everytime I read a low intelligence comment like this, I’m glad I urge my friends to vote Republican.
Vaslo: Yeah it’s like they don’t want their children murdered, crazy
happytoexplain: FYI, you started out with a very common word used to exaggerate or cherry-pick the opinions of enemies ("giddy").It's more valuable to discuss grievances than to pretend they are simply un-discussable in the wake of related violence (in the vein of "it would be disrespectful to talk about gun control in the wake of gun violence").
cuuupid: If you want to engage in good faith: scrolling up and down the page, there are dozens of comments implying Sam deserved it, that his response to this is tone-deaf, etc., when the man literally had a bomb thrown at him.

We are not talking about AI safety in the wake of an AI-caused catastrophe; this is a case where impressionable people have been fed insane conspiracy theories, been driven mentally insane, and are now carrying out random acts of violence. And similarly mentally ill people are cheering this on.

The majority of my comment was also re: AI doomerism, and I didn't imply that we couldn't discuss this because of this incident; I explicitly stated that AI doomers are objectively mentally ill.

But if you don't want to engage in good faith and would rather play high school debate club, it's more valuable to stay on topic than to engage in whataboutism and implied ad hominem.
happytoexplain: You might as well say it's bad to be human.

What FOBO smells like is what's happening.
hungryhobbit: Yeah, people learning new technology is terrible. /s
ambicapter: He's saying that just so he can use it if another company gets bigger than OpenAI ("you can't have all the power"). If OpenAI were the top dog by a large margin, you wouldn't hear him say a peep about this (as was demonstrated by his actions with the charter).
dakolli: Knowing Sam, this entire event was fabricated or done at his behest.
copypaper: In all seriousness, what is the game plan for society moving forward as AI takes more jobs? The government doesn't seem to care. The AI labs don't seem to care.

What happens when more and more people can't afford housing, kids, food, health insurance, etc.? Nothing is more dangerous than a man who has no reason to live...

I don't advocate for violence, but I do foresee more headlines like this as things get worse.
smallmancontrov: The game plan is the same as it was for globalization and previous rounds of automation: gaslight workers into thinking that they are the problem. Push all the taxes into the labor economy and all the money into the capital economy and use the inevitable budget shortfall to justify skimping on social services.
Teever: That's not true.

As a defense contractor, Altman is a legitimate target for a country like Iran that the US has attacked. The US is at war with many countries and has threatened to annex or invade allies.

In that context, Altman is 100% a legitimate target.
SpicyLemonZest: No, I don't think that's accurate. Altman has repeatedly and loudly demanded for these to be created, including a new detailed policy proposal just this month (https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440...).
lores: I've never understood this taboo against physical violence. Firing a thousand people or stealing their wages, ruining their lives and their families', passing unjust laws that threaten the well-being and happiness of a million: that's OK! A punch in the nose: that's not OK!

There are far worse things than physical violence against one person, and with the end of the rule of law there isn't any other recourse. The one value that is common across all cultures is that the wicked must be punished for their wickedness; expect to see violence against oligarchs and CEOs spread like fire.
dakolli: Sam had this pulled off the front page, because the whole charade obviously isn't getting him the positive attention he was looking for.
minimaxir: It most likely tripped the flame war detector heuristic (comments > points), and there is definitely a flame war here.
happytoexplain: They aren't mutually exclusive. Often the former and latter, in that order, are two parts of the same historical event.
m4x: Yes, fully agree. Nonetheless, I suspect violence can be used more effectively and more minimally if it's considered and performed by a group rather than haphazardly by individuals. I recognise that's a very simplistic view.
alekq: It's funny how this happens the very same moment we get to read about Claude's Mythos and a New Yorker article. I really doubt the attacker is up to date with either...

The only thing surprising here is how naive you guys are. He is a marketing & sales guy first and foremost.
reducesuffering: Sam Altman has written, and probably still believes:

> Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. [0]

This means he acknowledges that his actions have the potential to kill every human family on Earth. It should be of no surprise that people took his beliefs seriously.

[0] https://blog.samaltman.com/machine-intelligence-part-1
dsa3a: Out of curiosity... why do you think this?

I think this is complete madness. I'm not someone that is in a job, so I have the luxury to think critically about what is going on, and... I just don't see it.

What I see is that LLMs will complement labour, and the excess returns of model producers will be very minimal (if any at all) due to the intense competition keeping switching costs to a minimum (close to zero). There is no specialisation re: models at this moment in time, so that is very likely to be the case.

OAI and Anthropic have to generate enough after-tax cash flows from operations to cover their reinvestment needs to keep going. If they can't cover reinvestment, then they will obviously lose, as their offering will not be competitive.

There's no certainty they generate this amount of cash profits either. They still have a high chance of going bust; of course that chance gets lower IF they can keep ramping up revenues.
minimaxir: I didn't think Hacker News needed an explicit "calls for violence are bad" guideline but the comments here have shown otherwise.
tyre: OpenAI has also repeatedly and quietly lobbied against them.

You linked a vague PDF whose promised actions are:

> To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.

Welcoming and organizing feedback! A pilot! Convening discussions!

This "commitment" pales in comparison to the money they've spent lobbying against specific regulation that cedes power. Please don't fall for this stuff.
dakolli: It's neural network autocomplete that helps you write text a little faster; chill with "the most revolutionary technology of the last decade/century" talk. You're offending a lot of experts in way more important areas of research.
moralestapia: > write text a little faster

You might actually need to attend an AI bootcamp. This is not 2022's GPT; AI can deliver plenty of value for a business owner these days.
drivingmenuts: None of the things you believe are working out.

1) Working towards prosperity, etc. - the prosperity is all going toward the top 2%. The people who need it most are not seeing it and probably never will, because the only ones guaranteed a benefit are the ones with the money to direct that benefit.

2) AI will be the most powerful tool, etc. - see point 1.

3) It will not all go well, etc. - probably should have thought about that before you released it on the world.

4) AI has to be democratized, etc. - true, but it won't happen. See point 1.

5) Adaptability is critical, etc. - yes, fully agree.

The problem, Mr. Altman, is that you believe the rest of the world thinks like you do, which is clearly not the case at all. While we have the ability to solve so many of the world's problems, it is absolutely clear that this is not what's happening. The rich in resources are getting richer, and they're not doing anything to help those poor in resources become better off. Instead, they are claiming those resources for themselves against the day that everyone else runs out.

Same as it ever was, Mr. Altman. Same as it ever was.
quantified: If Sam disperses his power, we can believe him. So long as he's just concentrating wealth and power, he's just another tech bro.
AlexCoventry: > The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and *making sure democratic system stays in control.*

OK! So he's going to renege on the contract he's signed with Hegseth, which effectively commits OpenAI to serving as the IT Department for Trump's secret service?
imiric: I'm on the skeptic side of "AI" and find this entire industry obnoxious, but your argument doesn't hold any water.

Technology that can be used to kill innocent people is all around us. Would it be moral to attack knife manufacturers? Attacking one won't make the technology disappear. It has been invented, so we have to live with it.

Also, it's a stretch to say that "AI" "kills innocent people". In the hands of malicious people it can certainly do harm, but even in extreme cases, "AI" can currently only be used very indirectly to actually kill someone. Technology itself is inert. What humans do with technology should be regulated.

IMO the fabricated concern around this tech is just part of the hype cycle. There's nothing inherently dangerous about a probabilistic pattern generator. We haven't actually invented artificial intelligence, despite how it's marketed. What we do need to focus on is educating people to better understand this tech and use it safely, restricting access to it so that we can mitigate abuse and avoid flooding our communication channels with garbage, and building better detection and mitigation technology to flag and filter it when it is abused. Everything else is marketing hype and isn't worth paying attention to.
SpicyLemonZest: The idea that firing you or stealing your wages is the worst a CEO can do to you is itself a product of the taboo against physical violence. There are a number of famous incidents from the late 1800s and early 1900s, when the taboo was weaker, of CEOs sending private armies to shoot inconvenient labor movements. It's not an equilibrium you should defect from lightly.
lores: A CEO can choose physical, mental, legal or financial violence against the common man. The common man only has the choice of physical violence. A taboo on it makes him impotent.
joshcsimmons: This is both horrible and not at all surprising.

Every quarter there are more layoffs, and we're told how AI will replace us and that we can do nothing to stop it. We cannot afford the simple things our parents were able to, and are supposed to be grateful that we are living in a time of such "amazing" technological progress.

Sam is one of the most media-visible people representing AI replacement of average people's livelihoods (not agreeing with this stance, but yes, outside of the Hacker News SF-tech matcha latte bubble, this is a commonly held thought), which makes this unsurprising.

Still horrible and not right.
BloondAndDoom: Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them? We have very strong open (weight) Chinese models possibly only 6 months behind them; the genie is out of the bottle. Is 6 months of difference really that important? And they don't have good reasons for that gap to stay that way.

Am I missing something, or is this just their usual marketing? I'm not arguing about the importance of AI, but trying to understand why OpenAI and Anthropic are so important.
Ms-J: "It's never OK to physically attack someone like this. Full stop." Why? If they are causing such pain to humanity, wouldn't it be better if they didn't exist?
xvector: We'd be stuck in the Stone Age with your mentality.
gverrilla: this is probably orchestrated by sam altman himself or one of his lackeys
truncate: >> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever. We have to get safety right, which is not just about aligning a model. The question is what they are doing about "getting safety right" and whether they are doing enough. To me it seems like all the focus is on hyper growth and maximum adoption, and safety is just an afterthought. I understand it's a competitive market, and everyone is doing it, but these are just hollow words. Industries that care about safety often tend to slow down.
kylecazar: I think it's exacerbated by the internet. It didn't invent the idea of a proud/unapologetic asshole, but it amplified their reach and emboldened them.There is always an audience online for whatever you have to say if you're famous, and attention is always good. There will be tens of thousands of people vocally cheering on your least popular and most controversial takes, however fucked they are. And then people lose themselves in that bubble (Elon).
eddyfromtheblok: Ronan Farrow, one of the journalists who worked on this article, talked to Katie Couric on her YouTube channel about this. They worked on this across ~18 months. I thought this interview was illuminating.
AlexCoventry: Yes, it was good. It seems clear that Farrow and his co-author approached it in a methodical, fair-minded way. https://www.youtube.com/watch?v=wr_sB1Hl0oM
mememememememo: "Like this" is doing some serious work in that statement!
andrewjf: > "Those who make peaceful revolution impossible will make violent revolution inevitable." - John F. Kennedy, 1962.
stale2002: > what is the game plan for society moving forward as AI takes more jobs > What happens when more and more people can't afford housing, kids, food, health insurance, etc.? What about when the opposite of all this happens, society massively benefits, and unemployment rates stay about what they have always been? Will people still be yelling about the doomsday of societal collapse that has failed to materialize every single time?
jlebar: Assuming this is a serious question, here are some ideas you could read about! - https://en.wikipedia.org/wiki/Vigilantism - https://en.wikipedia.org/wiki/Law - https://en.wikipedia.org/wiki/Bill_(law) - https://en.wikipedia.org/wiki/Trial
AlexCoventry: I don't condone violence, but the contract he's signed with the US military is a credible threat to everyone in the US. OpenAI will now certainly be called on to assist in domestic mass surveillance, under threat of the kind of severe penalties Anthropic has faced. So why did he agree to that contract, unless he's willing to provide that assistance? So it's gone well beyond conversation, though not to a point where violence is appropriate. Boycotts and hostility are definitely appropriate at this point IMO, though.
adi_kurian: It's sad that loony tunes conspiratorial thinking sounds all the more credible. What a time to be alive.
onemoresoop: How about the economic impact of all the over-investment in AI? It'll all be dumped on us, I'm afraid.
dsa3a: That's a separate issue. Let's stick to the issue re: labour.
slater-: Turns out the article was not in fact incendiary.
pesus: I find it interesting that Altman's fans seem to keep skipping past this fact. I'd love to hear their defense as to why one person potentially being responsible for hundreds or thousands of deaths is acceptable, but attacking that one person isn't. If violence is never the answer, they should be condemning Altman with even more vigor.
IMTDb: > why one person potentially being responsible for hundreds or thousands of deaths is acceptable I am not sure who exactly that one person is. Is it Altman, who according to many people is not that knowledgeable in AI in the first place; the scientist who found a breakthrough (who is it?); the president of the United States who is greenlighting the strikes; the general who is choosing the target (based on AI suggestions); the missile designer; the manufacturer; the pilot who flew the plane? I get the point about concentrating power in fewer hands, but the whole "all the problems of this world are caused by an extremely narrow set of individuals" always irks me. Going as far as saying there is just one is even more ludicrous.
jazzyjackson: this is the mentality of the modern age, as shaped by america and all empires before her, e.g. supreme leader khomeini no longer exists because the man americans voted for as head of the armed forces decided it would be better this way.
jibal: So he spends a few seconds writing something generic about his family and then uses that as a platform for a bunch of personal PR. That's sociopathy.
xvector: You're cooked.
tinyhouse: They own the best models and will probably keep owning the best models for a while. They have much more compute now and more data to keep improving their models on many tasks. Open source won't close the gap in 6 months. They are also trying to block other companies from distilling their models [0]. [0] https://www.anthropic.com/news/detecting-and-preventing-dist...
kbelder: Sure, he's sleazy. Doesn't matter. It's not ok to firebomb jerks or saints. Rich or poor. It's both a criminal and an immoral act.
BloondAndDoom: This question doesn’t apply to Sam, but since you made a general statement, I’m trying to understand. When it comes to people who openly incite or directly use violence, why do you think it’s unethical to attack someone like that? If one is responsible for directly or indirectly killing hundreds, what’s the ethical argument against using violence against that person? Not trolling or anything; I’ve just been thinking about this for a while and trying to understand what I’m missing in this argument.
xvector: We'd have never progressed as a species with your mentality. Change is painful and it's part and parcel of progress. Humans would be suffering far more today if we weren't willing to accept short-term pain for progress.
lores: Change and progress like the people of France deciding they had enough of injustice and nobles' impunity, then? A little short-term pain for social progress? We agree.
tyleo: I suppose most just haven’t seen the Chinese models in practice. I haven’t. I was skeptical of AI coding until using Claude Code in February. I saw and I believed. I’ve only done that with Google, OpenAI, and Anthropic’s models so far.
Noaidi: We’re in the middle of slaughtering two civilizations and you think we’re not in the Stone Age?
Noaidi: "Concerning non-violence: it is criminal to teach a man not to defend himself when he is the constant victim of brutal attacks." - Malcolm X. There’s a whole bunch more here if you’re interested: https://www.azquotes.com/author/9322-Malcolm_X/tag/violence
gverrilla: > The only thing surprising here is how naive you guys are.Is it really, though? I could have bet money that would be the case. HN crowd is very gullible.
adi_kurian: Was solid until the word 'democratized'.
0xy: Incendiary and false headline aside, no sane person would suggest that a hardware store that sold an axe that was used by an axe murderer should be held liable unless that store knew what was about to unfold.Unless AI companies knowingly participate in murder plots, they should not be liable.Is Microsoft liable for providing Notepad, a product which can be used to write detailed and specific mass murder plots?Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?Liability should depend on your participation in the event, of course. Otherwise you wouldn't be able to buy an axe, or a car, or use the internet at all. A closer analogy is ISPs not being liable for copyright infringement done by users, and subsequently not being required to police such activity for rights holders.
Ms-J: Exactly. People don't need to act like slaves. Make your own decisions in life.
hahahacorn: Of Altman, in this blog. Put another way, I didn’t read those traits from this post, and I’m curious what I’m missing.
stavros: Are calls for violence bad when you're calling for throwing a molotov cocktail at a child? At an adult? At a serial killer? At someone who's about to shoot you unprovoked? At someone who murdered your family? At someone who's about to?If you said "yes" to all of the above, I'd love to know your reasoning.
lostlogin: The general tone here is that freedom of speech is absolute and nothing should curtail that.Not my personal view.
Ms-J: His face screams bullshit. If I ever need to laugh, I look at people like him or Elon.
stavros: The Chinese models are distilled from GPT and Claude, so it's not like China would pull ahead if those companies went away for six months. They really are at the forefront of innovation right now, as much as I hate to think of the consequences of this (a single company owning a superintelligence is basically a nightmare scenario for me).
largbae: Don't worry, if someone truly achieves superintelligence it won't be controlled by anyone for long.
neya: Two words: Delusion and overconfidence."You're absolutely right!" Right after fucking up my entire codebase isn't anywhere near AGI, let alone "having the power to control it"
alpaca128: He doesn't trust it for anything else either as far as I can tell. In an interview he's boasted about how he uses a paper notebook for everything all day.
imiric: > We have to get safety right, which is not just about aligning a model—we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future. This might be the greatest example of cognitive dissonance I've seen in years. I can't understand how someone who's clearly highly intelligent can express this opinion while doing the complete opposite. Does he think that everyone is a fool and that nobody will notice? Is this some form of gaslighting? Unbelievable. Violence is not the answer, but it's easy to see how the public persona and actions of this individual would push someone to do this. There are certainly disturbed people who don't need any logical reason for violence, but maybe it would help if Sam stopped being so damn dishonest and manipulative. Even this post, which is intended to gain sympathy, ends up doing the opposite. As a sidenote, I wish we would stop paying attention to these people. A probabilistic pattern generator is far from the greatest technology humanity has ever invented. Get off your high horse, stop deluding people, and start working with organizations and governments to educate people in understanding and using this tech instead of hoarding power and wealth for you and your immediate circle of grifters.
GMoromisato: The entire purpose of government is to have a monopoly on violence. Democracies give their government the power to decide when and against whom to deploy violence.There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.
pesus: I'm not sure the next batch of schoolgirls getting bombed will particularly care whether the choice was made "democratically" or not.I also won't particularly care about the distinction when AI is inevitably used to enact violence on the US population.
lostlogin: > Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that. Really? I don’t know how many were in his house, but at most it’s attempted murder of a few versus the killing of 150. I see a difference. US law sees a difference too. The person who threw the firebomb will get the full weight of the law if they are caught, and spend an awfully long time in prison. Those who killed the schoolgirls will never face punishment.
avs733: If we are going to say violence isn’t okay then it is important that we be clear about the boundaries of what we define as violence.Theft is a nice analogy here. The default model of theft is property crime but the largest type of theft is wage theft.If we fret about violence done against individuals but not violence against groups our attention is going to end up steered in a narrow direction.
GeoAtreides: what are you arguing? that people should not violently overthrow their corrupt leaders? that the french should've let the Ancien Régime entrench and continue? That the serfs (slaves) in tsarist Russia should've stayed put and not revolted against the corrupt and incompetent Nicholas II? Or that the Hungarians and Czechoslovaks should not have revolted against the totalitarian regimes propped up by the Russians? Should the Romanians in 1989 have stayed at home, in cold and hunger, and let the Ceausescu regime continue to cruelly oppress them?
isodev: > just their usual marketing I think that’s a very common element for most US tech corps. Apple, Google, Microsoft, Meta, X etc - they’re all “making a dent in the universe”. It’s unfortunate when their employees and CEOs lose track of the line that separates marketing from reality
atbpaca: I have many disagreements with Sam Altman. But physical attacks are never the answer. Especially attacking one's family.
BloondAndDoom: I need to check benchmarks on the models; I wonder what the benchmarks say about how closely models are tracking these frontiers — on my mobile at the moment. When it comes down to compute power, I assume you are referring to power for training and inference. Then is it more about the training gap getting wider and wider? Is that the assumption? I know there are limited GPUs etc. But I’m having a hard time believing the idea that China cannot catch up. Even if the gap is 12 months, I’m struggling to see what that means in practice. Is that a military, economic, or intelligence advantage? Whatever the advantage is, aren’t we supposed to see it today? If so, where is it? What’s the massive advantage of the USA because of OpenAI and Anthropic?
zoklet-enjoyer: TIL Sam Altman is gay
arduanika: https://xkcd.com/1053/
richardlblair: Jfc. People, a Molotov cocktail was thrown at his home. The rest of what is written doesn't matter. This isn't the moment for that conversation. That's his family. He has a fucking child. Holy shit.
matheusmoreira: Can't say I feel sorry for the guy. Anyone who actually believes his platitudes about "democratizing" AI is far too naive. If he really believed that, he'd make a torrent out of ChatGPT's weights and upload it to The Pirate Bay. The fact of the matter is these AI CEOs are actively trying to economically disenfranchise 99% of the human race. The ultimate corollary of capitalism is that people who aren't economically productive need not be kept alive any longer. If this doesn't radicalize people into actual violence, I simply have no idea what will.
b8: We still haven't made AGI, so I don't understand what he's saying they did.
onemoresoop: How would society benefit if all the benefits collect at the top of the pyramid? Same old trickle-down? The technology isn’t inherently bad, but if it comes with massive unemployment and creates social unrest while a few at the top profit… That’s what makes me uncomfortable.
stavros: That's my other nightmare scenario :P
georgemcbay: Just imagine how inexpensive paperclips will become, there is always a silver lining.We will finally have achieved abundance.
stavros: Not just abundance, we will have the maximum amount of paperclips possible.
akramachamarei: It's an interesting question. Here's my reductive, off-the-cuff take: violence is justified when defending oneself or another from imminent bodily harm, or even under threat of imminent, considerable property damage. When a threat is not imminent, or an action is past, we use the police and the courts, because we as a society–in the sense of subscribers of the US constitution or similar tracts–believe that it is better to have a judicial system and impartial officials determine whether it is worth depriving someone of their bodily liberty or taking their property, that is, jailing or fining. Taking some sort of extrajudicial action or applying corporal punishment (!) requires a much higher bar. How and when would one determine that the judicial system is so unreliable as to morally permit vigilantism? It requires a great deal of moral self-confidence to take matters into one's own hands. I focus on the question of vigilantism because that I think is the issue. Many people feel an emotional impulse, that they want to side with the CEO killer, for example, and they find ways to rationalize. What I'd say is, if you think Joe Blow is so evil, why don't we take him to court? What kind of possible actions could we not jail or fine him for but for which we would accept Johnny Anarchy, y'know, igniting his lawn furniture? Of course, the justice system is imperfect, but nobody lawfully elected the next sexy assassin as judge, jury, and executioner.
richardlblair: Using an article about a home housing a child being firebombed to platform your irrelevant opinions about the victim is a bad look.
tyre: It's pretty amazing to observe people experience the past ten years in American history and continue to think that we can out-talk the bad people in the world. Michelle Obama's "When they go low, we go high" is some of the stupidest political advice and a generation has lost so much because of it. (The generation before got West Winged into believing the same thing.) When you look to the right, you have a stolen election in 2000, a stolen supreme court seat, an attempted coup, and relentless winning despite it.
lostlogin: This may come right when Americans see themselves backsliding relative to other power blocs, and allies turning away. It’s started. But it seems a distant hope at best.
Chance-Device: I think what you’re describing is a more general race to the bottom where everyone loses, including the AI companies.This won’t happen because the AI companies will collude to prevent it from happening, meaning they’ll drop out of that race leaving the rest of us to claim victory.Generous of them, really.
dsa3a: No, I'm not describing a race to the bottom. I'm saying that it's in Google's best interest to ensure Anthropic and OAI do not continue to operate as a going concern and generate enough cash flow to finance reinvestment - by providing a very competitive offering. The price of tokens is one competitive instrument for them to achieve that, but not the only one - they offer a whole lot more to enterprises that OAI and Anthropic don't. By doing so, Anthropic and OAI's valuations go crashing into the ground, along with future prospects of raising funding externally.
notyourwork: > OpenAI has abandoned its open source roots.It was only a matter of time. The font on the dollar sign kept increasing, eventually selfish humans will always crack. Keeping it open had to be instilled with it becoming a public utility. Private companies don't do altruistic things unless they benefit.
davesque: Wouldn't it be more correct to call the article "critical" and not "incendiary"? I looked it over and I don't remember seeing any calls to violence. Altman needs to remember that he holds an incredible amount of power in this moment. He and other current AI tech leaders are effectively sitting on the equivalent of a technological nuclear bomb. Anyone in their right mind would find that threatening.
h14h: "Critical" even feels strong. The article was essentially a collection of statements others have made about Sam.
stavros: Yeah, it's one thing to write an incendiary article, it's a very different thing to write an objective article about someone who will say anything to get what they want.
w10-1: I appreciate his post and his tone. No one should need to attack (on the one hand) or "trust" (on the other) Sam Altman (or Donald Trump or Barack Obama). Power is reliance by others, and that's conditioned on behaviors which are made observable and on systems to ensure stakeholders' interests are maintained. Yes, there's some hero-worship, some arbitrary private power, some evasion of systems, and some self-dealing by leader coalitions (indeed, we seem to be at a historical peak), but that's not about him personally but about us, and our willingness to vote (writ large). We do have to be careful about private power saying managing their issues is a matter for public governance (democratic or otherwise). It's a bit convenient to deflect blame (like having it be the jury that "decides" a case, because then you can't blame the judge). I like that Anthropic stepped up to pay any electricity increases, Apple has been recycling and cleaning up their supply chain, etc. If anything there should be stronger support for contributing vs. Hobbesian corporations.
isodev: I think that’s the realm of conspiracy theories. There are also not only Chinese alternatives- Mistral in Europe is doing pretty good in several categories they’ve opted to focus on.This kind of reiterates the parent’s question I think - people are maybe too focused on the gpt/claude model and forget about all the other ways of using the tech.
stavros: Is it? I thought it was pretty well established that open models were distilled from the proprietary, frontier ones. Maybe I'm wrong.
richardlblair: He's attempting to humanize himself in hopes his family home, where his child lives, isn't firebombed. Again. Very reasonable response when you take a step back.
rootusrootus: If you want to draw that distinction, then don't you need to account for intent? I don't think the USG intended to bomb a school. The guy throwing a Molotov cocktail has even less claim to it being an accident.
xvector: Look where France is now. Can't afford their own retirement.
throwatdem12311: I don’t think this will do much to help his image. They had to stop putting Luigi Mangione in the media because public sentiment was not going the way they expected.
DoneWithAllThat: Who is “they”?
ghshephard: Would any of the open weight models from smaller labs exist if they couldn't distill from the SoTA models that are throwing billions of dollars of compute into pretraining?
dmitrygr: > There was an incendiary article about me a few days ago [...] That is a lot of words, none of which state or claim the article was in any way inaccurate. Curious, that.
chihuahua: There will be a blinding flash which signals the superintelligence singularity. When the smoke clears, you'll see a 50-foot tall Altman/Borg hybrid. He is about to destroy humanity with his death ray. Suddenly, a 50-foot tall Musk/Borg hybrid appears out of nowhere, and stops Altman just in time. Then they work together to destroy all humans.
IAmGraydon: The guy is either mentally unwell or grifting. Most likely the latter.
therealpygon: Especially when Google is in the far better position to come out ahead…imo.
akramachamarei: Envy is a deadly sin for a reason
presides: > “Once you see AGI you can’t unsee it.” It has a real "ring of power” dynamic to it, and makes people do crazy things. I don’t mean that AGI is the ring itself, but instead the totalizing philosophy of “being the one to control AGI”. The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The analogy has 2 simple rules and you can't even follow them: #1 It MUST be destroyed. #2 SOMEONE has to have the ring until then. Without BOTH of those things you have no meaningful analogy. If we're being super charitable, "For no one to have the ring" is Frodo sitting at the council, with the ring on the table, naively thinking that it can stay right there in that spot forever, safe in Rivendell, about to have the horrifying revelation that there are 2.5 more books in the story. More realistically, it's Boromir moments later arguing that Denethor has the mandate to use it to fight on Gondor's behalf. Fuck. I'm so past the point of caring about the extinction of our species, or your role in enslaving us to our robot overlords or whatever... but SELLING US SPECIOUS RING ANALOGIES IS WHERE I DRAW THE FUCKING LINE
jesse_dot_id: Not that I excuse this behavior, but it's expected, is it not? He's claimed to have built the replacement for human labor while participating in the regulatory capture that ensures that process screws the affected parties out of any effective recourse. He's stood atop a soapbox, in earshot of everybody, and shouted to the corporations that because of him, they can now fire hundreds of thousands — millions — of people with impunity. It doesn't matter that it's not true and that the firings are probably not actually due to AI. But he's standing in front of them and providing the cover. He's a marketing guy. He made himself the face of AI. His message out of the gate was that it was going to replace human workers. What did he think was going to happen? It's like all of these people think that humanity has evolved out of the collective rage spirals that powered political revolutions in the 1500's, 1600's, 1700's — every 100's. Nope. It's always still there. We've had a middle class for a while to mask it but it's being hollowed out and when it collapses completely, that ugly and ever-present human urge to eat the rich will rage right back to the surface again. Yet, they all seem apt to fight to be first in line to be the face of injustice during a volatile period for some reason. It's kind of baffling but also interesting to witness.
grafmax: An oligarch who promotes “democracy”. Is he trying to cynically ingratiate himself, or is he really that deaf to the irony?
llbbdd: I think it's as realistic as it is simplistic. The State gets a monopoly on violence so that you can sue someone who wrongs you instead of killing them. When conversation and cash fail, violence is all that's left, and we concentrate that power in groups of people tasked with deciding when the alternatives have failed. It doesn't always work but it's a better alternative than the individualized bloodlust disappointingly endorsed elsewhere in this thread.
kelnos: > AI has to be democratized; power cannot be too concentrated. Control of the future belongs to all people and their institutions. AI needs to empower people individually, and we need to make decisions about our future and the new rules collectively. I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.What a bullshit thing for someone who is not actually democratizing access to AI to say.
maplethorpe: Maybe they're about to open source their weights?