Discussion
Anthropic CEO Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies,’ report says
6Az4Mj4D: Leaving autonomous weapons aside, how does Anthropic justify that they signed up with surveillance company Palantir and are now raising concerns about the same surveillance with the DoD? It doesn't match.
ekjhgkejhgk: It might match. The red line was domestic surveillance. You don't know what deal they had. Giving Anthropic the benefit of the doubt, perhaps Palantir said "Deal, we won't use your tool domestically".
twtw99: Running a for-profit company that actually sold a rails-off version to the DoD first and then taking the moral high ground is ridiculous. No single company should have the position of deciding what is right, tbh.
dmix: > signed up with surveillance company Palantir
Just to nitpick, Palantir isn't doing surveillance like Flock. They do data integration under contract for governments, the way IBM does. Some data pipelines include law enforcement surveillance data, which gets integrated with other software/databases to help police analyze it. There's no evidence they are collecting it themselves, despite recent headlines. It's a relatively minor but important distinction IMO.
https://www.wired.com/story/palantir-what-the-company-does/
freejazz: It's just marketing.
6Az4Mj4D: What does he mean in the last paragraph?
> Amodei wrote to his staff. “It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees.”
kartika848484: It's to poach them. Anthropic has the lowest attrition rate, and yesterday an OpenAI employee already left and joined Anthropic.
df2dfs: OAI is on track to sit in the same category as Palantir as a brand, and is pretty much going to either work with Palantir or compete with them for the precious funding from the govt. I know most of you here don't quite have the imagination to see it. But feel free to screenshot my post and let's talk in a year ;)
tbrockman: Whether or not you think it truly aligns with their stated values, in their partnership with Palantir (making Claude available within their AI platform) they requested consistent restrictions:
> “[We will] tailor use restrictions to the mission and legal authorities of a government entity” based on factors such as “the extent of the agency’s willingness to engage in ongoing dialogue,” Anthropic says in its terms. The terms, it notes, do not apply to AI systems it considers to “substantially increase the risk of catastrophic misuse,” show “low-level autonomous capabilities,” or that can be used for disinformation campaigns, the design or deployment of weapons, censorship, domestic surveillance, and malicious cyber operations.
Source: https://techcrunch.com/2024/11/07/anthropic-teams-up-with-pa...
VK-pro: I’m skeptical of your username and the fact that you commented twice in 23 minutes, ~10 minutes apart, à la the dead internet theory. But isn’t this a fairly simple statement? He hopes that the folks at OpenAI are not as gullible as the “Twitter morons.” If you've spent even a small amount of time with LLMs, you'll know that these security measures are just window dressing.
conartist6: Super sus; commenter is probably Sama in disguise.
vldszn: I built a website that shows a timeline of recent events involving Anthropic, OpenAI, and the U.S. government.Posted here: https://news.ycombinator.com/item?id=47195085
trinsic2: They are providing the software to do surveillance. They are definitely bad actors; you can dance around this all you want, but they are in it.
trinsic2: This exchange between Anthropic and OpenAI feels a lot like theater. If I were really trying to stop abuses, I wouldn't go out of my way to talk about it. The "public sees us as the heroes" bullshit feels like a smokescreen. I'd keep silent, let the public do the math, and not get involved.
spaghetdefects: Thank you. Anthropic is also culpable in the illegal war against Iran that started with the bombing and murder of an entire girls' school.
https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-...
behnamoh: Neither Anthro nor ClosedAI are trustworthy. Local AI all the way. And when I say local, I mean Apple Silicon; I don't like to contribute to Nvidia's monopoly either (fuck "buy a GPU"; the guy is an Nvidia-sponsored "influencer").
KnuthIsGod: Meanwhile Anthropic has no issues with helping Palantir...HypocrAIsy...
EA-3167: It’s all a nest of vipers, and frankly the idea of having sympathy for any of them is physically repellent.
sigmar: Why do you assume the contract with palantir doesn't have similar terms? Weird assumption.
Madmallard: They are all guilty.
clipsy: > They do data integration the way IBM does under contract for the governments
Good thing IBM's data integration was never used for ill! Oh, wait: https://en.wikipedia.org/wiki/IBM_and_World_War_II
elevation: The moral disposition of the Anthropic leaders doesn't matter because they don't own the company. Investors won't idly watch them decimate billions in ROI by alienating the largest institutional customers on the planet.
_alternator_: Anyone have a link to the full text of the letter?
etchalon: "Person says it's raining when it's raining."
zug_zug: Great, well DeepSeek is free for most use and certainly won't be helping the US military any time soon. Since you aren't paying them, you aren't really supporting anything bad they may do down the line.
bryant: > The moral disposition of the Anthropic leaders doesn't matter because they don't own the company. Investors won't idly watch them decimate billions in ROI by alienating the largest institutional customers on the planet.
Anthropic is a Public Benefit Corporation chartered in Delaware, with an expressed commitment to "the responsible development and maintenance of advanced AI for the long-term benefit of humanity." So in theory (IANAL), investors can't easily bully Anthropic into abandoning their mission statement unless they can convince a court that Anthropic deliberately prioritized the cause over profit.
estearum: Not hypocritical at all if you knew what Palantir actually does
cm2012: Good for Anthropic. Even AI at its current state has pretty scary surveillance capabilities.
taurath: Every single time the box is flipped over, whats inside is "more domestic surveillance". Who in their right mind would give the benefit of the doubt?
gjsman-1000: Nice assertion. Please provide citations, substance, or anything other than “you’re wrong definitely.”
GranPC: I found a copy on this website: https://www.teamblind.com/post/darios-email-to-anthropic-att... I don't know how reliable that source is. In any case, here's the text from that link, for posterity:
"I want to be very clear on the messaging that is coming from OpenAI, and the mendacious nature of it. This is an example of who they really are, and I want to make sure everyone sees it for what it is. Although there is a lot we don’t know about the contract they signed with DoW (and that maybe they don’t even know as well — it could be highly unclear), we do know the following:
Sam’s description and the DoW description give the strong impression (although we would have to see the actual contract to be certain) that how their contract works is that the model is made available without any legal restrictions ("all lawful uses") but that there is a "safety layer", which I think amounts to model refusals, that prevents the model from completing certain tasks or engaging in certain applications. "Safety layer" could also mean something that partners such as Palantir tried to offer us during these negotiations, which is that they on their end offered us some kind of classifier or machine learning system, or software layer, that claims to allow some applications and not others. There is also some suggestion of OpenAI employees ("FDEs") looking over the usage of the model to prevent bad applications. Our general sense is that these kinds of approaches, while they don’t have zero efficacy, are, in the context of military applications, maybe 20% real and 80% safety theater.
The basic issue is that whether a model is conducting applications like mass surveillance or fully autonomous weapons depends substantially on wider context: a model doesn’t "know" if there’s a human in the loop in the broad situation it is in (for autonomous weapons), and doesn’t know the provenance of the data it is analyzing (so doesn’t know if this is US domestic data vs foreign, doesn’t know if it’s enterprise data given by customers with consent or data bought in sketchier ways, etc). The kind of "safety layer" stuff that Palantir offered us (and presumably offered OpenAI) is even worse: our sense was that it was almost entirely safety theater, and that Palantir assumed that our problem was "you have some unhappy employees, you need to offer them something that placates them or makes what is happening invisible to them, and that’s the service we provide". Finally, the idea of having Anthropic/OpenAI employees monitor the deployments is something that came up in discussion within Anthropic a few months ago when we were expanding our classified AUP of our own accord. We were very clear that this is possible only in a small fraction of cases, that we will do it as much as we can, but that it’s not a safeguard people should rely on and isn’t easy to do in the classified world. We do, by the way, try to do this as much as possible; there’s no difference between our approach and OpenAI’s approach here. So overall what I’m saying here is that the approaches OAI is taking mostly do not work: the main reason OAI accepted them and we did not is that they cared about placating employees, and we actually cared about preventing abuses. They don’t have zero efficacy, and we’re doing many of them as well, but they are nowhere near sufficient for purpose.
It is simultaneously the case that the DoW did not treat OpenAI and us the same here. We actually attempted to include some of the same safeguards as OAI in our contract, in addition to the AUP, which we considered the more important thing, and DoW rejected them with us. We have evidence of this in the email chain of the contract negotiations (I’m writing this with a lot to do, but I might get someone to follow up with the actual language). Thus, it is false that "OpenAI's terms were offered to us and we rejected them", at the same time that it is also false that OpenAI’s terms meaningfully protect them against domestic mass surveillance and fully autonomous weapons. Finally, there is some suggestion in Sam/OpenAI’s language that the red lines we are talking about, fully autonomous weapons and domestic mass surveillance, are already illegal and so an AUP about these is unnecessary. This mirrors and seems coordinated with DoW’s messaging. It is, however, completely false. As we explained in our statement yesterday, the DoW does have domestic surveillance authorities that are not of great concern in a pre-AI world but take on a different meaning in a post-AI world. For example, it is legal for DoW to buy a bunch of private data on US citizens from vendors who have obtained that data in some legal way (often involving hidden consents to sell to third parties) and then analyze it at scale with AI to build profiles of citizens, their loyalties, movement patterns in physical space (the data they can get includes GPS data, etc), and much more. Notably, near the end of the negotiation the DoW offered to accept our current terms if we deleted a specific phrase about "analysis of bulk acquired data", which was the single line in the contract that exactly matched this scenario we were most worried about. We found that very suspicious. On autonomous weapons, the DoW claims that "human in the loop is the law", but they are incorrect.
It is currently Pentagon policy (set during the Biden admin) that a human has to be in the loop of firing a weapon. But that policy can be changed unilaterally by Pete Hegseth, which is exactly what we are worried about. So it is not, for all intents and purposes, a real constraint. A lot of OpenAI and DoW messaging just straight up lies about these issues or tries to confuse them. I think these facts suggest a pattern of behavior that I've seen often from Sam Altman, and that I want to make sure people are equipped to recognize: He started out this morning by saying he shares Anthropic’s red lines, in order to appear to support us, get some of the credit, and not be attacked when they take over the contract. He also presented himself as someone who wants to "set the same contract for everyone in the industry" — e.g. he’s presenting himself as a peacemaker and dealmaker. Behind the scenes, he’s working with the DoW to sign a contract with them, to replace us the instant we are designated a supply chain risk. But he has to do this in a way that doesn’t make it seem like he gave up on the red lines and sold out when we wouldn’t. He is able to superficially appear to do this, because (1) he can sign up for all the safety theater that Anthropic rejected, and that the DoW and partners are willing to collude in presenting as compelling to his employees, and (2) the DoW is also willing to accept some terms from him that they were not willing to accept from us.
Both of these things make it possible for OAI to get a deal when we could not. The real reasons DoW and the Trump admin do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot), we haven’t given dictator-style praise to Trump (while Sam has), we have supported AI regulation, which is against their agenda, we’ve told the truth about a number of AI policy issues (like job displacement), and we’ve actually held our red lines with integrity rather than colluding with them to produce "safety theater" for the benefit of employees (which, I absolutely swear to you, is what literally everyone at DoW, Palantir, our political consultants, etc, assumed was the problem we were trying to solve). Sam is now (with the help of DoW) trying to spin this as: we were unreasonable, we didn’t engage in a good way, we were less flexible, etc. I want people to recognize this as the gaslighting it is. Vague justifications like "person X was hard to work with" are often used to hide real reasons that look really bad, like the reasons I gave above about political donations, political loyalty, and safety theater. It’s important that everyone understand this and push back on this narrative, at least in private, when talking to OpenAI employees. Thus, Sam is trying to undermine our position while appearing to support it. I want people to be really clear on this: he is trying to make it more possible for the admin to punish us by undercutting our public support. Finally, I suspect he is even egging them on, though I have no direct evidence for this last thing. I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!).
It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees. Due to selection effects, they’re sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees."
pfisherman: This is very easy to explain. Anthropic outlines some limitations in their terms of service. Palantir accepted those terms. The DoD did not. OpenAI claims their terms of service for the DoD contain the same limitations as Anthropic's proposed service agreement. Anthropic claims that this is untrue. Now given that (a) the DoD terminated their deal with Anthropic, (b) stated that they terminated because Anthropic refused to modify their terms of service, and (c) then signed a deal with OpenAI, I am inclined to believe that there is in fact a substantial difference between the terms of service offered by Anthropic and OpenAI.
gjsman-1000: Basically it’s glorified Excel. Take it out on the database purveyors, not Palantir.
ImPostingOnHN: I think a company which provides a sensor-fusion dragnet for a government-run mass domestic civilian surveillance system is at least as culpable (and odious) as the ones supplying the data.
Loquebantur: “We’ve actually held our red lines with integrity rather than colluding with them to produce ‘safety theater’ for the benefit of employees (which, I absolutely swear to you, is what literally everyone at [the Pentagon], Palantir, our political consultants, etc, assumed was the problem we were trying to solve),” Amodei reportedly wrote. “The real reasons [the Pentagon] and the Trump admin do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot),” he wrote, referring to Greg Brockman, OpenAI’s president, who gave a PAC supporting Trump $25m in conjunction with his wife.
https://www.theguardian.com/technology/2026/mar/04/sam-altma...
mrandish: When @sama announced within hours that OAI was replacing Anthropic with the "same conditions", it was clear that either the DoW or OAI (or both) were fudging. The DoW balked at Anthropic's conditions, so OAI's agreement must have made the "conditions" basically unenforceable. And sure enough, my reading of it left the impression the OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."
nickthegreek: https://gizmodo.com/palantir-ceo-says-a-surveillance-state-i...
conradev: It is an important distinction. It’s the same with Facebook selling user data. Neither selling your data, as the carriers do, nor selling the ability to target you with your data, as Facebook does, is very nice. But legally they are separate things that need to be regulated differently. As is the case with Flock and Palantir.
stingraycharles: Yeah, it never made sense that Sam immediately said they had the same constraints, yet the DoW immediately agreed to it. From what I can see, OpenAI’s terms basically say “need to comply with the law”, which provides them with plenty of wiggle room with executive orders and whatnot.
SirensOfTitan: Like others have already mentioned: I think Anthropic's relationship with Palantir undermines Amodei's narrative here. It actually feels like Dario is playing Sam's game better than Sam is. Those who know better please correct me. My current understanding of Palantir (and other surveillance tech companies like Peregrine) is:
1. They facilitate the sale of data to law enforcement, enabling the government to circumvent Fourth Amendment protections.
2. They fuse cross-agency government data through Foundry into unified profiles, which the government can use to surveil and pressure citizens without probable cause or a warrant.
ICE also uses a Palantir tool called ELITE to build deportation target lists.
bko: Call me crazy, but I don't think a private corporation should have veto power over what a government agency can do with their product if it's within the law. They can choose not to sell to government agencies, that's fine, but to demand some kind of assurances that they're using it as per Anthropic's own ever-changing moral compass seems like an insane overreach for a private corporation. We still believe in democracy, right?
bigyabai: Iunno, this seems pretty dystopian to me: https://www.eff.org/deeplinks/2026/01/report-ice-using-palan...
charcircuit: The government knowing where you live is neither surveillance nor dystopian.
_jab: Sure, but it's not as if the DoD was planning on using Anthropic to _collect_ the data either? I assume that the hypothetical DoD use case Anthropic shied away from dealt with the processing of surveillance data, just like what Palantir does.
paxys: Sam Altman would lie? Nooo
aeon_ai: I get the sense that OpenAI is astroturfing “outrage and hypocrisy” in this thread.The dead internet is alive and well.
labrador: They are on X as well
sakesun: > it was clear that either the DoW or OAI (or both) were fudging.
This is my first thought as well. It's too obvious. He should have consulted ChatGPT before the announcement.
mullingitover: > if it's within the law.
The current administration has been caught flouting court orders in dozens of cases, to the point that courts are no longer even granting them the assumption that they’re operating in good faith. I can think of a million good reasons not to give these people the tools to implement automated totalitarianism. Your proposal that they simply refuse service to the government entirely would be ideal.
roywiggins: https://www.washingtonpost.com/technology/2026/03/04/anthrop...
> The military’s Maven Smart System, which is built by data mining company Palantir, is generating insights from an astonishing amount of classified data from satellites, surveillance and other intelligence, helping provide real-time targeting and target prioritization to military operations in Iran, according to three people familiar with the system...
> As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, said two of the people.
jheimark: That is crazy. You are suggesting that corporations should have no power over their own IP.Are you really saying that if Anthropic sells a limited version of their product to Palantir at a certain price, the government should be able to demand access to an unlimited version of Anthropic's product for free because they are a customer of Palantir?That would effectively mean the government gets an unlimited license to all IP of companies that do business with government suppliers... that would be terrible.
creddit: He has to know that this would leak and it makes him look really bad. This is going to be a meaningful, unforced error.
df2dfs: What's there to discuss? OAI is seeking a hand-out from the govt to save their asses. They (Sam + top management) see the writing on the wall and need help.
Spooky23: This. The OpenAI grift is to make itself too big to fail. They are playing a game of chicken ahead of the election circus. Trump must keep the market alive until November. Nvidia, Micron, Oracle, Microsoft are cooked when and if they pop.
bigyabai: That depends very much on how they use and disseminate that information.
SirensOfTitan: Their data integration and sale allows for the government to surveil citizens without probable cause or warrants.
trinsic2: It feels more like they are playing good cop/bad cop... There is just something indifferent about all of this that makes me wonder.
felipeerias: Are you sure about that? All the information I’ve seen suggests that the DoD has been using Anthropic’s models through Palantir. My understanding is that Anthropic requested visibility and a say into how their models were being used for classified tasks, while the DoD wanted to expand the scope of those tasks into areas that Anthropic found objectionable. Each side's proposal was unacceptable to the other.
trinsic2: Wow... See. I didn't even know it was this bad. You don't need much to silence these people that are supporting authoritarian collaborators.
lesuorac: I always just say Palantir is IBM 2.0. IBM, of course, has a problematic history.
hintymad: Honest question: why do people automatically equate "fully autonomous weapons" with something like a killer robot? My immediate reaction is that even the best-in-class rapid-fire gun has a hard time identifying and tracking drones. So we'd need AI to do better tracking, which leads to a fully autonomous weapon. And I really don't get why that's a bad thing. Of course, a company should have the freedom to choose not to do business with the government. I just don't think automatically assuming the worst intentions of the government is as productive as setting up a good enough legal framework to limit the government's power.
cheema33: > OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."
I believe this understanding is correct. The issue many people have these days with the Dept. of War, and most of the Trump admin, is that they have little respect for laws. They only follow the ones they like and openly ignore the ones that are inconvenient. The Dept. of "War" should have zero problems agreeing to the two conditions Anthropic outlined, if they were honest brokers. But I think most of us know that they are not. Calling them dishonest brokers seems very charitable.
reactordev: I haven’t seen them follow a law yet
websight: Who, Amodei? This makes him look the opposite of really bad
yed: What you are describing would be "partially autonomous." Per Dario Amodei's original statement here: https://www.anthropic.com/news/statement-department-of-war he had no issue with that. "Fully autonomous" specifically means that the AI chooses a target and engages without any human intervention at all. If the human selects or approves a target, and the weapon then automates tracking and engagement, that's still only partially autonomous.
cherioo: We don’t know if Palantir is using Claude for those uses, though Anthropic wouldn't know for sure either. I do agree with your point that Amodei is playing a game, though. Whether he’s winning the bigger picture or not is unclear. His red lines are already so watered down: domestic surveillance is not OK, but international? Totally fine.
SirensOfTitan: That's true. With the risks of LLMs applied to surveillance though, I think it's a "Caesar's wife must be above suspicion" moment. Association is guilt unless proven otherwise.
jfengel: If they're doing it against the terms of service (and publicly so), I can't pin that one on Anthropic. They've done lots wrong, and maybe they shouldn't have gotten in bed with the military to begin with, but this illegal war is not theirs. It rests squarely with the President who declared it. (And with the military officers who are going along with it despite the violation of international law.)
spaghetdefects: I don't think any AI company should get in bed with the military. That being said, if the terms of service have been violated, the account should be canceled.
benlivengood: We have traditional autonomous weapons (and counter-defense). They operate on millisecond or faster timescales with existing RF sensors. They are not and will not be using LLMs or other transformers. Maybe ChatGPT will update some realtime Ada code; they formally verify some of that stuff, so maybe that won't be terrifyingly dangerous. Where autonomous transformer-based munitions will be used is basically "here is a photo of a face, find and kill this human", and loitering munitions will take their time analyzing video and then decide to identify and attack a target on their own. EDIT: Or worse: "identify suspicious humans and kill them"
cfloyd: It’s all just theatre. These companies will either give in or die off and be replaced by those who offer more freedom of use. It’s capitalism and while it’s not always pretty, it’s how these things go. Choosing to take what you believe as the moral high ground is noble but it does not put your company ahead of the ball in the long term because there are always those who will use that as an advantage to step on their backs.
collingreen: Capitalism needs laws and regulation in order to not turn itself into feudalism. It isn't naivety or idealism to enforce fair markets and consumer protection. In my opinion it's existential.
stingraycharles: Wasn’t the trigger for all this what happened with Maduro earlier this year? From what I understood, Anthropic wasn’t very happy with how their systems were being used by the DoW through Palantir, which caused this whole feud.
el_benhameen: I’m not sure that “killer robot” is the actual concern outside of media hyperbole. I’m imagining a loitering munition-type drone that has some kind of targeting package loaded into it with different parameters describing what it should seek and destroy. Instead of waiting for intelligence and using human command to put the munition on target, it hangs out and then engages when it’s certain enough that it’s found something valid.In a world where LLMs produce very convincing but subtly wrong output, this makes me uncomfortable. I get that warfare without AI is in the past now, but war and rules of engagement and AI output etc etc etc all seem fuzzy enough that this is not yet a good call even if you agree with the end goals.
hintymad: Dario himself said that he was against using Claude to build a fully automated weapon because the technology was far from perfect, so he didn't want to hurt our soldiers or innocent people. I think his description matched a killer robot, and I don't agree with his reasoning because it's not like the military researchers didn't have the agency to find out what works and what doesn't.
hendzen: @pg on @sama: "you could parachute him into an island full of cannibals and come back in 5 years and he'd be the king."In retrospect this quote comes across as way more foreboding given what we've learned about the scale of his ambitions and his willingness to lie and bend reality to gain power.Dario on the other hand seems to have an integrity that's particularly rare in this era. I hope he remains strong in the face of the regime.
louiereederson: And they're reportedly back in talks with the DoW, per the FT (below). They are not the exception, and are just as bloodlessly, shamelessly publicity-hungry as any other tech co, if not more so. No surprise based on their conduct up until this fake event.
https://news.ycombinator.com/item?id=47256452
ncallaway: > I’m imagining a loitering munition-type drone that has some kind of targeting package loaded into it with different parameters describing what it should seek and destroy. Instead of waiting for intelligence and using human command to put the munition on target, it hangs out and then engages when it’s certain enough that it’s found something valid.
I'm sorry, you've just literally described a "killer robot" in more words.
ProofHouse: Don't be fooled. Dario's 'aw shucks, me' routine and 'but, but, but' is not all that it looks to be on the surface.
creddit: That he's talking shit about Altman who is, at least in public, only talking up Anthropic. This will only play well with people who hate Altman, which is not the majority or even much of the public. It plays right into Altman's hand who can do what he always does which is play his "smol bean billionaire" role and act like a victim of big bad Amodei.Just because you hate Altman doesn't mean everyone else does! Most people just know him as the guy who makes ChatGPT which most people like.EDIT: Also, it doesn't help to brag about how this is good actually because now they are getting app downloads! People sympathize with victims of unfair situations. They don't like seeing people take advantage of those unfair situations though. No one has ever found the welfare recipient bragging about their welfare to be sympathetic.
madeofpalk: ....why does this make him look bad? That he called out the obvious thing that everyone knows?
asveikau: I think I'm a bit more of an iconoclast than the average HN reader, but when this community was fawning over him when he was head of YC, I always got the impression, without knowing the guy or much about him, that it was totally undeserved. Mainly because thoughtless fawning of any kind makes me immediately suspicious. Nobody deserves that kind of praise.I read that quote and see no positive interpretation. It was always a negative description.I think maybe this community could use a bit more natural skepticism of hierarchy.
skeptic_ai: So mass surveillance of non-US citizens counts as having integrity?
virgildotcodes: Dario's full memo - https://pasteboard.co/4Qlmsorrytlk.jpg
mrcwinn: I was recently admonished by dang or dong or whatever his username is for criticizing Sam Altman’s personal character. But I’m here to say again, Sam Altman is a lying sack of sh*t and PG’s partially culpable for allowing a known lunatic to run OpenAI.
oxdgd38: We know how this story will end for Dario. See Oppenheimer, Turing, Lavoisier, Galileo, Socrates, etc. Power does not reside in the hands of people with knowledge or even wealth. And most technical people have not taken a political philosophy course, or even a philosophy course. The Ring of Gyges story is 4000 years old.
beepbooptheory: I do not believe the Ring of Gyges preceded Plato making it up for The Republic... Where are you getting 4000 years? Also maybe I'm not seeing the message or connection here... That myth isn't really about who has power or not, right? It's kind of just a trite little "why you should do good even when no one is watching" thing. It just serves Socrates for his argument with Thrasymachus, and leads us into Book 2 where it really gets going with Glaucon and all that. This is from memory so I might be a little off.
oxdgd38: I got it from Tamar Gendler's philosophy and human nature course on Open Yale Courses. She says it was a popular folk story, passed down orally well before it was written in a book. Plato used it because people grew up hearing the story. The story is asking: what's the source of morality? Who decides where the lines are? And it's not scientists. Science produces the Ring.
tkgally: Thanks for posting that link. Interesting reading, especially the closing:“I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with DoW as sketchy or suspicious, and see us as the heroes.... It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees. Due to selection effects, they’re sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees.”
toraway: I have no great love for Dario but his “talking shit” is literally making the point that what Altman is saying publicly is NOT actually in defense or praise of Anthropic and is a calculating, manipulative tactic. It's intended to muddy the waters about Anthropic’s actual position vs OpenAI’s, and to portray himself as a conciliator (for the audience of DoD/Trump) who is still bound by equally strong ethics (as a fig leaf for OpenAI’s employees sympathetic to Anthropic). All to swoop in and land a big contract from the same people he is making a show of “supporting” in public. I’d be pretty pissed too, tbh. Like, should he instead be thanking Sam effusively for being a manipulative slimeball?
felipeerias: Reportedly, Anthropic didn't know about Claude's role in capturing Maduro until they saw it on the headlines.
sjfaljf: I wonder who asked for these two safety conditions first, DoW or Anthropic. I remember reading earlier that the president's family is an early investor in OpenAI; Anthropic was winning this year; both companies are on the way to IPO. It could have been a trap, a lose-lose situation: drop the safety requirements and lose your reputation, or stay firm on safety and lose the contract.
karmasimida: And he is back to Pete Hegseth now? Lollll
lmeyerov: I find it confusing in most directions.Ex: For the above statement, if they're truly dishonest brokers and openly ignore the rules that are inconvenient, they would have zero problems agreeing to Anthropic's terms and then violating them. So what you say may still be quite true, but there would need to be more to it.Ex: DoW officials are stating that they were shocked that their vendor checked in on whether signed contractual safety terms were violated and require a vendor who won't do such a check. But that opens up other confusing oversight questions, eg, instead of a backchannel check, would they have preferred straight to the IG? Or the IG more aggressively checking these things so vendors don't?
_heimdall: I'd have money on OpenAI hiding behind the "all lawful use" phrasing to claim high levels of protection.He also claimed that they would build rules into the model the DoD would use, preventing misuse. Aka he claims OpenAI will quickly solve alignment and build it right in...I wouldn't hold my breath.
aardvarkr: I don’t care who is in the White House. Snowden revealed the crimes of the NSA in 2013 when Obama was president. They’re all going to want to use AI for mass surveillance.
3eb7988a1663: I have no idea what exactly Anthropic was offering the DoD, but if there were an LLM product, it's possible that the existing guardrails prevented the model from executing on the DoD's vision. "Find all of the terrorists in this photo." "Which targets should I bomb first?" Possible that even if the DoD wanted to ignore the legal terms, the model itself would not cooperate. The DoD required a specially trained product without limitations.
nradov: It's always hilarious watching online fights between tech industry billionaires, sort of like the geek version of UFC. The weirdest part is how regular people pick sides and defend their billionaire against the other guy.
sethops1: The only vibe I get from Altman is that he's a weasel, willing to say anything or burn whatever to get what he wants.
adriand: I think Amodei is widely underestimated. The consensus viewpoint on the deal that OpenAI struck with the Pentagon is that Anthropic got played. I disagree. I'm certain that Amodei and his team gamed this out. In doing so, I think there's at least two conclusions they would have drawn:1. Some other AI company would cut a deal with the Pentagon. There's no world in which all the labs boycott the Pentagon. So who? Choosing Grok would be bad for the US, which is a bad outcome, but Amodei would have discounted that option, because he knows that despite their moral failures, the Pentagon is not stupid and Grok sucks.That leaves Gemini or OpenAI, and I bet they predicted it would be OpenAI. Choosing OpenAI does not harm the republic - say what you will about Altman, ChatGPT is not toxic and it is capable - but it does have the potential to harm OpenAI, which is my second point:2. OpenAI may benefit from this in the short term, and Anthropic may likewise be harmed in the short term, but what about the long game? Here, the strategic benefits to Anthropic in both distancing themselves from the Trump administration and letting OpenAI sully themselves with this association are readily apparent. This is true from a talent retention and attraction standpoint and especially true from a marketing standpoint. Claude has long had much less market share than ChatGPT. In that position, there are plenty of strategic reasons to take a moral/ethical stand like this.What I did not expect, and I would guess Amodei did not either, is that Claude would now be #1 in the app store. The benefits from this stance look to be materializing much more quickly than anyone in favour of his courage might have hoped.
conception: All lawful use. And then they followed up with “intentionally doing illegal things.” If they happen to accidentally do illegal things, OpenAI is ok with it.
aardvarkr: I hate this so much. The nsa’s spying on everyone in 2010 was “legal” and I can only imagine how much worse it is now with AI to follow your digital footprint around everywhere. Too bad we don’t have any more whistleblowers like Snowden
conception: Anthropic has had a clear focus on AI safety since inception. The Department of Defense has, Monster-energy-drink style, rebranded itself back to the Department of War because “We’re men! We have to prove it so hard!”
neya: Given Gates' current reputation, I don't think this aged well.
derwiki: Why is that weird? We do that for UFC and other sports.
epicprogrammer: It's easy to frame this purely as an ethical battle, but there's a massive financial reality here. Training frontier models requires astronomical amounts of capital, and the DOD is one of the few entities with deep enough pockets to fund the next generation of compute. Anthropic turning down this Pentagon contract over safety disagreements is a huge gamble. They are essentially betting that the enterprise market will reward their 'Constitutional AI' approach enough to offset the billions OpenAI will now make from government defense contracts. OpenAI wants the DOD money while maintaining a consumer-friendly PR sheen; Amodei is just pointing out that they can't have it both ways.
aardvarkr: It’s a $200M contract. That’s not nothing, but it’s definitely not such a huge sum for these companies at their scale, when they’re spending billions on infrastructure. I’m sure Anthropic has signed up more revenue this week in response to this debacle to cover it. Where they’re actually screwed is if the gov follows through and declares Anthropic a supply chain risk.
neya: >Dario on the other hand seems to have an integrity that's particularly rare in this era.Anthropic actually partnered up with Palantir. They are not the saints you think they are, either.We should stop worshipping people and companies and stop putting them on pedestals. Just because one party is at fault, doesn't mean the other is automatically innocent. These are all for-profit companies at play here.https://investors.palantir.com/news-details/2024/Anthropic-a...
tmule: Oppenheimer? Really? Quoting a review of an Oppenheimer biography:“Oppenheimer was clearly an enormously charming man, but also a manipulative man and one who made enemies he need not have made. The really horrible things Oppenheimer did as a young man – placing a poisoned apple on the desk of his advisor at Cambridge, attempting to strangle his best friend – and yes, he really did those things – Monk passes off as the result of temporary insanity, a profound but passing psychological disturbance. (There’s no real attempt by Monk to explain Oppenheimer’s attempt to get Linus Pauling’s wife Ava to run off to Mexico with him, which ended the possibility of collaboration with one of the greatest scientists of the twentieth, or any, century.) Certainly the youthful Oppenheimer did go through a period of serious mental illness; but the desire to get his own way, and feelings of enormous frustration with people who prevented him from getting his own way, seem to have been part of his character throughout his life.”Seems more like Sam Altman, who is known to get his way, than Dario.
biffles: It was fascinating to see OpenAI’s gaslighting in action last week. Signing their deal with the DoW and then announcing it so publicly clearly had the goal to (a) portray Anthropic as unreasonable actors that couldn’t come up with a “safe” solution like OpenAI and (b) take away all the leverage Anthropic had in the contract negotiations. Clever (in a Machiavellian sort of way) but still can’t understand why they did it so blatantly — literally hours after Anthropic was designated persona non grata by the government. Clearly this has backfired in a massive way.In a way, I admire Dario’s stance and having the backbone to stand up to a government that is so happy to punish, legally or illegally, those that disagree with them. I certainly wouldn’t have the bravery (or stupidity) in his position — which frankly makes me happy that he’s running Anthropic and not someone like me…
mrandish: > we haven’t donated to TrumpAnother reason is that Sam Altman has been willing to "play ball" like providing high-profile (though meaningless) big announcements Trump likes to tout as successes. For example:> "The Stargate AI data center project worth $500 billion, announced by US President Donald Trump in January 2025, is reportedly running into serious trouble.More than a year after the announcement, the joint venture between OpenAI, Oracle, and Softbank hasn't hired any staff and isn't actively developing any data centers, The Information reports, citing three people involved in the "shelved idea."https://the-decoder.com/stargates-500-billion-ai-infrastruct...
sgustard: Here's the extracted text https://pastebin.com/LS2LpLZ7
freakynit: Is there a term for such a recurring cycle in which speculative bubbles form, institutions and governments collaborate/collude to sustain them, and when the system finally reaches a breaking point the bubble collapses... leaving the public to absorb the losses while those responsible largely walk away with their pay and bonuses intact?
fwipsy: I think the point is that there's potentially a lot more than $200m in defense dollars at stake here, in the future.
oxdgd38: The mistake here is thinking they can take on Power without really sitting in any official position of Power. Wikileaks and Assange got popular too. What happened to them? The State Dept and CIA do exactly what Assange did. They pick and choose who to target with leaks. They get away with it (mostly, even when exposed) because they officially are in power. Assange was not in power. If you take a moral position, do it when you have real power.
freakynit: Lol, right? I mean, who even has doubts left anymore on this?The guy can lie with a perfectly straight face. He's the kind of person who tells another lie just to cover the last one, and then another to cover that.Meanwhile he keeps making everyone more and more dependent on him, so by the time people finally realize what's going on, they can't afford to push him out.
derwiki: Lyft was briefly number one ahead of Uber, too
neya: > The weirdest part is how regular people pick sides and defend their billionaireSomeone told me in another comment that it's possibly bot activity. I suspect so too, because in a tech forum like HN, a top voted comment can shift the entire focus/narrative of any given issue. I know there are a lot of mods on here to prevent this sort of thing, but given how good LLMs have gotten, I wonder if we are at a point where humans can even discern cases where this is a mix of human and AI involvement in online activity (such as commenting).
fmajid: Or, as is likely, OpenAI models have no guardrails, Anthropic's did and the DoD was bumping into them.
kouteiheika: Help kill people[1]?[1] -- https://edition.cnn.com/videos/business/2020/07/24/thiel-pal...
waterproof: Sam donated $1M to Trump's inaugural fund. Dario did not.http://magamoney.fyi/executives/samuel-h-altman/
hedora: > Choosing Grok would be bad for the USThey chose Grok and OpenAI. The story was drowned out by the Anthropic controversy, but an xAI deal was signed the same week.
fmajid: If you look at his comments about Palantir and their proposed safeguards, it's clear it's a case of "if you are dining with the Devil, you'd better bring a very long spoon"
dmix: Oracle started by building databases for the CIA
ExoticPearTree: Unpopular opinion around here, but no company should have the ability to stop the military from its core mission: killing its adversaries through any means necessary.
dmix: The solution is still no different than a decade ago. Far stricter laws on intelligence, federal and local police surveillance, and a reduction in executive power which oversteps checks and balances.There will always be another IT company willing to do integrations even if Palantir dies. Software isn’t going away.
DaedalusII: sama looks like he has been punched in the face hard and is scared of being punched in the face again. dario comes across like a guy who has never even been in a fight and can't believe a fight is even real. there is something very dangerous about a person who believes that they are "good" and then believes that in fact their version of good is superior to the government, and that they should ignore the government which ostensibly represents the people, while building a technology that will make millions of white collar jobs go away (democrat voters) and revolutionise violence (dod/dow - republican voters). imagine if IBM decided in the 1960s they were going to start telling NASA/DOD how to use their mainframes, and saying USgov couldn't have an IBM if they were going to use it in vietnam etc. that said, i use claude
derwiki: What are you having good luck with on Apple Silicon? Or is this more of a statement for when local AI becomes “good enough?”(FWIW I am with you; I haven’t found a local model that works well enough to be a daily driver)
behnamoh: qwen models are basically <opus and >sonnet. 397b runs at Q8 on m3 ultra. for mbp m5 max I'd use the +120b qwen model.
ExoticPearTree: And why would they have an objection to that? They sold a product to a customer. They should have no business in how that customer uses their software.
senectus1: and? Anthropic might not sign up with DoD but they definitely still live in a glass house. Also, it's extremely evident that we live in a post-truth world. The accusation of lies doesn't have any teeth anymore. Especially in the post-law gov of America.
fmajid: His clear concern is to stay able to poach OpenAI employees (although it's really Google employees he should be after). He didn't give MAGA $25M like Greg Brockman did, and the Trump administration is pay-to-play, so the DoD contract ship has sailed.
hedora: Yes, Musk is guilty of treason for exactly that reason. He directly sabotaged a major US military operation in Ukraine.However, the military is bound by US and international law. It's clear they're not going to obey either of those with respect to this contract.On top of that, Anthropic has correctly pointed out that the use cases Trump was pushing for are well beyond the current capabilities of any of Anthropic models. Misusing their stuff in the way Trump has been (in violation of the contract) is a war crime, because it has already made major mistakes, targeted civilians, etc.
dota_fanatic: I've heard Palantir is essentially the only federal cloud vendor with this administration for secure services. By "partnered up with Palantir", do you mean they provided their models to the government? Or something more?
ExoticPearTree: I think they said they will comply with the law and Pentagon policies. And: 1. there is no law currently prohibiting autonomous weapons platforms. 2. the Pentagon can create policies overnight allowing all kinds of stuff. So yeah, OpenAI is going to make a lot of money from actually doing what the military asks of them.
throwaway290: Any company is free to choose its business partners and set terms to them. "Don't like our terms, don't partner with us"If government can force any private company to work specially for government then US is no better than PRC
estearum: Anthropic doesn't have an issue with their technology "helping kill people," so correct, that would not be hypocritical.
creddit: At least as it's presented in the article, there's no more reason to believe Amodei than there is Altman and Altman is presenting it in a less impassioned way which makes him more believable to anyone who doesn't have in-depth knowledge of the situation.Going "what he's saying is straight up lies" is no more evidence backed than Altman claiming he asked the DoD to have Anthropic given the same deal as OAI and have the SCR designation avoided.
fmajid: Altman was fired by his own board for lying to them. Just because Microsoft blackmailed them into reversing this decision by threatening financial ruin does not change that.You don't give habitual liars the benefit of doubt.
xrd: If I start a small business that sells Apples and the US government comes to me and says "we want to buy your apples and fire them at high speed to" these are now your words "kill adversaries through any means necessary."If I say, no, then am I stopping the military?I feel like it is reasonable that I can say "no, I don't want to sell you my apples."I cannot for the life of me figure out why that means I am stopping the military from killing people. The US Military will definitely still be able to kill people for centuries. I'm just saying I don't want to participate in it.
username223: > pg's sama praise bewilders me. Is there some other Sam Altman he's talking about?Paul Graham was a pudgy mediocrity clever enough to capitalize on nerds' obsession with Lisp, and leverage it into f-you money. Game recognized game in the shape of Sam Altman.
DaedalusII: it's reasonable praise. a 19 year old social outcast who grew up in the midwest drops out of an ivy league school and starts a company before smartphones exist that he sells for $43 million at age 27, then invested almost all the money into more startups, became a billionaire, and hijacked chatgpt from the richest person in the world. it's not a comment on his ethics or morality
warkdarrior: Licensing is a thing. See requirements that, for example, GPL3 places on customers.
dolphinscorpion: Grok is chosen because Musk spent $250+ million to elect Trump and is expected to underwrite the 2026 elections. Also, a lot of Trumps and their friends are invested in SpaceX. So they give them money too, but use OpenAI or Claude. I have a feeling that the military likes Claude more
thisisit: Most likely scenario is that if it does something “unlawful” and found out - claim that “These machines are black boxes and they don’t know what went wrong. They will set up an investigative committee and find out.”
shigawire: More likely assumed (perhaps rightfully) that there would be no consequences anyway.
sfink: There's a reason it's unpopular.If your company makes an herbicide that happens to be very good at killing off anyone who drinks it at a high concentration in their water supply, you're saying that there should be no way for your company to resist being used for mass murder (including unavoidable collateral damage)?Also, the core mission of the military is not "killing its adversaries through any means necessary". It is to defend state interests. Some people have a belief that mass killing is the best mechanism for accomplishing that. I do not agree with, nor do I want to associate with, those people. They are morally and objectively wrong. Yes, sometimes killing people is the most effective -- or more likely, the quickest -- way. In practice, it doesn't work very well. The threat of violence is much more powerful than actually committing violence. If you have to resort to the latter, you've usually screwed up and lost the chance to achieve the optimal outcome. It is true that having no restrictions whatsoever on your ability to commit violence is going to be more intimidating, but it also means that you have to maintain that threat constantly for everyone, because nobody has any other reason to give you what you want.The actual military is not evil. Your conception of it is.
solenoid0937: They engage with Palantir for non-domestic purposes.
ExoticPearTree: That is with the Pentagon directly only. Now they will lose much more because no defense contractor, subcontractor and so on can use them for anything defense related (even if they use the model to invent a new type of screw, if that screw is going to be used in anything military).So yeah, they bet a whole lot on “look at us, we have morals”.
hedora: There's no legal basis for blocking defense contractors from using them. Trump's claiming he can do so, but the law doesn't back him up. He'll lose in any fair court, or any corrupt court that values billionaire interests over virtue signaling to the orange one (like the Supreme Court).Also, they got a huge PR win, and jumped to #1 on the Apple App Store. Consumer market share is going to decide which of the AI companies is the market leader, not fickle government contracts.
BLKNSLVR: Let's just not put Dario / Anthropic on an undeserved pedestal. "Well, they're not as bad as Sam / OpenAI" is not, and should not be, much of a compliment.
solenoid0937: Could you please elaborate on why the pedestal is "undeserved" when they are willing to stick up for their principles at the expense of being designated an SCR? Could you point me to one other $300B+ company that would be willing to do this?
xvector: If you actually read the memo they've clearly put in strict terms with Palantir and rejected many of the false "safeguards" offered by the company
JumpCrisscross: For consumer ChatGPT accounts, go to their privacy portal [1] and, first, delete your GPTs, and then, second, delete your account.[1] https://privacy.openai.com/policies?modal=take-control
hedora: They basically are cancelling the contract, but there are some nuances on Anthropic's side. The contract probably has stipulations that prevent them from doing it overnight, so it might be illegal (but ethical) for them to just turn off the API keys.Also, doing that might have bad second order effects with bad ethical implications.For example, when Musk decided to pull the plug on a bunch of starlink terminals, he (intentionally and knowingly) blocked a US-funded attack that would have sunk a big chunk of the Russian navy, which certainly prolonged the Ukraine war. That was clearly an act of treason (illegal).Anyway, just turning off Claude could kill a bunch of civilians in the region or something. It depends on how deeply it's integrated into military logistics at this point.Anyway, your point certainly holds for OpenAI:They walked into a "use ChatGPT for war crimes, and illegal domestic surveillance / 'law enforcement'" deal with open eyes, and pretty obviously lied about it while the deal was being signed. I don't see any ethical nuance that would even partially excuse their actions.
SoftTalker: You might want to read about the War Production Board during World War II. Established by a presidential executive order no less.
genxy: When shit hits the fan they are going to blame AI, but then not even use hand sanitizer. They will 100% be using OAI as a scapegoat, although I'd like to see the OAI goat stay and someone else run into the woods.All Lawful Use is a tautology with fascists because they cannot break laws by definition.
galangalalgol: Does anyone else notice claude is just plain better at reasoning? It may not just be post-training guardrails. It would not surprise me if it was something anthropic couldn't simply disable. Either from reinforcement or even training corpus curation. Of all the models, claude is the only one that makes me wonder if they have figured out something beyond stochastic language generation and aren't telling anyone.
solenoid0937: I have noticed this too, despite the close benchmark results Claude just works better. It knows when to push back, it has an "agency"... there is something there that I don't see with Gemini or OpenAI's best paid models.
Towaway69: How do I cancel my subscription to the DoW?The bigger picture is that the DoW got what it wanted and it got it by threatening one company while the other did its bidding.
palmotea: >> Unpopular opinion around here, but no company should have the ability to stop the military from its core mission: killing its adevarsaries through any means necessary.> The actual military is not evil. Your conception of it is.You're right, but there's a real question here: should a company have the ability to control or veto the decisions of the democratically-elected government?To give a different hypothetical example: should Microsoft be allowed to put terms in its Windows contracts with the government, stipulating that Windows cannot be used to create or enforce certain tax policy or regulations that Microsoft disagrees with? Windows is all over, and I'm sure pretty much every government process touches Windows at some point, so such a term would have a lot of power.
ExoticPearTree: In the context of the larger discussion, if you already sold apples to the military, you cannot go to them and say you don't like how they're using the apples you sold them.
blueblisters: Wow. Surprising to see open hostilities between the leaders of the big ai labs. The differences appear to not just be competitive but also ideological.
SoftTalker: OpenAI: Is that... legal?DoD: I will make it legal.
throwaway290: Wasn't that for defense during an actual war started by another country?Legit war time measures can be a thing (that's why it's fucked if president can just start a war and then use that as excuse for any war time measures they like)
asey: And yet https://news.ycombinator.com/item?id=47256452
hedora: Usually just "bubble", since it's so common. This one is unusual in that the government started bailing out the AI companies last year. Usually, it waits until the bubble pops, and then starts the bail outs. That's standard operating procedure for Trump though. He did the same thing in 2016-19 with the zero interest rate policy + tax cuts even though the economy was strong. Any macroeconomics book (or NPR station during those years) will tell you that doing that creates short-term economic growth, but sets the next administration up for [hyper-]inflation. Of course, that happened, and those same books go on to say "and, usually, because inflation takes a bit to kick in, the next president will be blamed. This is why we have an independent Fed". So, this time around, he's trying to pull the same crap by dismantling the Fed, and, until then, lean hard into deficit spending to keep unemployment low. Last year, money went to data centers, and domestic paramilitary actions and prison build-outs. This year, we have those things and a new pointless forever war. However, it's not working the same way as it did last time. He's done so much other collateral damage that we're in a "boomcession" where the economic indicators become untethered from reality. So, they show growth, but people's quality of life, spending power, job security, and so on all decrease. For example, a piece of the GDP is "how much does your bank screw you per year on your checking account?". This is treated like discretionary spending, and it's gone up from a few hundred a year to over $2000 in 2025. That increase counts as economic growth, instead of institutionalized theft. Medical spending increases drove all the US's GDP growth last quarter. The quarter before that, it was spending on AI datacenters that's backed by junk loans and federal dollars. Anyway, I don't have an answer for your question better than "bubble", but the current economic cycle is not what you described. It is a "boomcession". As far as I can tell, it's a new class of economic disaster, at least in the US.
DesaiAshu: It's not "just" a $200m contract, it's the start of a lucrative relationship1. Stargate seemed to require a dedicated press conference by the President to achieve funding targets. Why risk that level of politicization if it didn't?2. Greg Brockman donated $25mil to Trump MAGA Super PAC last year. Why risk so much political backlash for a low leverage return of $200m on $25m spent?3. During WW2, military spend shot from 2% to 40% of GDP. The administration is requesting $1.5T military budget for FY2027, up from $0.8T for FY2025. They have made clear in the past 2 months that they plan to use it and are not stopping anytime soonIf you believe "software eats the world" it is reasonable to expect the share of total military spend to be captured by software companies to increase dramatically over the next decade. $100B (10% of capture) is a reasonable possibility for domestic military AI TAM in FY2027 if the spending increase is approved (so far, Republicans have not broken rank with the administration on any meaningful policy)If US military actions continue to accelerate, other countries will also ratchet up military spend - largely on nuclear arsenals and AI drones (France already announced increase of their arsenal). This further increases the addressable TAMGiven the competition and lack of moat in the consumer/enterprise markets, I am not sure that there is a viable path for OpenAI to cover it's losses and fund it's infrastructure ambitions without becoming the preferred AI vendor for a rapidly increasing military budget. The devices bet seems to be the most practical alternative, but there is far more competition both domestically (Apple, Google, Motorola) and globally (Xiaomi, Samsung, Huawei) than there is for military AIHaving run an unprofitable P&L for a decade, I can confidently state that a healthy balance sheet is the only way to maintain and defend one's core values and principles. 
As the "alignment" folks in the AI industry are likely to learn - the road to hell (aka a heavily militarized world) is oft paved with the best intentions.
solenoid0937: You have the right philosophy on the balance sheet side of things, but what you're missing is that researchers are more valuable than any military spend or any datacenter.

Dario & co are not as naive as you seem to think they are; these are not starry-eyed naive "alignment people", this is a calculated decision.

It does not matter how many hundreds of billions you have - if the 500-1000 top researchers don't want to work for you, you're fucked; and if they do, you will win because these are the people that come up with the step-change improvements in capability. There is no substitute for sheer IQ, you can't buy it (god knows Zuck has tried, and failed to earn their respect) and you can't build it (not yet, at least.)

Had Anthropic gone forth with the DoD contract, they would have lost their top crowd. On the other hand, by rejecting it, Anthropic's recruiting just got much easier (and OAI's much harder.)

Generally, the military/Pentagon crowd have a somewhat inflated sense of self worth. Very few highly intelligent people with the skills to join these companies want to contribute to the war machine. If OpenAI becomes a glorified military contractor, they will bleed talent. No one with a moral compass wants to work for Palantir, and at this level of in-demand skillset, you have the optionality to choose where you work.

Finally, the Anthropic restrictions will last, what, 2 more years? They are being locked out of a narrow subset of usecases (DoD contract work only - vendors can still use it for all other work - Hegseth's reading of SCR is incorrect) and have farmed massive reputation gains for both top talent and the next administration.
teruakohatu: Why?

If you have so little faith in them that they won’t honour the privacy controls you should also delete your non-consumer account too.
b112: Consumer market share? Absolutely not.

If you look at what generates cash, it's corp to corp. That's across most industries. While there are markets that are consumer mostly, LLMs have immense and enormous business facing revenue potential. The consumer market is a gnat in comparison.
ExoticPearTree: There are always Executive Orders that can enforce that. It is not like in the movies where they will sort stuff out in 2 weeks in a single trial. It is going to take years, and we'll see if Anthropic survives that.
jitl: their revenue went up 4 billion in the week since this story started.
hedora: "Non-domestic purposes" specifically includes wiretapping US citizens and residents, and has for at least 25 years:

https://en.wikipedia.org/wiki/NSA_warrantless_surveillance_(...

I suspect the 2007 in the title refers to the fact that bills were passed to ban this stuff in 2007, which is when the PRISM program (also illegal domestic surveillance) got started.

(The title makes it sound like warrantless surveillance lasted from 2001-2007, but I think it means the article only covers that date range.)
mi_lk: Most people don’t care about this drama, and for those who care, based on everything I read, this letter will mostly make Anthropic look good / re-establish Sam Altman as a liar.

But of course we could live in different bubbles.
hedora: It's funny you'd pick IBM:

https://en.wikipedia.org/wiki/IBM_and_the_Holocaust

Though, I guess IBM did get away with lots of stuff that... Actually, did any supply companies in the WWII German war machine actually get in trouble for war crimes, or did they just go after officers and the people actually working in the camps?

The company selling punchcards that were used for logistics was apparently fine. What about the people making the gas canisters, or supplying plumbing fixtures? The plumbers? Where's the line?

Wondering, since this is increasingly becoming a current events question instead of an academic concern.
DrSAR: There were the so-called Subsequent Nuremberg Trials (12 of them). Among them were the trials of IG Farben (gas chamber supplies, Zyklon B) and Krupp (armament of the German military forces in preparation of an aggressive war).

I'm under no illusion that all the perpetrators of war crimes were held accountable but it's not a bad model.
ExoticPearTree: My conception is that the world would be a much simpler place if war was total. No one would start it unless it was 200% sure it could win. And we would all go through military training just in case, you know, a neighbor drank too much last night and thinks he can win against you.

> The threat of violence is much more powerful than actually committing violence.

While I agree with this statement, the only way the threat works is if from time to time you apply violence to reinforce your capability and willingness to actually do it. And the US is really good at actually being violent so others don't even think about doing something against it, at least the majority of countries anyway.
throwaway173738: More to the point, if everyone stopped selling anything to the military they would still be able to kill people with their bare hands. People are arguably very good at killing people and it takes civilization to train us not to kill each other.
LarsDu88: Greg Brockman donated 25 million dollars, and DoW gives OpenAI a 200 million dollar contract.

Just good ol' fashioned grifting mixed with a bit of government corruption.

This country has been boiling the frog of graft, grifting, and corruption for too long.
throwaway173738: The only saving grace is that the killbots had a pre-set kill limit which I exceeded by throwing wave after wave of my own men at them until they simply shut down.
davidw: By voting.
ExoticPearTree: "Legit war time measures" is not a thing. If Congress declares war on Cuba or Venezuela, for example, people who do not support it will not see the measures as "legit". The US has a lot of precedent of bombing/invading other countries at the whim of presidents without actually calling it a war, going back decades.

And for better or worse, it is actually good that it is like this. Otherwise, if Congress declares war on Iran or China or whatever, the whole country will be put on a war footing, companies will be directed to build whatever the Pentagon says it needs, drafts will be enforced and so on. And it would be pretty ugly.
trinsic2: IMHO everyone needs to cancel their subscriptions with all of the AI products until stuff blows over. I don't trust anyone in this industry.

There is probably one person or one group behind all of these AI companies that just needs to keep the engine going until they figure out how to replace everyone with bots that can do the dirty work.
jitl: there’s a lot of financial incentive to start ur own lab if u can, and invest in as many as u are able
blueblisters: In a broader context, both labs are engaging in "safety theater".

Neither knows how to solve the alignment problem, while market pressures are making them race towards capabilities that will have disastrous consequences (long horizon, continual learning).
sixothree: I'm not so sure Facebook is an apt analogy. Have we forgotten all the times Facebook has actually sold personal data?
nso: * spawn 8 investigative agents
sfink: > You're right, but there's a real question here: should a company have the ability to control or veto the decisions of the democratically-elected government?

I don't think "control or veto" is fair. Anthropic is not trying to prevent the US government from creating fully autonomous killbots based on inadequate technology. They are only using contract law to prevent their own stuff from being used in that way.

But that aside, my opinion is that to a first order approximation, yes, a company should very much have a say in its contract negotiations with any party, including the government. It's very similar to the draft. I don't believe a draft is ethical until the situation is extreme, and there ought to be tight controls on what it takes to declare the situation that extreme. At any other time, nobody should be forced to join the military and shoot people, and corporations (that are made of people) should not be forced to have their product used for shooting people.

A corporation is a legal fiction to describe a group of people. Some restrictions can be placed on corporations in exchange for the benefits that come from that legal fiction, but nothing that overrides the rights of its constituent people.

Governments are made of people too. Again, a subset of people are given some powers in order to better achieve the will of the people, but with tight controls on those powers to keep the divergence to a minimum. (Of course, people will always find the cracks and loopholes and break out of their constraints, but I'm talking about design, not real-world implementation here.)

So to look at your hypothetical, first I'd say it's not very different from the question of whether an individual person should be forced to personally enforce tax policy. Normally, I'd say no.
There are many situations where the government needs more say and authority in such things, but that must only be achieved via representatives of the people passing laws to allow such authority. Other than that, yes: I believe a company should be able to negotiate whatever contract terms it wants. In a democracy, we are not subjects of a controlling government; the government is an extension of us.

In practical terms, if Microsoft were to insist on that contract stipulation, the government would not agree to the contract and would award its business to someone else. If the government were especially out of control and/or unethical, it might punish Microsoft with regulations or declarations of supply chain risk or whatever, but that is clearly overstepping its bounds and ought to be considered illegal if it isn't already. The usual fallback would be that the people would throw the people perpetrating that out on their asses. That's the "democratically-elected" part.

Obviously, Microsoft would be stupid to insist on such a thing in their contract, and its employees would probably lose all confidence in the corporate leadership. Most likely, they'd leave and start Muckrosaft next door that rapidly develops a similar product and sells it to the government under a reasonable contract.

Basically, I'm always going to start from people first, and use organizations and laws only in order to achieve the will of the people. The fact that the people are stupid does make that harder, but the whole point of democracy is that we'll work out the right balance over time.
CamperBob2: The idea isn't that Oppenheimer was a saint, but that the government he served well and faithfully -- some would argue, at the expense of his soul -- turned on him viciously as soon as he dared to question their agenda.
cobbzilla: Secret FISA court decisions are also law, the public just can’t see or challenge them. So we really have no idea what is considered lawful.

If the contract says “all lawful use” it’s a blank check to the state.
sixothree: I'd hate to break it to you, but companies do have a right to determine how their products are used. You were subject to that when you wrote that comment. Did you not notice that?
BLKNSLVR: https://time.com/7380854/exclusive-anthropic-drops-flagship-...

https://news.ycombinator.com/item?id=47145963

Just trying to make sure folks aren't getting ahead of themselves, without having put some custom thought into it.

If you want to put them on a pedestal for reasons that make sense to you, all good.

If others are encouraged to form their own opinions by taking some pause for thought, then all the better.

If Anthropic still end up on the pedestal, it must be for the right reasons, as opposed to 'just because they're not the currently discussed villain'.
cobbzilla: per other Snowden comments, “all lawful use” means whatever we want it to mean.Secret FISA court decisions will say the use is lawful, but you’ll never get to read or challenge those decisions.
sixothree: The problem here is that this department claims its adversaries are Americans. Do you think Anthropic should aid in the killing of Americans?
throwaway173738: On the other hand military researchers once considered training pigeons to act as torpedo guidance systems by pecking on levers.
sixothree: It's not only single comments, but if you surround people in a sea of opinion, they will definitely start swimming in your direction. Though, that's probably more important on reddit.
buttercraft: This is wildly dishonest phrasing. A company should be able to set terms in a contract that it is negotiating. The other party can accept those terms, try to negotiate different terms, or walk away from the deal. One party is not "controlling" the other party in a negotiation, by definition of the word "negotiation."
henry2023: Please treat this post as a reminder to cancel any subscription to OpenAI and delete your account from their platform.

Maybe it’s not much and they probably won’t care, but taking no action here is the same as being complicit.
sixothree: I'm guessing they believe they will be around longer than this administration.
techpression: They still need a lot of money, and what their VCs think is going to be more important than what Amodei does. Nothing more profitable than war and government.

App Store rankings are meaningless. I have Claude, ChatGPT and Gemini all in the top five, with an email app at 1 and a postal tracking app (for a very small provider) at 3.
internet101010: The value of hyperscalers' equity in Anthropic alone dwarfs their contracts with the government. Not to mention the revenue from hosting their models that helps justify the insane capex. Anthropic going to $0 would be a huge haircut to all of their balance sheets.
Cantinflas: That's not their mission, in any country, ever.
sfink: In the context of the larger discussion, Anthropic thought of that ahead of time and put the restrictions into the contract that the government agreed to. So "already sold" is a non-sequitur; that's not the situation under discussion.
foltik: > and they should ignore the government which ostensibly represents the people

Barely represents the people. Especially not on the issue of domestic mass surveillance and fully autonomous killing machines. Or the war in Vietnam.
mcmcmc: So firearms dealers should be fine with their customers going on mass murder sprees?
don_esteban: Did the NSA's spying on everyone change between Democratic and Republican governments?
retsibsi: It's very easy to adopt a posture of above-it-all cynicism, and to think that anyone who sees an important distinction between two flawed powerful people is a sucker. But it's not particularly smart or sophisticated, and it's not helpful. In politics, the assumption that they're all equally corrupt and sociopathic is exactly what the worst of them want us to default to. In rich-guy PR wars, too, it's only going to work to the benefit of the ones with 0 principles, at the expense of the ones with some principles.
don_esteban: Re: My conception is that the world would be a much simpler place if war was total. No one would start it unless it would be 200% it could win itNow apply the same logic to the current Iran war.
ori_b: Did you vote in the primaries for a candidate that might change it?
jaredklewis: > DoW balked at Anthropic's conditions so OAI's agreement must have made the "conditions" basically unenforceable.

I think it’s also possible DoW didn’t care about the conditions but just wanted some pretext to punish Anthropic because Dario isn’t a Trump boot licker like the rest of the SV CEOs.
hn_throwaway_99: Agree with this completely.

But besides Sam Altman, this whole episode has made me totally and completely lose all respect for Paul Graham. I used to really idolize pg, and I really used to like his essays, but over the years I've found his essays increasingly displayed a disturbing lack of introspection, like they'd always seem to say that starting a startup is the best thing anyone can do, and if you're not good at startups then you kind of suck.

But his continued support of Altman in this instance (see https://x.com/paulg/status/2027908286146875591, and the comment in that thread where he replies "yes") is just so extra disappointing and baffling. First, his big commendation for Altman is that he's doing an AMA? Give me an f'ing break. When someone is a great spin doctor I'm not going to commend them for doing more spinning. It's like he has total blinders on and is unwilling to see how sama's actions in this instance are so disgusting and duplicitous. Maybe subconsciously he knows he's responsible for really launching sama into the public consciousness, so now he's just incapable of seeing the undeniably shitty things sama has done.

Oh well, I guess it's just another tech leader from the late 90s/early 00s who has shown me he's kind of a shitty person, like a lot of us.
panta: > Choosing OpenAI does not harm the republic

If we consider AIs as "force multipliers", as we do with coding agents, it's easy to see how any AI company can harm the republic if the government they are serving is unethical and amoral.
freakynit: Thank you for such a good and detailed explanation. Loved reading through it. And I like the new word: "boomcession" (not the effects of it tho).
techpression: They’ve only invested a couple of billion, like 20 or so split between them. Not really something that hurts them long or even medium term. Microsoft has multiple multi-billion dollar government deals; I think Amazon is the only one that doesn’t. Google also has a lot of government contracts, especially outside of cloud.
throwaway290: if you didn't notice we are talking about WWII

USA was not the aggressor

fat chance of Congress declaring a war of aggression on a peaceful country
devinplatt: FWIW he gives his ethical reasoning on his website:

> Broadly, I am supportive of arming democracies with the tools needed to defeat autocracies in the age of AI—I simply don’t think there is any other way. But we cannot ignore the potential for abuse of these technologies by democratic governments themselves. Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inwards against their own population, but because AI tools require so few people to operate, there is potential for them to circumvent these safeguards and the norms that support them. It is also worth noting that some of these safeguards are already gradually eroding in some democracies. Thus, we should arm democracies with AI, but we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.

Basically, he's afraid that not arming the government with AI puts it at a disadvantage vs. other governments he trusts less. Plus, if Anthropic is in the loop that gives them the chance to steer the direction of things a bit (what they were kicked out for doing).

It's not the purest ethical argument, but I also would not say that there is a clearly correct answer.
dev_l1x_be: Are you arguing against free market capitalism in favor of fascism? If OpenAI needs billions of taxpayers' money to survive, then should that project exist? Why?
neya: Basically he's asking everyone to trust him that he won't cross the line himself. Whatever argument he makes for democracies applies to him as well, and he's not somehow above it. That's the flaw in his argument.

Brutally honest, to me it just sounds like a very elaborate way to say "trust me, bro".
vhiremath4: This is an interesting perspective. What happens if there is a large global war? Do researchers who were previously against working with the DoD end up flipping out of duty? Does the war budget go up? Does the DoD decide to lift any ban on Anthropic for the sake of getting the best model, and does Anthropic warm its stance on not working with autonomous weapons systems?

I don’t know the answers to these questions, but if the answer is “yes” to at least 1 or 2, then I think the equation flips quite a bit. This is what I’m seeing in the world right now, and it’s disconcerting:

1. Ukraine and Russia have been in a skirmish that has drawn out much longer than most people would have guessed. This has created a divide in political allegiance within the United States and Europe.

2. We captured the leader of Venezuela. Cuba is now scared they are next.

3. We just bombed Iran and killed their supreme leader.

4. China and the US are, of course, in a massive economic race for world power supremacy. The tensions have been steadily rising, and they are now feeling the pressure of oil exports from Iran grinding to a halt.

5. The past couple days Macron has been trying to quell tension between Israel and Lebanon.

I really hope we are not headed into war. I hope the fact that we all have nukes and rely on each others’ supply chains deters one. But man does it feel like the odds are increasing in favor of one, and man does that seem to throw a wrench in this whole thing with Anthropic vs. OpenAI.
n6hdhf: More like they will feed the machine bullshit like "WMDs exist in Fiji. My gut says so. My mom always believes me." The machine will call it out. Then they want an override. The machine will log it. Then they want an "erase log" button, etc. Institutions and rules didn't fall from the sky. They evolved to damp the damage caused by such behavior.
ithkuil: If Congress declared an actual war and declared the use of war time laws to force a private company to comply with the war effort, we wouldn't be having this conversation.

What happened was different: a private company decided to enforce some terms, as they can do during peace time, and they have been bullied in a way that is disgraceful precisely because it didn't happen during war time, nor was it done using the existing laws around that.

What is the purpose of having laws in the first place if we accept that the government can rule by intimidation?
ExoticPearTree: No, I do not think they do. If I buy a car and run somebody over on purpose, the manufacturer has no right to come take my car away. Even if it were written in a contract.
saghm: With the way you've phrased it the government could nuke the entire world; all of the adversaries would be dead along with literally everyone else. I don't really see why it's an issue if a company doesn't want to sell them the tools to do that.
mbix77: But now he's at the table with them anyway? Bullshitters all around. Fully open-source models are the only way.
tdeck: > It's easy to frame this purely as an ethical battle, but there's a massive financial reality here.

As opposed to all those famous ethical battles where there's nothing in it for you to do the wrong thing?
toraway: Based on OP's comment history, 50/50 chance AI wrote that...
ExoticPearTree: Is this a rhetorical question?
jrflowers: What?
gottorf: > that's probably more important on reddit.

I don't know if you've noticed, but HN has been full of Reddit-tier comments, most especially around hot-button political topics, for a while now.
felipeerias: It’s a bit more complex than that, but to be fair I don’t know what they were expecting after they integrated a purpose-built model with Palantir to be deployed in high-security networks to carry out classified tasks.
ExoticPearTree: I don’t believe for a second the Pentagon sees Americans as adversaries.
ExoticPearTree: I do not see Iran winning this. The current government is also hated by the people, who would very much like to see all of them dead.

Al Jazeera has some very good insights into this, and the gist of it is: the Iranian regime is in a fight for its life with nothing to lose. If they are degraded enough, a revolution will start in Iran and they will be killed by the people. Or by US/IL bombs - whichever comes first. There is no way they get out of this alive. They are trying to prolong the inevitable.
don_esteban: Did Democrats offer primaries in the last elections?

Did voting for Bernie Sanders in the last two primaries (especially the one when Trump won for the first time) amount to anything?

I wonder how long the American public can keep up the self-delusion that the elections are anything but a theater for the naive, to keep the pretense that the public has any say in things that matter.

How much has the current administration asked the public about going to war with Iran?
nickysielicki: > researchers are more valuable than any military spend or any datacenter. It does not matter how many hundreds of billions you have - if the 500-1000 top researchers don't want to work for you, you're fucked; and if they do, you will win because these are the people that come up with the step-change improvements in capability.

This is a massive cope imo. The reason the AI industry is so incestuous is just that there are only a handful of frontier labs with the compute/capital to run large training clusters.

Most of the improvements we’ve seen in the past 3 years are due to significantly better hardware and software, just boring and straightforward engineering work, not brilliant model architecture improvements. We are running transformers from 2017. The brilliant researchers at the frontier labs have not produced a successor architecture in nearly a decade of trying. That’s not what winning on research looks like.

Have there been some step-change improvements? Sure. But by far the biggest improvement can be attributed to training bigger models on more badass hardware, and hardware availability to serve it cheaply. To act like the DoD isn’t going to be able to stand up pytorch or vllm and get a decent result is hilarious: the reason you use slurm and MPI and openshmem is because national labs and DoD were using them first. NCCL is just GPU-accelerated, scope-reduced MPI; nvshmem is just GPU-accelerated, scope-reduced openshmem.

If anything, DoD doesn’t have the inference throughput requirements that the unicorns have and might just be able to immediately outperform them by training a massive dense model without optimizing for time to first token or throughput.

The government invented HPC; it’s their world and you’re just playing in it.
Tanjreeve: AI doesn't add anything to the ability to do mass surveillance. That genie was already out of the bottle with cloud and big data systems. At best AI might take on some of the gruntwork of drawing conclusions from profiles, but it's doing its usual thing of being a powerful interface built on top of other systems.
wilg: https://en.wikipedia.org/wiki/2020_Democratic_Party_presiden...

https://en.wikipedia.org/wiki/2024_Democratic_Party_presiden...

Skill issue. Run your candidate. Convince people to vote for them.

> How much has the current administration asked the public about going to war with Iran?

THE ELECTIONS are how the public weighs in.
don_esteban: OK, slowly:

The wars are already total for the weaker sides. See Ukraine/Iran. That did not stop the stronger side from attacking.

You are advocating for no constraints (total war) on the stronger side. Taken literally, that means genocide of the losers. Really, that's what you want?

But yes, you are right, the world would be much simpler in that case - there would be no humans left. OK, maybe some hunter-gatherers.
thaumasiotes: > You are advocating for no constraints (total war) on the stronger side. Taken literally, that means genocide of the losers. Really, that's what you want?

Taken literally, it means genocide of the losers is an option the winning side has. It always has been.

Note that Genghis Khan's explicit plan when he conquered China was to wipe out the Chinese to make room for Mongols. He wasn't stopped from doing that; there was no constraint to block him.

But he was persuaded not to.
vanillameow: I would agree if not for the fact that they just let a $200M contract slip through over it. You could argue it's "safety theater" in itself but that seems like a risky gambit especially with this administration. I definitely trust Anthropic more than OpenAI. In fact I'd go as far as to say it's probably pretty imperative that Anthropic stays a frontrunner in this race and doesn't leave the field exclusively to OAI (and maybe Google which is just as bad). That doesn't mean I'm exactly happy with Anthropic's comments like "mass surveillance bad but only for the US". But Anthropic at least regularly asks questions about the direction of AI development. I haven't seen the other frontier model companies do any such thing.
don_esteban: Regarding Iran's future:

You are describing the Libya scenario, not a 'lived prosperously ever after'. There is no credible opposition in Iran to take up the mantle.
guitheengineer: That is assuming there will be elections, which many people don't believe will be the case.

Reminder that Trump has been flirting with just continuing in power (2028 hats and talk about a third term) and is responsible for attempting a coup last time he lost.

Personally I think there's a possibility he'll just declare martial law and stay in power at the end of his term.
Tanjreeve: This is the same mistake made in Iraq and Syria by media policy pundits. Dictatorial regimes collapse pretty quickly without a significant enough base of support to stop a revolution happening. They might not have a majority of people supporting them, but it isn't a democracy. Dictatorial regimes will always have one or more of the military, business, or sub-groups of citizens in their pockets as clients.

Whenever we say "the regime is hated by its people, it will collapse", we should ask "then why didn't it collapse already?". In Iran, metropolitan areas are where you see opposition. That's also where people have cameras and where media orgs tend to be. We get a warped depiction of opposition in Iran even without our own media's baggage. Meanwhile the power base of Iran is everywhere but the metropolitan cities, and there are a lot of clients who benefit from the regime. I think this might be worse than the sectarian violence that came out of the Hussein regime's collapse, because the Sunni sect his base was built around was still a minority. This time it's the majority, and the people being fought against are the Americans, the Israelis and the Arabs, so their backs are against the wall; this is a total war already from their side.
don_esteban: Re: Skill issue

Money issue. This is not a level playing field; the field is severely tilted. The referee is bought.

But you are saying: you lost fair and square, wait 4 years to have any say in what is going on.

Re: THE ELECTIONS are how the public weighs in.

When the choice is between Tweedledee and Tweedledum, the public's choice is meaningless.

To say nothing of politicians outright shamelessly lying (e.g. Trump campaigning on 'no more wars').
wasabi991011: > Did voting for Bernie Sanders in the last two primaries (especially the ones when Trump won for the first time) amount to anything?

He didn't win the primaries though. It would have amounted to something if he had gotten enough votes.
vintagedave: I canceled my ChatGPT subscription today, and sent support@openai.com a polite email saying why.

I encourage you to do the same.

Claude Desktop is better anyway -- and, as we have seen, Anthropic is a more ethical company.
raincole: Voting changes the name of the department. It doesn't change if the government wants mass surveillance.

See PRISM.
juleiie: The man had his flaws, well, who didn't do stupid things when young. Throw a stone at me.

One time I almost killed some random person when I was like 19 years old. Police even came, but the police are frankly really stupid at the local level.
DaedalusII: yeah man, what a mediocre loser: all he did was create the first cloud-based ecommerce platform, sell it to Yahoo for $49m in the 90s, then co-found the most successful early stage VC firm of all time, which made THIS forum which you are using to attack him.

lol
don_esteban: 1) He did not win the primaries, in significant part also because the DNC was heavily against him. The level-playing-field thing. 2) If he had won the primaries, there is still no guarantee that it would have amounted to anything. First, he might not have won the election (mainstream media and the whole ruling elite were heavily against him). And even if he had won, he might not have been able to do much against the permanent state. I still think the main cause of Trump's wins is the deep disillusionment of democratic voters with Obama's failure (inability/unwillingness) to effect meaningful change.
delaminator: Here's a simple unsubscribe guide: https://usa.gov/renounce-lose-citizenship
juleiie: There are no 'good' companies, but I like Anthropic a little bit more than the others as of this moment right now. I would buy their stock and sell OpenAI, maybe, if they were public. Maybe I'd buy it instead of MSFT and AMZN.
kakacik: Is your original question rhetorical? Because it ain't very... smart
vkou: No.... But the government flooding cities with thousands of masked thugs with a license to do whatever they want... has so far been an entirely Republican thing.There are more colours to the world than pure black and pure white. There are also a million shades of grey in between, and most of us have the ability to distinguish between them.
generic92034: > If you take a moral position do it when you have real power.If the condition for getting real power is having no morals, this is hard to accomplish.
Havoc: OpenAI never had a strong brand around truthfulness
lukan: "Too bad we don't have any more whistleblowers like Snowden"

Probably because most don't want to end up in Russia?
navaed01: To play devil's advocate - why should a government supplier and private company (Anthropic), and one man there, get to decide what an organization with elected officials can and can't do? Dario has no idea of the threats facing the US and where national security needs to go. Dario has personal views on weapons and surveillance - that's fine, but national defense tactics are by their nature something many people are uncomfortable with.
gck1: Claude is Anthropic's property which they rent to the government. Is there any other place where rental agreements don't come with clauses on how the property can and can't be used?
hyttioaoa: He doesn't get to decide that. But he can decide what he wants to do, just as you can decide what to do with your time and resources. Also, that very much sounds like "the government knows best and citizens should just trust it unconditionally."
mentalgear: Reminder that European LLM companies like Mistral's "LeChat" are also now really good!
taurath: What does $200m mean for someone who thinks a trillion in revenue is likely among AI companies in the next 5 years? Which is a real quote.
digitalPhonix: You have a car (I assume; replace "car" with whatever else if not). The government asks if they can rent your car. I hope we agree that you don't have to say yes. (Specific exceptions exist for places of lodging, etc.) Anthropic is exercising their right to say no in the same way.
pton_xd: And now you've got people on here saying, well actually, Palantir ain't so bad, you see! It's difficult to keep up with.
delaminator: Yeah, here are some examples of all these fascists doing exactly that:

Soviet Union - The show trials of the 1930s were conducted with full legal apparatus: confessions, judges, verdicts. Stalin's purges operated through legally constituted troikas. Entirely "lawful" by Soviet law.

East Germany (DDR) - The Stasi's surveillance and harassment programmes were codified in law. When the wall fell, many Stasi officers genuinely argued their conduct was legal under GDR statute: a defence that West German courts largely rejected.

Castro's Cuba - Mass executions after the revolution were conducted by legally constituted revolutionary tribunals. Castro explicitly defended this on legality grounds when challenged by foreign press in 1959.

Chavez/Maduro's Venezuela - Suppression of opposition media and jailing of political opponents were consistently defended as operating within Venezuelan law, which was progressively rewritten to make it so. Classic self-referential legality.

Mao's Cultural Revolution - The revolutionary committees had legal standing. Persecution of intellectuals and landlords proceeded through formal (if kangaroo) legal processes.
oscaracso: You should ask the language model that output this text the definition of 'whataboutism,' and if the comment you've posted responds meaningfully to the discussion at hand.
xvector: There is also: 3. Talent migration to Anthropic. No serious researcher working towards AGI will want it to be in the hands of OpenAI anymore. They are all asking themselves "do I trust Sam or Dario more with AGI/ASI?" and are finding the former lacking. It is already telling that Anthropic's models outperform OAI's with half the headcount and a fraction of the funding.
kelnos: I think that's wishful thinking. Just because someone is a "serious" researcher (careful, sounds like a No True Scotsman coming up), it doesn't mean that they care about AI guardrails or safety, or think our current administration is immoral.
mellosouls: Full text claimed on Reddit here:https://www.reddit.com/r/Anthropic/comments/1rl1ula/dario_tr...
delaminator: you should ask the GP about his use of the word "fascist" for everything he doesn't like.

> if the comment you've posted responds meaningfully to the discussion at hand.

https://mirror.org/
Imustaskforhelp: OAI employees, please talk. ~93 employees signed the notdivided.org petition. Some OAI employees could be reading this comment right now. Let's be real, OpenAI backstabbed Anthropic. Even Dario has essentially just said it now. (Shameless plug?) but I created an Ask HN about it: "Ask HN: What will OpenAI employees do now who have signed notdividedorg petition" [0], and not a single person from OAI responded when I just wanted to discuss :/ and hey, that's okay, I don't mind, but please don't mind me when I re-raise this topic. From a comment by tedsanders (OAI employee) in the hackernews thread about OAI [please don't harass anybody]:

> I'm an OpenAI employee and I'll go out on a limb with a public comment. I agree AI shouldn't be used for mass surveillance or autonomous weapons. I also think Anthropic has been treated terribly and has acted admirably. My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons, and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples). Given this understanding, I don't see why I should quit. If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit, but so far I haven't seen any evidence that's the case.

Ted, if you are reading this, I truly felt like you were right. I was still skeptical, because part of me felt like it didn't make sense, and well, it didn't. But I had trusted you, and I thought that you had far greater insights than us, but now I am not sure... Sir, I have no ill will towards you, but I just want to know: you have gone silent after this comment and one other about GPT 5.3 instant, as far as I can see. You did say in the first that you would go out on a limb with a public comment, so please don't mind me if I ask questions in public about that comment. The question is: what now? If this is what an OAI employee is saying, weren't they deceived too? Weren't they humiliated in public by being proven wrong, losing their accountability/trust within a community? The comments just turn to "well, money speaks." I agree, but does money speak so loudly that you cannot hear your peers and your own community?

I still believe, on the fringe, that OAI employees have some say in all of this. 98 employees (the number who signed notdivided.org) leaving has a thousand-fold more impact than 98 people not using OAI. You have power, and with it comes responsibility. I just want a discussion with OpenAI employees in general, especially those who signed NotDivided.org or who are part of this hackernews community, like Ted. What do YOU guys make of this whole situation? A lot of this situation, if historians ever write about it, would feel closer to "I was just following orders" than not. No, sadly, this is not hyperbole now, because what we are talking about is the creation of autonomous killing machines which can kill anyone without any human in the loop. People from the future are also gonna ask us, the general public, why we didn't hold the people working on this accountable, in a similar fashion to the past.

Once again, I mean to bring no hate towards anyone. Make peace, not war. I just want to think that the world will be a better place for my future children and generation, and I would like to hope that this comment can be meaningful towards that. Have a nice day, as much as one can in a situation like this. A lot of the things I say or do are the same things I asked of the people of the past when reading history in my classes: why didn't you do X or Y, why didn't the public say anything, why was it silent? But we are gonna be history too, and someone is gonna ask us why we were silent, and I just want the answer to be "I tried" rather than "I don't know." I sort of wanted to learn something from history. Sincerely, we (the public) want a discussion with OpenAI employees about this. Please don't be silent, as silence will be interpreted by future generations as agreement. Please speak. Tell us what you all are doing. A lot of the time it feels like I am shouting into the void on these matters, as these messages just straight up don't reach the right people, and that feeling sucks, because at some point I am gonna get tired of shouting into the void too. If anyone has contacts with OAI employees, please ask them such questions and share the responses if possible. I just want some answers, that's all.

[0]: Ask HN: What will OpenAI employees do now who have signed notdividedorg petition: https://news.ycombinator.com/item?id=47231498
jahnu: Billions of dollars is a hell of a drug.
qwertox: @sama did say: "[..] will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control". Law is what Trump decides.
abustamam: Unless we move out of the country though, we are still technically subscribed to the DoW (still need to pay taxes etc)
abustamam: I think, similar to how AI-generated comments are frowned upon, "this comment was generated by AI" comments should also be frowned upon. It's really annoying to see a well-written comment and replies that don't address the comment but just accuse the poster of having used AI to generate it.
utopiah: > if you like that talent to work for you at some point.

They might not, if they think everybody who stayed after Sam Altman was reinstated might be excellent technically yet not have the culture they want, which seems to be the case given all the recent communication.
andy_ppp: "not consistently candid in his communications"

I've now moved to Claude and it's much better, actually. If, like me, you hate their fonts (Anthropic Sans), select System fonts in the Claude preferences, and you can use this snippet in Safari's Settings -> Advanced -> Stylesheet to make everything your default system font: [data-theme=claude] * { font-family: system-ui, sans-serif !important; }
sebastiennight: > you should ask the GP about his use of the word fascist on everything he doesn't like.

If mirror dot org actually existed, you might want to look into it, because your long list of examples has one related to 1930s Germany, and the rest have nothing to do with the political definition of "fascism"?
gadders: Especially if you ask Altman's sister (allegedly)
DaedalusII: well yeah, but should Smith & Wesson just be like "man, the US gov doesn't align with our corporate values, so we are just gonna start selling belt-fed machine guns to civilians"? Absurd, yes, but same principle. Companies have to be subject to government, especially in technologies that enable or manage violence, because the role of the government is to collectively manage and allocate violence in the manner the people desire.
foltik: > companies have to be subject to government especially in technologies that enable or manage violence. this is because the role of the government is to collectively manage and allocate violence in the manner the people desire

I don't know what you're describing, but it's not how the US works. Companies aren't extensions of the state; they're private actors that have to follow the law. If Congress wants something prohibited, it passes a law. Otherwise firms are free to choose who they do business with.
DaedalusII: agencies create many regulations which affect companies, with no input from Congress. The ATF in 2017 banned 'bump stocks' by reclassifying them as machine guns, with zero input from Congress. Companies and the people who work for them are subject to the state via laws and regulations. If they violate the law, the state will use violence to enforce it, through a government entity called law enforcement and its officers. If new technologies are invented, like the internet, missiles, nuclear power, and so on, which represent an ability to manage and allocate violence, or which remove the state's ability to control violence, the government needs to reassert its monopoly on that violence and take control of it. Without this monopoly, how will it collect taxes and enforce the law? Without the monopoly on violence, the government is little more than an idea.
foltik: You’re kind of losing the plot here. The original topic was whether Anthropic can set conditions on a contract with the government, which they obviously can. If Anthropic says “we won’t sell this unless you agree to X safeguards,” the DoD can either accept the terms or buy from someone else.
kotaKat: Sam, a manipulative liar? Say it isn't so! Not the first time, not the last time. Add it to the list of shit he's done that should put him in a little cell for the rest of his life.
sinity: Twitter morons wasn't referring to OpenAI employees, I think.
Imustaskforhelp: I feel so sad about Snowden sometimes. I tried reading the first few pages of his book, about how when he was growing up, he could be anyone on a forum, and there was this sense of anonymity and, at the same time, just freedom. And later on, when he saw just how far the government's overreach went, he did what others couldn't. It wasn't as if there weren't other contractors like Snowden, but there were no other whistleblowers like Snowden. And where did that leave him? In a country far away from his motherland, worried about his safety, being called god knows what by the country back home, while most people don't even care. Snowden didn't do it for the money; he did it for what he felt was right, and that's so rare. It's so sad that when I searched for Snowden on YouTube, the first thing I found was an ex-CIA agent claiming Snowden wasn't innocent and had to befriend Russia, when that was only because the US would have literally killed him and made an example out of him for blowing the whistle on such large-scale mass surveillance. "What kind of asshole reveals the fact we're the assholes, then doesn't let us kill him!" is one heck of a comment I found. Also: we will charge the whistleblower with death, but we will not take any action against the act which was whistleblown in the first place (:
creddit: Enough people care to flip the ranking of the App Store downloads.
epsteingpt: Sour Grapes
mikkupikku: Nothing deep going on there. Fascism, in modern informal parlance, is a synonym for authoritarianism. Those who object most loudly to Stalin being called a fascist are usually themselves actual fascists, or Stalinists. Everybody else gets it.
cousin_it: Did you use AI to extract the text? It rephrased the text along the way, I'm too lazy to point out all the differences, but if you for example search for the word "suspicious" (which is in the image but not in the extracted text) you should start to get suspicious yourself.
andy_ppp: How much better is an LLM at mass surveillance? Obviously RAG with everyone's details in it is useful, but it's also likely prone to hallucinations. I'm not sure LLMs are the right kind of AI even for finding patterns in such data. As for letting LLMs autonomously kill people, they clearly won't be ready for that any time soon. Does the administration really believe these AIs are like digital humans?
thegreatpeter: Didn't they choose Anthropic first, and then all of this happened, so they were forced to go with Grok? Not adding up.
JumpCrisscross: > How much has the current administration asked the public about going to war with IranHere is the 2026 Senate map [1]. Do you suggest any of them will flip over Iran? (I don’t.)[1] https://en.wikipedia.org/wiki/2026_United_States_Senate_elec...
Departed7405: I agree. What people forget is that Snowden didn't intend to end up in Russia. He wanted to go from Hong Kong (where he thought he would be safe, but realised extradition was still an option) to Ecuador. But he feared the US would intercept his plane if he flew over US or US-allied airspace. So his plan was to go from HK to Russia, then to Cuba, and finally Ecuador. Russia stopped him because the US had cancelled his passport.
throwaway911282: you must be delirious to think Anthropic is an ethical company! They were the first to partner with the govt.
wsng: It's different with services. If you sign a mobile phone contract and use it for spamming, the supplier can cancel your contract.
pkaral: There are various views on Dario's communication style, etc. But in this, he is the guy we have been waiting for. The guy who is willing to say: "We built this company for America, and we're going to do what we believe is right for America, regardless of short-term expediency." Also called having a spine. He has my respect for that.
pbiggar: One issue is that he is indeed saying this is for America. But, like, there are many people in the world (most, even) who are not American. If companies are willing to throw non-American people under the bus, that's neither ethical nor good business practice.
le-mark: There are certainly black budget dollars at stake as well, which are much more lucrative.
Symmetry: Is this the first time internal communications like this have leaked from Anthropic? It'll be unfortunate if Anthropic can't have honest conversations internally going forward for fear of leaks.
ekjhgkejhgk: > Who in their right mind would give the benefit of the doubt?I'm saying that we should give Anthropic the benefit of the doubt that when they say "our deal with Palantir doesn't cross our red line", we should believe Anthropic, that they have gotten an assurance from Palantir that they wouldn't use it domestically. I'm NOT saying we should give Palantir the benefit of the doubt.I wasn't commenting on "is giving AI to Palantir a good idea" (I don't think it is), I was commenting on "should we conclude that Anthropic is being dishonest because they claimed they have red lines but work with Palantir" (I think it's unclear, but there's a plausible explanation in which they're not being dishonest, but possibly naive, so give them the benefit of the doubt).
ethbr1: You're confusing physical goods transactions with subscription access to a service.One of the many reasons every company has tried to shift their business model to the latter: greater control over users.
SirensOfTitan: Right. But this is about Anthropic -- a company that frames itself as a responsible and ethical steward of LLM technology. They can't pretend that OpenAI is somehow morally bankrupt here while continuing to deal with companies that undermine people's civil liberties. I'm also a little unsure what you're saying here. Are you saying that it's futile to rely on corporate leaders to commit to ethical acts, as there's always someone else who will debase themselves to make money? I think that solely relying on the state to regulate itself with respect to civil liberties is a fast path to despotism. The well-regulated state was always a partnership between ordinary people bravely standing up for their rights and the norms of the rules and laws that made it socially acceptable to do so. If I'm grasping you correctly, I think you're right; however, this points to the rottenness of our culture's way of organizing labor: the optimization for the shareholder over everyone else leads to some really awful effects.
xvector: I wish people like you would actually talk to people at Anthropic, maybe interview with the company, actually engage with the real humans there before making blithe comments like this.Seriously, you're on HN, you can't possibly be that many degrees removed from someone at the company.In any case it's absolutely not "just marketing", it suffuses their whole culture, and it is genuine.
freejazz: Which flavor is the kool-aid? I wish people like you were less credulous.
squidbeak: I don't - idealistic motives seem to be common among leading AI developers and researchers. It's totally realistic that Anthropic sticking to principle and taking a hit for it will give it an edge in recruiting those idealistic types.
ekjhgkejhgk: > but my main worry is how to make sure it doesn’t work on OpenAI employees.It's difficult to get someone to understand something when their paycheck depends on their not understanding.
vasco: Alignment is with the user of the LLM not to some fuzzy interpretation of human rights. So solving alignment for the DoW is just "don't refuse to bomb people when I ask you".
delaminator: "We use different definitions now, fascism is things I don't like."
_heimdall: That's absolutely not the definition people use for alignment. Safety discussions often circle around alignment because people are worried about AI doing things that are bad for humanity as a whole, not because it goes off track from any one user's goals. It would be terrible for safety if alignment meant I could ask to hack the TSA and the LLM would do it. Ignoring the definition, what would be required for individual alignment is exactly the same as for collective alignment. The only difference is the goals and who writes them; for the LLM, it means being somehow forced to follow those rules no matter what.
davidw: This is my Senator:https://www.wyden.senate.gov/issues/domestic-surveillance-re...He may not be perfect on everything, but elect more people like him and it starts moving the needle. Or elect some more that are even more opposed to some of these things. It doesn't happen overnight. Change is difficult.
causal: Yeah he has some great essays but also some that I find really dumb. Reading “Founder Mode” is when I realized he’s just as susceptible to fallacy as the rest of us.
hollosi: Enforcement is the real issue, not the specific red lines, regardless of what Anthropic claims and news outlets repeat. Verification requires access to classified logs. These logs would attract the spies of the whole world. Even if these logs are in principle of "past actions", in practice past logs (of war games, for example) would compromise future strategy. Since these manual audits are too risky, the only alternative is to hard-code limits into the AI. But are we ready to trust an AI to "judge" a mission and refuse to execute it during a crisis? Anthropic wanted technical enforcement; the Pentagon wanted trust. It's a choice between two bad options: an unaccountable military and an unreliable AI kill switch. Both are very dangerous, just in different ways.
ipunchghosts: > but there's a massive financial reality here.

Not a chance. The DoD has massive pockets which are INCREDIBLY SPREAD OUT. You can't overestimate how spread out this money is. The DoD has maybe a 64-GPU cluster, and ALMOST NO ONE USES IT FOR DEEP MODEL TRAINING. Even contractors end up using DGX boxes for all their training. As of 2023, I was doing the largest deep learning training runs of anyone I have known in the industry, and I've been in the industry for 20 years. The second-best groups behind mine were using local 4-GPU machines that they had to purchase on contract. There's no way the DoD can train these models themselves, not even close. They are COMPLETELY DEPENDENT ON INDUSTRY. I was the PM for a DARPA program in 2023: SAME PROBLEM. They had no compute, or would rely on university compute if a program had a university partner. YOU HAVE NO IDEA HOW FAR BEHIND THE DOD IS IN THIS SPACE.
dirasieb: assuming the worst from the government IS how you set up good legal frameworks to limit the government's power. Have you ever heard of the First and Second Amendments?
msabalau: It's a bit simplistic to personify complex organizations of millions of people like "The Government" or "The Market" as if they were living, breathing persons with a single mind. There were people working in government who successfully attacked Oppenheimer for personal and/or policy reasons, people who stood by, and people who unsuccessfully supported him, voted to clear him, or condemned the proceedings. Oppenheimer still paid the price, and arguably, the risks to someone like him today are considerably higher, as the current administration isn't exactly like Eisenhower's. Nevertheless, it's reductionist, reifying sentimentality to talk about "the government" turning "viciously" on someone who "served them well" because they are defying its agenda. The government isn't a character in Game of Thrones. The responsibility lies with the specific individuals who attacked him, and those who stood by.
mmooss: > Change is difficult.

I agree, though notice that the GOP/MAGA have made, and continue to make, enormous changes. The difference is that they believe they can do it while others sit around talking about hopelessness and powerlessness. The only difference is belief.
solenoid0937: Sure, the architecture is from 2017. But the gap between GPT-1 and frontier models today is not simply "more FLOPs", nor as simple as "standing up PyTorch and vLLM" - there are thousands of undocumented decisions about data, alignment, reward modeling, training stability, and inference-time strategies, and lots of tribal knowledge held by a small group of people who overwhelmingly do not want to work on weapons systems. The dense-model argument is self-defeating long term. Sparsity (MoE etc.) lets you build a smarter model at the same compute budget, so going dense because you can afford to waste FLOPs is how you fall behind, because you never came up with the step-function improvements needed. Sure, the DoD invented HPC, but it also invented the internet, and then the private sector made it actually useful.
veidr: Nobody gives a shit about jumping to #1 in the app stores, at this scale.If USA really goes full-Huawei on Anthropic, they can't IPO. It's an existential crisis for them. I think they can survive in some form, somehow, because their model is really good, probably the best.And in other times, I would think the US government had sufficient intellectual horsepower to not cut off its own dick, and the golden goose's head, over some idiotic morning-drinker road-rage type beef. But these are not other times. These are these times.
filoeleven: It's not the department of war. Don't call it that to appease the toddler in chief.> However, only an act of Congress can legally and formally change the department's name and secretary's title, so "Department of Defense" and "secretary of defense" remain legally official.https://en.wikipedia.org/wiki/United_States_Department_of_De...
xvector: Just have an actual, good faith conversation with a real human working there instead of fighting/making assumptions about a strawman in your head.
veidr: crazy take. Like saying kids having internet-connected devices with built-in cameras doesn't increase the probability of sexting because they could do the same with film cameras and a fax machine.
Tanjreeve: AI doesn't increase the amount of data captured or the processing throughput; that's the difference with your cameras metaphor. As said, at best it can sometimes summarise things better.
mrguyorama: Naw HN has been like this for a decade at minimum. None of the temporarily embarrassed billionaires here needed a bot to simp for rich people.The entire point of the forum is to talk about rich "idea people" and the businesses they start to get richer.
vanillameow: Regardless, I think if you are thinking purely from a ruthless business standpoint, then standing up to the DoD was an incredibly ill-advised move. It's basically free financial and technological backing at the cost of ethics. Additionally, basically everyone with functioning eyeballs knows that the current US administration is incredibly vindictive, reckless, and short-tempered. I would agree that under a tamer administration you might do something like this as a publicity stunt. Under the Trump administration, and while the AI arms race is still in full force, it feels like there has to be at least somewhat genuine sentiment behind it; otherwise it just doesn't really make sense. Like, what do they accomplish from this? You'll get some users who will view you more favourably for it, but it probably won't make up for the lost revenue, and no matter how many people like you, if you are first to AGI in this industry, you win. The prior sentiment basically won't matter at that point. In the most critical interpretation, I guess you could say that if the bubble pops it might be more a matter of sentiment. I don't know; in my mind the math just doesn't work for it to be a business move.
freejazz: > Regardless, I think if you are thinking purely from a ruthless business standpoint then standing up to the DoD was an incredibly ill-advised move.

It wasn't; there's been non-stop talk here for days about how Anthropic is a step above, better than the rest, the "only good AI" company. Enough already. It is a marketing tactic they are taking in opposition to OpenAI.
ethbr1: > 3. We just bombed Iran and killed their supreme leader.Being accurate, by all reporting Israel killed Iran's leadership.Yes, likely enabled by US intelligence, but the one who pulls the trigger does matter.
ImPostingOnHN: "We" here clearly means USA+israel. There isn't a distinction between the two when they're working towards the same goals, bombing everything in sight, together.The one who pulled the trigger is irrelevant here, because both have pulled the trigger hundreds or thousands of times in the past few days, dividing up targets between them for the joint operation.
ethbr1: Given that direct assassination is still prohibited by EO 11905 / 12036 / 12333, it's a major issue whether the US president ordered the strike or not. I'm aware that internet forums like to play fast and loose with insinuations, but facts are facts.
ImPostingOnHN: > Given that direct assassination is still prohibited by EO 11905 / 12036 / 12333It sounds like you think this means something?Obviously it doesn't when we're talking about an administration that openly breaks laws, much less EOs, and issues whatever EOs they want saying whatever they want, even in violation of previous EOs. There aren't even any repercussions to the president "violating an EO".So no, the pedantry here is irrelevant. The two parties are on the same team, working towards the same goal, doing the same things, divvying up the list of targets to strike.
snowwrestler: A relationship with whom? The people running the “Dept of War” will not be running it for long. Defense Secretaries rarely last for an entire 4-year administration, and this president is term-limited.Whoever Anthropic pisses off today will be gone long before the great global games of AI and violence are won or lost. And conversely, the same is true of the people that OpenAI are sucking up to.What this looks like to me is two competing strategies for government relations. OpenAI is tactically making nice with who is in power now, and undoubtedly believes they will be able to continue making nice with whoever comes in next. I’m old enough to remember PG saying Sam Altman had done more than anyone to help Hillary Clinton get elected. SamA seems to be a make-nice guy, in general. This strategy has worked well for many companies BTW.Whereas it looks like Anthropic is taking a stand and hoping that it will pay off when the political winds change. And I don’t mean political party specifically, but the national mood across party lines. Coming into this administration there was thirst for rule breaking, institution breaking, and weariness of limits and “supposed-tos.” Now people are seeing the effects of weak institutions and unlimited arbitrary executive power and it’s not polling well.This is not new territory for the country, following 9/11 there was great tolerance for “anything to beat the terrorists” which gave us the Patriot Act and the Iraq War. Both are widely repudiated across party lines now, though. What will be widely repudiated 5 years from now? 10 years from now?
vasco: That's safety, not alignment. Alignment is necessarily alignment with the user.
1718627440: I would say AI is very much increasing the processing throughput of labeling surveillance data.
freakynit: Submitted "boomcession" to urban dictionary (in review)... sorry, couldn't credit you as there was no place to do so.
aprilthird2021: > why should a government supplier and private company (Anthropic) and 1 man there - get to decide what an organization with elected officials can and can't do?

This is a misunderstanding of what is happening. They have terms and conditions on their private property that anyone can choose to accept or decline. The DoD wants them to turn around and say that these terms, attached to a private company's license for its own property, are so egregious that the government and all government contract holders should be forced out of using any products by that company.
DaedalusII: My point is that the government/DoD/DoW/state can set whatever conditions it wants and do what it wants. The DoD can literally invoke the DPA with Trump and force Anthropic to sell them the product.

"On October 30, 2023, President Biden invoked the Defense Production Act to "require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government" when "developing any foundation model that poses a serious risk to national security, national economic security, or national public health."

https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950
genxy: Everything I don't like is a pretty broad brush. I have only used it for the Trump regime.

https://en.wikipedia.org/wiki/Ur-Fascism

https://www.rollingstone.com/politics/politics-news/trump-su...
delaminator: Oh no, it's retarded!
Frieren: Capitalism is dead. Welcome to the age of aristocracy, where image and standing with power are more important than creating any value whatsoever. Lies are rewarded and acting in good faith is punished.

It is a shame that CEOs act like kings and queens and there is no accountability anymore. This concrete example is just part of a bigger trend of lying to the public and getting away with it.
salawat: > Why didn't we hold the people working there accountable, in a similar fashion to the past?

You mean other than Nuremberg, which was focused on prosecuting war criminals? Remember, IBM was never held to account for propping up the Nazi war and genocide machine. The people who made Zyklon B weren't either. Corporate governance is the instrumental convergence of our time.
freakynit: It's live: https://www.urbandictionary.com/define.php?term=Boomcession
hedora: Not my term: https://www.thebignewsletter.com/p/the-boomcession-why-every...

It's nice to indirectly contribute to the Urban Dictionary, though. (Not sure if the link is the original source of the term, but it's earlier than me in the chain.)
freakynit: Hmm... but as you said, it's good that it got added to Urban Dictionary. :)