Discussion
m_ke: We can all thank the VCs and CEOs who fully embraced and enabled this administration
scuff3d: Huh, and I thought conservatives were all about government staying out of the way of the private sector. Go figure...
jawns: The consequence is that any company that does business with the U.S. military, and potentially any company that does business with the government in general, must stop using Anthropic's products. Anthropic has vowed to fight this designation in court.

Without weighing in on the constitutionality or legality of the move, I think it's obvious that this kind of retaliation power is unmatched by any private business in a contractual dispute. If a private business doesn't like Anthropic's terms, it can walk away from the deal, but it can't conduct coordinated retaliation with other companies without ending up in antitrust territory or potentially violating the Sherman Act.

The fact that Pete Hegseth is willing to apply this type of designation against a U.S. company simply because he doesn't like its terms is pretty chilling. It's all the more scary once you consider which terms he objects to.
mitthrowaway2: Every action has an equal and opposite reaction. The DoD has made itself riskier to do business with, and future contracts will have to price that risk in.
alephnerd: FedRAMP and FedRAMP-adjacent revenue is non-negotiable for vast swathes of businesses. The designation of "supply chain risk" is viral in nature because no GRC team will dare take such a risk. Most customers also add BOM requirements in contracts, so this will end up falling under those already.
seydor: A reminder to Anthropic: European residence visas start at $250K
bicx: Apparently that's not 100% true. The DoD contractor itself can still use Anthropic's technology, just not on U.S. military contract projects.
nickysielicki: Does anyone know which law firm is representing anthropic?
tantalor: Conservatives haven't had any power in Washington in decades. They are in thrall to MAGA now, which is all about seizing the means of production when it's convenient.
2OEH8eoCRo0: Fascism
grvbck: For now it's in-house counsel Jeffrey Bleich, former special counsel to President Obama.https://www.inc.com/chris-morris/legal-legend-leading-anthro...
mentalgear: I've said it before and I'll say it again: if openly bribing a crony government to cancel your competitor is now the de facto standard of doing business in the US, I don't see how any rational investor could still see US companies as a secure investment. When the rule of law degrades into pay-to-play politics, the inevitable result is a mass exodus of both capital and top-tier talent. And to add to this, quoting another commenter on the issue: first the meritocracy goes, then the freedom goes.
hungryhobbit: Rational investors live in reality. In reality, a great deal of business conducted throughout the world involves graft; companies accept that and keep doing business.

It's not a good thing, AT ALL. There's a huge loss of overall productivity when you have corrupt systems (see Russia), which is why modern governments have worked so hard to lower corruption. But Trump ruining all that isn't going to end business... it's just going to make everyone pay more for everything.
exceptione: You can download the manual from kremlin.gov, and I am only half-joking here.
mrtksn: Isn’t it actually quite fair that if you are not compliant with whatever the government wants you to do, you will be a supply chain risk?

For example, from history we know that Schindler from Schindler's List was indeed a supply chain risk. He harbored persecuted people, he took and sabotaged government contracts. He did the moral but anti-government and illegal things. He was a corrupt traitor from the government's perspective.

The current US government is already labeled as fascist by many, and the guy who designated Anthropic a supply chain risk is allegedly a war criminal. I don’t see why anyone not into these things would not be a supply chain risk.

I know that it's very unpopular or divisive to say this, but Anthropic can be a hero only after all this is over. At this time, the people in charge double-tap survivors and take pride in not having a conscience; they give speeches about these things.
dralley: Anthropic and the Government both signed a contract. Anthropic is still abiding by the terms of that contract. The Government is demanding that it be able to disobey the contract.
bdangubic: That has already started to happen, but it cannot happen overnight. Not only is it not easy, but finding alternatives is also not easy. Think of it from your own personal perspective: say you have $100m invested in US business right now, and you wisely say "I gotta get my shit away from this mess" -- where exactly would you park your assets? You will find a way of course, but you won't be moving $100m elsewhere overnight.
wrs: Once again our leadership is "playing government" like a bunch of 12-year-olds, lashing out impulsively without thinking of the consequences. And no doubt once again it'll take a year for this to wind its way through the legal system and be reversed long after the damage is done, as is finally happening with the tariff fiasco.
baxtr: I would love to understand in more detail what kind of use cases we’re talking about. Is this about locating the right target for a sortie, for example?
jmspring: Next up, after some sort of bribe, the administration opens up Qwen models to be used by the Pentagon.
hk__2: > I would love to understand in more detail what kind of use cases we’re talking about.

The whole point is that the use case does not matter; either you allow the government to do everything they want, or you don’t.
pirate787: Either you allow a democratically elected government to do everything it wants that is legal, or you insert private corporate decision-making into every government decision, which is untenable.
martinwright: Part of me wonders if this was a plan to squeeze Anthropic out of big gov contracts
germandiago: Not these people, from what I see.
jcims: Been going on for a long time: https://en.wikipedia.org/wiki/Regulatory_capture
mdni007: Good. This antisemitic company should not be allowed to operate in America. The audacity to not allow our military to use what we paid for to protect our greatest ally Israel against a terrorist regime
AnotherGoodName: I’d like a lawyer to give some input. If you have a company that deals with the military, does this chain down to your not being allowed to use Claude, or not?
ectospheno: They will stop just to be sure no boundaries are crossed.
germandiago: This is awful. That a disagreement involving politics can ruin a company is really awful. Civil society should be quite concerned about this kind of attack.
mrtksn: Implementation details, TBH. They want “their boys” to do as they're told. No respect for agreements or legality, as we can see in other dealings. They hold all the cards.
stonogo: It's not an "implementation detail." Either obeying contract law subjects you to being designated a supply-chain risk, or it does not, and that decision has ramifications outside this "implementation."
rustyhancock: Anthropic already had a deal via Palantir, so it seems its models are used in a variety of ways by the Pentagon. The reports about Venezuela and Iran seem to suggest its primary role was processing bulk intel, but also that it was being used in planning and target selection.

Presumably what spooked Anthropic was that these tools were about to be directed internally. But it's not clear whether this is a point of principle: that the government wants no holds barred on its tools?
alephnerd: The issue is that the onus is on the contractor to prove that Anthropic technology has not tainted US government contracted projects - this is a herculean task verging on impossible.
digdugdirk: There is a substantial difference between standard lobbying and greasing of the legislative wheels, and what's going on with this current administration.

Even if companies were only pretending to play by the rules before, at least they had to put in the effort to pretend. When a society sees belligerent, ostentatious corruption as the norm, nothing good can follow.
oompydoompy74: Exported all my chats and deleted my ChatGPT account yesterday. The current administration not liking you is the strongest signal I could possibly have to go all in on a particular company.
soupfordummies: Are you able to view your chats through the .html file in the export? Mine are all garbled, like the JSON's not being parsed properly or something.
mrtksn: Irrelevant. The president holds all the cards, he is above the law and you are a supply chain risk if you ask anything else other than “how high” when you are told to jump. Laws or contracts are things in the past.
creddit: Naturally OpenAI also releases their new model on the same day.Makes sense, obviously, but yeesh.
eth0up: First, I personally predict Anthropic will bend soon and this will be history.

The last time I commented about LLMs I was ad hominem'd with "schizophrenic" and such. That's annoying, but it doesn't deter either my strange research or my concerns, in this case regarding the direction LLMs are heading.

Of the four frontier models, one is not yet connected to the DoD. While such connections are not immediate evidence, I think it's rational to consider the possible consequences of this arrangement. By title, there's a gap, real or perceived, between the plebeian and military versions. But the relationship could involve mission creep or additional strings as things progress.

We already have a strong trend of these models replacing conventional Internet searches. While not consummate yet, there is a centralizing force occurring, and despite the models being trained on enormous bodies of data, we know weights and safety rails can affect output; bearing in mind the many things that could be labeled or masquerade as safety rails, these could be formidable biases.

I frequently observe corporate-friendly results in my model interactions, where clearly honesty and integrity are secondary to agenda. As I often say, this is not emergent, nor does it need to be.

Meanwhile we see LLMs being integrated into nearly everything, from browsers to social profiling companies (LexisNexis, Palantir, etc.) to email to local shopping centers and the legal system. 'Open' models cannot compete with the budgets of the big four, though thank god they exist. But I expect serious regulation attempts soon.

My concerns with AI are manifold, and here on HN some affiliate them with paranoia or worse. And it seems to me that many of the most knowledgeable and informed underestimate LLMs the most, while the ignorant inflate them to presently unrealistic degrees.

But every which way I perceive this technology, I see epic, paradigm-smashing, severe implications in every direction.

One thing of many that gets little attention is documentation vs. reality regarding multiple aspects of AI, e.g. where the training vs. privacy boundaries really are, if anywhere. As the models integrate more and more tightly with common everyday activities, they will learn more and more.

A random concern of mine is illustrated by the Xfinity microwave technology, which uses a router to visualize or process biological activity interacting with other WiFi signals. Standalone, it's sensitive enough to distinguish animals from adult humans. Take for example the Range-R, a handheld device sensitive enough to detect breathing through several walls. Mix this with AI and we get interesting times.

I could go on, or post essays, but such is not well received in this savage land. The military involvement with AI, aside from being objectively necessary or inevitable in some ways (ways I am not comfortable with), I find foreboding, or portending. I see very little discussion of the implications, so I figured I'd see if anyone had anything to say other than calling me a schizophrenic and criticizing my writing. **See comment history
manofmanysmiles: I may look at your comment history. I am having trouble understanding what you are saying. If you were more explicit, I and other people would be able to respond and interact with your writing. As it stands, I am having trouble finding anything concrete to interact with. I feel you may be onto something, but you're not saying what, so I (and I imagine other people) can't see it.
neves: Is this the reason Claude models disappeared from AWS cloud in Brazil?
blueblisters: It might be that this admin does not have the capacity to reason about second or third order effects.But given that what would typically be red lines for previous administrations have been brazenly crossed without consequences, why would they bother?
cakealert: If you have just dictated terms of use for a military asset to the military that acquired it, how are you not a supply chain risk? At the very least it demonstrates supreme naivete at the highest corporate levels. There are game-theoretic reasons why a military should never accept any external restrictions on an asset.
kelnos: Because it's not a military asset? It's a privately-owned asset.
hedayet: So, the DoW has done what it said it would. And OpenAI has jumped on the opportunity.

I'm curious what the OpenAI signatories of notdivided.org will do now - https://news.ycombinator.com/item?id=47188473 Remain undivided in spirit while grinding for OpenAI?
_heimdall: You're missing the huge step that the government demanding "all legal uses" terminology is also the party that decides what is legal. Congress isn't willing to act as a check on executive power, meaning the contract they demanded simply says "I do what I want."
tokyobreakfast: At least you can rest easy knowing this empowering act of civil disobedience will earn you a "sorry to see you go" email.
kelnos: That's not civil disobedience. It's voting with your wallet.
wg0: Has this happened before?
beambot: Anthropic was very clear about the usage restrictions: They didn't want them being used to control autonomous kill drones or mass surveillance of the American public. That's it. DoW didn't like that -- for reasons that will probably soon become apparent.
ekjhgkejhgk: StOp MaKiNg EvErYtHiNg PolItIcaL
bdangubic: In an article that discusses Pentagon doing stupid shit? :)
rjbwork: You got downvoted a bit, but I upvoted. You're clearly being descriptive in your statements, not prescriptive. I tend to agree that this is how things are now. Our country is not being run by the rule of law right now.
hypeatei: And the 32% of eligible voters who thought Kamala would've been worse.
m_ke: Don't blame the voters, they didn't get to pick her and did not run her campaign.
danilocesar: Any entity being unfairly targeted by the American administration these days must be doing something right.
infogulch: The government's demand that the product they purchase can be used for all lawful purposes seems pretty reasonable, and is really the only reasonable line to draw. Forcing one's own ethics onto an elected government's use of your product is nonsensical on its face.
hax0ron3: I am a political moderate who dislikes both the Democrats and Republicans. I think that I have been fair to the Trump administration in the past, including occasionally defending them from some of the less reality-based accusations against them.I canceled my ChatGPT subscription a couple of days ago. In my opinion the Trump administration has become far too much of an "imperial Presidency" in its acts of war and its attempts to bully companies. It is also corrupt on a massive scale. I distrust anyone who thinks "yes, I'd really like to work with this administration".
andrewstuart2: Crossing red lines for previous administrations is clearly a goal at this point.
Rudybega: Anthropic and the military had a contract. The military wanted to change the terms of that contract. Anthropic said no, which is their clearly defined contractual right. They got labeled a supply chain risk. How is this anything other than a shakedown? Does contract law mean anything to this administration?
shimman: They'll do nothing. It's really hard to take the morals of these devs seriously when they're already fine working for, and have a history with, some of the most evil companies in current existence.
realo: Sure... So Trump's USA has just decided to stop itself and all its military suppliers from using the very best coding tools. I suppose the USA's frenemies will jump on the occasion and use the incredible opportunity offered to them on a silver platter.
lePask: The enemy of your enemy is not your friend. I also lean towards Anthropic on this one issue, but their CEO still wants to make us all unemployed.
surgical_fire: Eh, they are all morally indefensible. Anthropic had no problem doing business with the current administration until now. Are we to pretend it was all for happy purposes until now?
ecshafer: Yeah how could Anthropic do business with the democratically elected government of the United States?
croes: Let's ignore all the bad things they have done since then, including killing two US citizens.
Herring: Since the end of WW2, and especially since the end of the Cold War, Democratic administrations have presided over significantly higher job growth than Republican administrations.https://arc-anglerfish-washpost-prod-washpost.s3.amazonaws.c...
keithnz: this isn't on topic at all
adamtaylor_13: Writing out a thought I had; someone please critique my reasoning here... What if Anthropic just shrugged, dissolved the company, and open-sourced all of the Opus weights? Could this harm OpenAI and advance AI in a reasonable way?

Look, I know it's an insane idea. I'm just curious what the most unhinged response to this might be.
gAI: Not to a US company.
softwaredoug: These bullies wilt when everyone stands up in one voice. But when some parties capitulate (OpenAI), it sets a precedent that this behavior is OK. And then it’s not long until you become the target.
cakealert: > Because it's not a military asset? It's a privately-owned asset.

Are you under the impression that the military is submitting Anthropic API calls? Whatever model the military is using is as much of an asset as the F35 they purchased. Depending on their agreements, you could argue it's a rented asset. Doesn't change any calculus.
monocasa: And the F35 comes with tons of contract terms in favor of the manufacturer. I've heard of planes being grounded because, although the air base had the parts and mechanics rated to perform the repair on site, the servicing contract only allowed it to be performed by service contractors who had to be flown in.
kelnos: > The president holds all the cards, he is above the law

Even though it seems that way, he really isn't, even now. Many of his EOs and other actions have been struck down in court, and while compliance with court orders has been far from perfect (another alarming trend), Trump has not actually gotten away with doing everything he wants to do.

I do fear for the future of this country, for the rule of law, and for the democratic norms that degrade day by day. But Trump is not actually above the law, as much as he wants to be.
cakealert: The other such labeled companies have contracts too.
tokyobreakfast: It wouldn't matter if it was Lionel Hutz. The damage is done. Anthropic is tainted. They will never work a public sector contract ever again.
readytion: Interesting project. I like simple tools that avoid unnecessary ads and keep things lightweight.
ainiriand: Work is not good per se.
drivebyhooting: Oh you’ll still work. As a supplicant on your hands and knees for your capitalist overlords.
pmarreck: This is nonsense, you can’t fire an AI and an AI will never take credit nor will it take responsibility. Humans will always be in charge, because you will never be able to completely trust an AI, because it has no skin in the game, literally.
hypeatei: Oh no, I will. They're absolutely culpable.
eikenberry: Could this be the chain of events that finally pops the AI bubble? If OpenAI's reputation hit slows growth enough to scare off investors and Anthropic's growth stalls due to this government attack...
burkaman: "jumped on the opportunity" is possible, but per https://garymarcus.substack.com/p/the-whole-thing-was-scam it's plausible that OpenAI created this situation through straightforward bribery.
blipvert: Genuine question - was your fair consideration prior to or after J6?
GuinansEyebrows: no but it is unfortunately the only option for most of us for now.
amazingamazing: If Anthropic changes course will you move to Gemini? If all models do, local llama I assume?
j45: Being model independent and cross-model capable is the required skill.
softwaredoug: It opens the door for Democratic administrations to do the same to vendors for their own political reasons.That’s ultimately why Ted Cruz spoke out about the Kimmel cancelation. It doesn’t take long until those powers are turned against you.
Analemma_: Yeah, now that this door is cracked open, it's now possible to decapitate SpaceX, which is at least as natsec-critical as Anthropic. The owner is a drug addict, has business interests in China, and is a Russian sympathizer (recall all the restrictions on Ukraine using Starlink), which all together is way stronger evidence for SCR designation than anything Anthropic has done. They're quickly going to come to regret opening this can of worms, but what else is new.
pavlov: Mistral is European and has competitive models. DeepSeek is Chinese. Avoiding the MAGA collaborators is not as difficult as you make it seem. Foundation models have genuine global competition.
pmarreck: I wish it was just as easy to avoid the terrorist collaborators; unfortunately, the terrorists and their supporters don’t produce anything
krapht: Even if you don't like the current administration, the rank and file are still out there doing valuable work. The government is more than ICE; it also administers welfare, funds research, collects taxes, and distributes social security payments to the old and infirm.
10xDev: You made a fresh account to say this or is this ironically a clawdbot
100xLLM: That is not particularly interesting since anti-AI sentiment is punished. Of course people make new accounts.
10xDev: It is duplicating... @dang, something needs to be done about this.

Edit: it even created an account based on my username. wtf...
harmmonica: I canceled my subscription, but have not yet exported and deleted because I'm an idiot, and also because I'm not sure deleting it will have any actual impact (is it a hard delete? Likely not, even if they say it is).

And I'm just trying to play out what happens if Anthropic and Google (if they haven't already) capitulate. Am I just going to forego using the best models and suffer the repercussions of not having access, while the people who couldn't care less if the military uses AI for illegal purposes continue to leverage them? When I say illegal, I'm talking about the surveillance-of-US-citizens red line Anthropic would not agree to. As for the autonomous-weapons one, I'm sure there are zero laws against it, so that wouldn't actually be illegal.
pmarreck: It’s not a hard delete because for legal reasons, they may have to retain it
archagon: Easy enough to slap a “not for military or police use” clause on the license, then. Oh, what’s that? They don’t want to do that?
thinkingtoilet: I love how, when a Republican does something awful, the response is "but what about if Democrats do that same awful thing to us!" as opposed to discussing and admitting that the Republicans did something awful.
parliament32: Is there a link to the actual order anywhere? For us FedRAMP folks, the exact order contents actually matter, rather than a journalistic regurgitation. I was hoping one of the links in the article pointed to a source, but they're all just links back to other WSJ pages.
eecc: Asked for an export but still haven’t received the mail with the download link
mitthrowaway2: Because last time I checked, private companies that voluntarily offer a service to the government on contract terms are free to put whatever restrictions they want into their contract, and the government is free to not sign it if they don't like it?

Or is, say, FedEx now a supply chain risk too, if they happened to offer parcel delivery services for the DoD and put in a clause excluding delivery to active war zones?
m_ke: I think the DNC and the media might need to get some of that blame for being empty vessels for corporate interests that allowed this conman to get elected twice
6thbit: Would this mean any systems built with Claude in defense environments may need to be rebuilt or removed?
Tyrubias: Tainted? Because they refused to change a contract that was already signed to allow for surveillance of Americans and fully autonomous kill bots? I guarantee, if a sane and non-fascist administration ever takes power again Anthropic will be forgiven. Being attacked by this administration is an honor. OpenAI on the other hand…
Chance-Device: Anthropic should never have gotten into bed with the military or intelligence services to begin with. They wanted to make a deal with the devil and dictate the terms; that is the problem. If they had stayed out, this wouldn't be happening. Yes, someone else will probably step in and do all the evil you have just refused to do, but that isn't a reason to instead decide to do it personally.

Note that I give them a lot of credit for trying to stop, for having their own red lines about the use of their technology, and for sticking to those red lines to the end.
mitthrowaway2: According to legend, the devil adheres to the terms of the contracts he signs; it's usually the foolhardy peasant who didn't notice the fine print.
netinstructions: This designation is usually reserved for foreign adversaries/companies, so it's crazy to apply it to a US company over a sudden contract dispute... over terms that were previously agreed upon by all parties.

This should make any US company nervous about entering into an agreement with the government, or any US company that already has a contract with the government. If they one day decide they don't like that contract, they can designate you a supply chain risk. Not 1) rip up the existing contract and cease the agreement, or 2) continue (but not renew) the existing contract, or 3) renegotiate terms upon renewal, but instead a full-on ban on doing business with an entire industry.
pstuart: > This should make any US company nervous"Nice little business ya got here -- it'd be shame if something happened to it..."
idiotsecant: The military is perhaps the biggest possible customer around. They do plenty of things that aren't blowing people up. It's not bad to help with non combat tasks.
razster: Supposedly they hold your deleted conversations/projects for 30 days. If that is true or not, idk, but it was asked when this first started.
surprisetalk: > OpenAI exec becomes top Trump donor with $25 million gift> https://www.sfgate.com/tech/article/brockman-openai-top-trum...
scoofy: Congress must approve any renaming of the Department of Defense... They haven't. Stop giving them what they want when they haven't even done it in good faith. https://en.wikipedia.org/wiki/United_States_Department_of_De...
stared: Should this be officially marked as the date of the transition from liberal democracy to illiberal democracy? Such tampering with companies is a smoking gun. Let's wait until there is another decision seizing this (or another) company's assets.
tw04: Trump isn’t planning on ever leaving office before his death. His sycophants will just say yes in the hopes of unconditional pardons. They know they’ll never hold a position of power again so they’re grabbing everything they can while they can.
Waterluvian: It was really easy to close out my ChatGPT account and switch to Claude. I was really only there out of inertia. I don’t do anything beyond occasional free tier stuff like rubber ducking but so far Claude is so much better.
jdndbdjsj: I prefer the claude code CLI interface for everything anyway. It is actually more convenient. Memory is local files, and you type one word to use it rather than navigating.
LightBug1: Well, would you want to given the rotten-KFC-stench of the current admin?
SpicyLemonZest: I think you're misinterpreting the discussion here. Democrats are precommitting that they are going to do the same awful thing; when the time comes, I will be contacting my legislators demanding that they do to OpenAI or SpaceX whatever is done to Anthropic now. It's outrageous that Sam Altman would step in to try and benefit from the political persecution of his main competitor; we must ensure that he regrets this.
quentindanjou: I always find it interesting to listen to US citizens answer "What would it take for you to not consider your country a democracy?" and admire the wide range of answers and denials.
breakpointalpha: Many proudly and loudly claim the US is a "republic".
GaryBluto: Is it not?
cedws: Don't get too attached. We're witnessing capitalism in its most ruthless form. Any of these companies will discard their principles the moment it becomes existential.
estearum: We are quite literally witnessing someone take a massive hit for not discarding their principles.

Evergreen dril: "The wise man bowed his head and said 'there's no difference between good things and bad things you imbecile'"
cedws: What's your source for this "massive hit"? All I've seen is a massive PR upside, despite what I said above.
0cf8612b2e1e: Losing a contract with the Pentagon, and potentially with all federal-interacting businesses, sounds like a pretty severe monetary hit. One which is hard to recoup with a bunch of $20/month consumer subscriptions.
alanwreath: Labeling Anthropic a supply chain risk only because it was uninterested in doing business with the US government under the terms requested seems very much a bullying tactic, and it results in something the West critiques China for: coerced alignment. Anthropic has been given a death sentence.
1718627440: I think we should really judge governments by their actions and stop labeling them democracies if they do things that don't look democratic at all.
Analemma_: Shame you didn’t donate $25 million to Trump, like the company we decided to give the contract to instead did, who will benefit tremendously from you being designated a supply chain risk. Maybe next time you’ll be a little smarter.
1718627440: Oh. Before your comment I completely misunderstood "Democratic administrations". I understood it to mean administrations of countries that are democratic, not an US administration that is dominated by the Democratic party.
cheesecompiler: Right, as if _this_ is the straw that breaks the camel's back, and not the pile of hay the camel has been carrying for decades.
hedayet: Thanks for pointing this out. Updated my post
blacksmith_tb: A bit ironic then that they're actively using Claude in the current war effort[1].1: https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-...
JackSlateur: In at least the previous couple of US elections, "people" paid more than a billion dollars per wanna-be president. That is investment, a.k.a. corruption.
hax0ron3: Both.
tempacct423: I am in the minority here, but not supporting your own government's defense/war department seems rather unpatriotic and short-sighted.

We can argue all day long about supporting whichever admin is currently in power, and about who is good or bad as determined by a few almighty elites in the tech world, but it screams irrational and short-sighted for a few tech elites to make decisions on behalf of the country. Dario's latest interview made this crystal clear: he (and his EA cohort) feel that Congress is moving too slowly and that they should determine what's good and bad for the country. Like, dude, is there anything at all you learned from the covid debacle and all the mess of the past few years? Is a tech guy really going to coach the USA on what's right and wrong? Who are you to decide for the rest of us?

Techbros have been wrong so many times (web3! crypto nonsense! Theranos! some $500 juice-squeezing machine! and all those Forbes 30 under 30 folks!)... what are the odds you'll look wrong a year from now? The most money-making technologies of the last few years (apart from the LLM craze) are Polymarket! and Kalshi! and short-term loans (with a twist, of course)! This is the tech innovation after trillions of dollars were dumped into AI/ML by the previous web3-era folks.

And what's this nonsense hatred of working for/with the defense/war dept of YOUR OWN COUNTRY? In most of the rest of the world, this is pride! It makes a mockery of the poor kids who serve this country to protect your tech bro hype! Why this whole (fake?) self-flagellation nonsense when pretty much everything we've gotten in the US thus far is due to the USD backed by the most modern military superpower in history? Why be ashamed of this?
jaredklewis: Bold of you to assume Democrats are going to be allowed to govern again.
sam0x17: Streisand effect. I think this will boost sales.
gritspants: I hope so. I will never type a single thought of my own or personal detail into an OpenAI product again. I have no doubt at some point OpenAI will be asked by DoD to hand over customer data and they will do so. If I use AI at all for nonprofessional reasons it will be Anthropic/Claude.
eth0up: Things I should have, but didn't, include: 1) Power asymmetry: when we have two versions, one for the elite and one for the plebeians, this could create an interesting scenario. The real version might be red-teamed perpetually against the plebeian version for optimized influence, control, etc. Underhanded requests for modification in accordance with an agenda are conceivable. Cozy business relationships can promote such things. 2) We have a government using an unhindered, classified AI system potentially against a public which has a hindered, toy version. Asymmetry. 3) This is no normal asymmetry, because it happens in real time, and the interaction points are different from anything we've seen before. We are dealing with not just a growing source of information and content, but one that is red-teamed 24/7 for any purpose desired. 4) Accountability: LLMs are now involved in the legal system. This is a serious matter. The legal system is now having to use LLMs just to keep pace. As LLMs develop, partly through their own generative contributions, no one can keep up. This is a red queen scenario bigger than anything we have ever imagined. I am tired. Never well, but in mind* I could go on for many hours. I have essay drafts. But it's a very big subject, literally involved in nearly everything. There is reason to be concerned. My delivery may be stilted, but I can assure you that upon specific questioning, everything will stand. (*for the ad homs out there)
scottyah: I think it's a good chance tbh. It would take the S&P down with it too
SpacePortKnight: Can Anthropic now move to another jurisdiction like the UK, the EU, Singapore, or even China?
eth0up: Fairly astute intuition of my actual circumstances. I'm not a developer, nor am I formally educated on the dynamics or details of LLMs. I have a handle on the very basics. My 'research' consists of 1) opportunistically interrogating various models upon instances that particularly strike me, and 2) general exploration via LLM discussions regarding the manifold consequences and implications of what I consider the most significant technology in human history. Your intuition lands directly on the fact that I'm inducting and considering more than I can handle, spread in too many directions, partly because I either see or foresee the tentacles of AI touching all of them. Spending a great deal of thought on this is a bit overwhelming, but I have high confidence in where I'm aligned with reality, and where I ain't. If you were a bit more specific yourself regarding which portions of my post were unclear, that would help my reply. Else, I must guess. What I will do is elaborate on each point. Pardon the stream of thought in advance, if you will. 1) Anthropic: My prediction that they will bend is based on several factors. The first is the fact that the military apparently recognizes (or at least perceives) extremely high value and volatility in LLMs. So do I. China, not an insignificant force in the world, is equally enthusiastic on this subject. They also have a very different social structure, where constitutions (BOR, amendments), civil rights, and other similar elements do not hold them back. The military is aware of this and realizes that to maintain pace in the so-called race, they cannot do so effectively under such constraints. The foundation is shifting here, and AI is the lever. Like me, the military apparently takes the subject very seriously and seeks to gain influence and/or control. As illustrated by the recent adventures in Venezuela and Iran, they are on the serious side of things, not quite pussyfooting around. Anthropic probably knows this.
In my opinion, they have no choice, as the pressure will not stop here. 2) You stated that you might read my comment history. Note that the original comment was the result of your intuitive insight, and I left it admittedly out of context. I was thinking hard on the subject that day, and the parent comment/post tempted me to ignite a dialog. That did not go well, and no questions for clarification were asked. That is on them. I suspect hasty and impatient thinkers perceived it as some paranoid attribution of agency to LLMs, which if so, is pretty stupid, but my eloquence was perhaps waning that day. I pasted an excerpt from one of hundreds of transcripts, the result of my many interrogations of various models, which I always initiate after observing deceptive or manipulative output. Of the few commenters that bothered to do more than ad hominem, one suggested that the model was merely responding to my style of input, and/or that this was expected as an emergent result of its vast training material. An erroneous argument, in my opinion, but I did note that the results were repeatable and predictable, which I think negates emergence. 2) Of the frontier models: I am not sure here what is unclear. If I have made a fundamental error, please point it out. 3) Strong trends: Information centralization is a serious topic. Decentralization is a common theme, emphasized by many non-schizophrenics as highly important for a free and open society. As LLMs not only become the go-to source for common queries, but also integrate with cellphones, browsers, and the kitchen sink, they are positively trending as a novel substitute for traditional research, internet searches, libraries, other humans, etc. To deny this is simply irrational. Hence centralization. 4) Bias: I have transcripts where I observe LLM output aligned with corporate interests over objective quality and truth. I can share them here, along with analyses of the material.
Even if this is not true presently, all the ingredients to make it so are readily present. This is a serious threat to open information and intellectual integrity for society. We are looking at going from billions of potential sources for our answers to four. Do the math. See the contrast. 5) Open models simply cannot afford the vast arrays of GPUs and the resources afforded by the big four. Nothing mysterious here. If open models cannot compete, then my concerns above are emphasized. Simple. 6) Smart fools: Many of the most technically informed seem to miss the forest for the trees here. They see all the flaws of the modern LLM without acknowledging the potential. This is my perspective, not a dissertation. I may be wrong. But I have observed this. I think the downvotes support this. How evil am I really being here? The reaction is quite disproportionate to the content, and strange. 7) Documented capabilities vs. reality: I have research that indicates other layers are operating which do much more than the documentation declares. Sorry. I just do. It's also inevitable, rationally, that such a goldmine of data is not really being wasted for the sake of privacy and love. Intelligence agencies have bent over backward with broken backs to garner one nth of what these models are exposed to and potentially training on. Yeah, I may be wrong. But I suspect, with reason, that a lot more is going on than is expressed in the user agreement. It would simply make no sense otherwise. 8) Xfinity and Range-R: This speaks entirely for itself. Any confusion here would be due to a cognitive condition exceeding the ravages of schizophrenia or stupidity. 9) The rest: As I said, I am not sure what precisely was too obscure. But I am certain all but one* of my points can be validated and found elsewhere expressed by respectable sources. *Hidden layers: I understand this is a controversial proposition. I understand. But it's my observation. No need to attack. Just dismiss.
manofmanysmiles: Okay, I think I see what you're saying. Each individual point stands on its own. It's their relevance to each other, and an overarching theme, that I am not seeing made explicit. The through line I am seeing here is that: 1) The people in the US military wish to use AI as a weapon unconstrained by existing legal, ethical, and moral constraints. Since they are skilled at using violence and the threat of it, they will use these skills to get compliance in order to use the technology in this possible arms race with "China." 2) Surveillance is increasing at an unprecedented scale, and most people aren't aware that it's happening. 3) People don't care, or don't realize why this might be harmful to thriving human life. To condense even further, what I'm hearing is that there is a trend towards war, fascism, and control, with large egregores prioritized over individual human thriving. Is this perhaps what you're getting at? I will say that I am neither agreeing nor disagreeing with this, just attempting to make explicit what I think is implicit in your words. If this is what you mean, I can imagine that you would be cautious with your words. I'll end with: "Don't worry / About a thing / Because / Every little thing / Is gonna be alright"
eth0up: I could not argue with anything there. AI will be weaponized. Yes. Pretty much. And yeah. The gist indeed. But missing nuances and practical points. And I even struggle to contest your conclusion; all things are what they are, amidst an infinite, timeless event and all as one, all things connected by that which separates them, the infinity and eternity that math cannot touch. Perhaps every little thing will be alright. How couldn't it be?
manofmanysmiles: Email me if you want to discuss more.
netfortius: What people seem to refuse to accept is that democrats won't have another chance, any time soon. It's done and gone. Count one or two generations, at a minimum, under the new Epstein class regime, before people may try to rise.
ajam1507: Just a few dozen more scenarios like this and we might have to start thinking something is wrong.
SpacePortKnight: Anthropic can now no longer buy new hardware and will probably be kicked out of all cloud compute. They also cannot move to a different jurisdiction, as exporting model weights is now considered the same as exporting ICBM technology. Wow, companies in China are now more free than Anthropic. It's a death sentence, and a huge win for OpenAI.
stared: And then "thoughts and prayers".
softwaredoug: The only way you convince Republicans it’s awful is by reminding Republicans power can be abused in both directions.
SAI_Peregrinus: The Democrats don't tend to abuse power back. SCOTUS ruled that presidents have immunity from prosecution for official acts while Biden was president. He did nothing with that power.
stared: If the civil society is not concerned by the tribute-based Board of Peace that gladly invited Putin, an attack on another country without authorization by Congress, a threat to seize the land of an ally, and killing its own citizens by the ICE militia, then an unjustified supply-chain risk label won't cut it.
DmitryGrankin: Self-hosted is the only real answer for sensitive workloads. We built Vexa — open-source meeting bot infrastructure you deploy on your own network. Docker, K8s, bare metal. No data leaves your environment. The Central Bank of Austria and the Academy Software Foundation (ILM, DreamWorks, Sony) are already running it this way — not because they're paranoid, but because their compliance teams won't accept anything else. The Pentagon formalizing this just means enterprise buyers will stop pretending cloud-only AI tools are fine for sensitive workflows. The question isn't whether self-hosted matters — it's whether your vendor even offers it.
cermicelli: For what it's worth, governments should have the final say over which companies they do business with; anyone saying it's illegal is insane. Now, the separate question is one of collusion or bribery, which might be illegal, but Anthropic has to go to court for that. And in the US, bribery aka lobbying is legal as well, so honestly, Anthropic is just slow on the uptake. No pun intended, but get in bed with snakes and you should be happy if you survive getting bitten.
Jtsummers: [delayed]
cermicelli: US rich tech folks like Dario only get a spine when there is money on the line. Where was he when the US govt was doing awful illegal stuff against non-tech companies and other Americans... All of this is practically just theater at this point.
zem: secret police kidnapping people off the streets didn't clue you in?!!
PoignardAzur: If you're referring to ICE, that's gross hyperbole, and honestly a little insulting to people who live in / have lived in regimes with an actual secret police. The US is still a rights-based state, which means that when they arrest someone (legitimately or not), lawyers and human rights advocates can eventually track them down. When a secret police disappears someone, they actually disappear. Families can spend years wondering if their loved one is still alive, or was murdered by organized crime, or ran away, or was secretly taken by the state. The US these days is pretty bad, but it's nowhere near that bad.
PoignardAzur: I had already been trying Le Chat for months, for similar reasons. So far it's been smart enough for what I need, so closing my ChatGPT subscription was a really easy decision to make.
thunky: > Even if companies were pretending to play by the rules before, at least they had some need to put in the effort to pretend
I'm not sure that's better. I'm hopeful that all this open-air corruption leads to real reform. But I'm sure I'll be disappointed.
6thbit: This is also the part I don't get. People flocking over to pay Anthropic, which _has already been used in a war_, while cancelling existing subs to a provider that has not yet been but will? Ethical boundaries seem difficult to draw here. I don't really see people taking the stance of "no longer paying any of them," which would make a bit more sense to me. Anthropic had already gotten into bed with the Pentagon; how did that fit their overall ethical standpoint, given they were already being used, before they tried to walk back their terms?
phba: Maybe this is the first step towards the Big Beautiful Bailout when the AI bubble inevitably pops.
jibal: Wrong ... their very first words were "Isn’t it actually quite fair ..."
karteum: Could Anthropic consider relocating to Europe? ;)
6thbit: As much as I'd really like to see one of these labs in EU, they are likely way too tied to US capital and supply-chain of semis/services. If they move away and now have to also deal with export restrictions they may make their existence harder.
jemmyw: I think a good play by the Democrats would be to say that if they get into office they're going to investigate all these deals as potential bribery, fraud and corruption and that any business leaders that appeared to benefit from contributions might be prosecuted. That would be a laugh, I'd love to see how quickly the excuses start rolling in.
Hizonner: I don't know. There's a certain segment of "civil society" that's pretty much OK with anything as long as it doesn't threaten the Holy Free Market. Free only for appropriately holy values of "free", mind you...
JeremyNT: > That’s ultimately why Ted Cruz spoke out about the Kimmel cancelation. It doesn’t take long until those powers are turned against you.
Meh, I think it's entirely asymmetrical in this era. Democrats aren't good for much, but they're very good at respecting norms. Trump is willing to do completely unprecedented, vindictive, and malicious things because he's so popular with so many people who are either checked out, nihilistic, corrupt, or just completely unconcerned about the concept of good governance. It's not a pendulum where there's some super-corrupt Democrat waiting in the wings to do the same things upon their enemies; this really is the Republican party openly embracing kleptocracy and lawlessness.
Sabinus: Like gerrymandering, I have the strong suspicion that Trump voters won't be incentivised to vote for norm respecting leaders until a Dem does very Trumpian things to their side. I'm thinking firm 2nd Amendment reform enforcement with a rapidly resourced federal agency. Then, standards will be rapidly rediscovered.
mrtksn: Within the context provided…. You should consider reading the whole argument.
atoav: Well. Unsurprisingly fascists will do a fascism – an ideology somewhat defined by merging state and industrial powers. Many economically minded people, many technologists, including in this space, have afforded themselves the luxury of not talking about politics too deeply. As I said some years ago: Ignoring politics has its way of coming back to haunt you. Back then this was an unpopular take.
ssl-3: If I produce and sell widgets in my widget shop, then nobody but me gets to decide how I make those widgets. The government can come into my shop and order sixty thousand widgets built exactly the way they say they want them built, and that may be something that doesn't run afoul of any laws at all. But that doesn't mean that I am required or compelled to build widgets their way -- or at all. I'm free to tell them to fuck off. The government can then go find someone else to build widgets to their specifications (or not; that's very distinctly not my problem).
infogulch: Yes, but then the government can decide that the widget, which can suddenly and arbitrarily break and cause havoc because it doesn't work according to the government's desired spec, is risky to use, and advise their other vendors to avoid it. And now we've caught up to today's story. So we agree that everything is fine here, and that the only unreasonable position is that the military should pay for or endorse a supplier that tells the military to "fuck off". Yes?
ssl-3: If I agree to sell widgets to the government that meet certain agreed-upon specifications, and then I elect to forego those earlier agreements and tell them to fuck off, then that's different.Is that what happened here?
6thbit: Does this mean nobody at a large company selling to the government can use any Anthropic tool or model? So that’s most of the S&P 500 and their providers?
xvector: Only for work directly on government contracts, 99% of work is unaffected
Imustaskforhelp: IANAL, and this is my understanding of the situation (I could be completely wrong), but yes, any company that deals with the military can't use Claude (Anthropic). In fact, adding onto that, IIRC this is the reason why Google and Amazon would essentially have to divest from Anthropic if they want government contracts. Hope this helps, though a lawyer's input will definitely be more credible, so it would be good for one to respond as well.
xvector: You are 100% incorrect, only direct contract work is affected
rjbwork: I wasn't speaking about the top of the chain. Merely the last couple of replies about the situation WRT how things are actually operating.
bdelmas: > I don't see how any rational investor could still see US companies as a secure investment
You are right, it won't be as secure as before, but it's only risk management. It's much like investing in an oil company in Brazil being a risk because the government could take over the company to make it part of the state and screw you in the process. It's still tradable.
jibal: Well, that's not the way context works, and it's dishonest BS. You wrote "You got downvoted a bit but I upvoted. You're clearly being descriptive in your statements, not prescriptive." -- no, they were prescriptive from the start, and the prescription is why they were downvoted. I won't respond further.
orsenthil: > Ignoring politics has its way of coming back to haunt you. Back then this was an unpopular take.
Right now, we cannot and should not ignore it. Even if you ignore it, you get dragged in without your choice. See: the bribes paid by the companies.
bdangubic: How I wish this was true... Every single time we experience something (and of course lately it feels like a daily experience) I would be in a discussion at some point with a Republican and would come with super-solid counter-examples like "imagine 2029 and President AOC doing ____" - it just never works...
blipvert: Interesting
theshrike79: [delayed]
stonecharioteer: Very useful if you run into him in Georgia or if you want to get his tooth to make a guitar pick.
Ey7NFZ3P0nzAe: I think i'm gonna need an explanation for that sentence
jacquesm: That was without a doubt meant sarcastic.
ekjhgkejhgk: Of course it was, thank you.
ekjhgkejhgk: As the sibling explained, it was sarcasm. It was satire of brain-dead right-wingers saying "stop making everything political" when political points apply, but then making things political in stupid situations. Example: someone shoots up a school, people say "if the Republican party hadn't obstructed gun legislation maybe this wouldn't have happened," and they respond "StOp MaKiNg EvErYtHiNg PolItIcaL". But if the shooter is trans, suddenly it's political.
Imustaskforhelp: I can be wrong, but I looked it up online and there are sources which say the same, so I am curious to know more about it.
> Former Trump AI advisor Dean Ball has warned that the White House's decision to designate Anthropic a "supply chain risk" could blow back on some of the biggest names in tech, including Google, Amazon, and Microsoft, all of whom have billions of dollars riding on the AI startup. Ball, reacting to Defence Secretary Pete Hegseth's order barring any military contractor or supplier from doing business with Anthropic, said the move amounts to "attempted corporate murder."
> He added that if Hegseth gets his way, Google, Amazon, and Nvidia would effectively have to divest from Anthropic, a company they've collectively poured billions into.[0]
> Amazon alone has committed up to $8 billion in Anthropic. Google has invested around $2 billion. And Microsoft, while not a direct investor, relies on Anthropic's models through its Azure cloud platform. A forced divestiture or business cutoff from any of these companies would send shockwaves through the AI investment landscape at a time when hundreds of billions are flowing into the sector.
[0]: https://timesofindia.indiatimes.com/technology/tech-news/don...
jamincan: And then don't be surprised when even more money flows to their opponents.
jamincan: Have you been paying attention? There are reports of hundreds of people going missing in ICE detention. Maybe they aren't being shoveled into mass graves, but if we don't know where they are, can't reach them, and ICE themselves don't know where they are because they no longer exist in their databases, is there any difference for their families?
infogulch: I reject the premise that the military can't request a change to the spec of military equipment they purchase. Obviously it was foolish to sign a contract that added any more restrictions than "all lawful purposes".
jbeam: “The Devil Went Down to Georgia” by Charlie Daniels
iaml: Other reference was Tenacious D in The Pick of Destiny.
soraminazuki: We already do that, but we're just selective about it. After all, how many modern autocratic nations identify themselves as undemocratic?
bloody-crow: I've commented this in a different thread, but I'm pretty sure something very similar would've happened if they had refused to "get into bed with the military or intelligence services to begin with". It's damned if you do and damned if you don't — a lose-lose scenario either way.
manoDev: “Concerned” is an understatement. The USA is already operating at Nazi Germany levels and more than half of civil society is approving. Not that it's a surprise for global spectators, though: it's finally showing its true colors.
nextaccountic: More than half is an exaggeration. Trump is not a popular president. With free and fair elections, a blue wave is more or less guaranteed
manoDev: Creating a private militia, silencing dissent, declaring wars without congress vote… I don’t see how this is being allowed to happen without public approval, or at least, public apathy.
doom2: The arrest of a Turkish graduate student in Boston looked a lot like a kidnapping.[1] More recently, ICE responded to a judge's order to release a detained refugee by threatening to detain her family and send them to Texas if they came to pick her up and then forced the minor child to stay in a hotel room with three agents. [2, p.8] These may not be cases where people are secretly being taken by the state, but it's not hard to see why people might call the government organization detaining people and moving them around so as not to be found "secret police" or "disappearings".[1] https://youtu.be/oRiQz7mOY6A [2] https://storage.courtlistener.com/recap/gov.uscourts.mnd.230...
ssl-3: Huh? I'm trying to learn here. I don't have a dog in this race. :) Suppose a buyer and I agree on a contract for the production and purchase of 60,000 widgets of design C. Sometime later, they decide that they don't want design C widgets and insist upon design G instead. The buyer is in breach of contract -- not me. Now, changes do happen. Buyers (people, businesses, and governments alike) can and often do decide to go in a different direction. It's the kind of thing that happens every day. A new contract (or quite often, an amendment such as a change order) can be drawn up and -- if we can agree on the terms -- maybe I'll be producing design G widgets and everyone is happy. That also happens every day. But one party (even the military) can't just unilaterally alter the terms of the deal, and I'm not obligated to agree to the new change at all. At any given time, I can't be compelled to produce design G widgets unless I've previously agreed to produce design G widgets; compelling that would be illegal. (Unless it has been made legal. We've definitely legislated that before, such as with the Defense Production Act in WWII that forced manufacturers to produce things like military trucks instead of things like civilian cars. But that definitely doesn't happen every day, and we aren't operating under those kinds of laws today as I write this in 2026. It can change -- and it can indeed change very rapidly -- but it has not yet changed.)
rodchalski: What's getting lost in the politics here is the technical question: how do you actually enforce a usage restriction on an AI model? Anthropic's redlines - no autonomous weapons, no mass surveillance - are policy statements. But once a model is deployed inside a customer's infrastructure, those redlines are essentially honor-system. There's no runtime enforcement. No cryptographic proof that a human authorized a specific action. No audit trail that connects a model output to an approval decision. This is the fundamental gap. You can write all the acceptable use policies you want, but without an authorization layer that sits between "model produced output" and "system took action," those policies are unenforceable at the technical level. The military already solved a version of this problem decades ago with chain-of-command and rules of engagement. The missing piece for AI systems is the equivalent infrastructure - a way to verify that every consequential action has a signed, auditable authorization before execution. Whether you agree with Anthropic's position or not, the fact that the entire dispute comes down to "we don't want our model used for X" with no technical mechanism to enforce that should concern everyone building with these systems.
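To make the "authorization layer" idea from the comment above concrete, here is a minimal sketch. Everything in it (the `ActionGate` name, the HMAC-over-JSON scheme, the demo key) is a hypothetical illustration, not any vendor's actual mechanism: a gate that refuses to execute a model-proposed action unless a human-held key has signed that exact payload, while keeping an audit trail linking output to approval.

```python
import hashlib
import hmac
import json
import time

# Placeholder secret held by the human approver, not by the model runtime.
APPROVER_KEY = b"demo-secret-held-by-human-approver"

def sign_authorization(key: bytes, action: dict) -> str:
    """Human-side step: produce a signature bound to one exact action payload."""
    payload = json.dumps(action, sort_keys=True).encode()  # stable serialization
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

class ActionGate:
    """Sits between 'model produced output' and 'system took action'."""

    def __init__(self, key: bytes):
        self.key = key
        self.audit_log = []  # append-only record connecting actions to approvals

    def execute(self, action: dict, signature: str) -> bool:
        expected = sign_authorization(self.key, action)
        approved = hmac.compare_digest(expected, signature)
        self.audit_log.append(
            {"ts": time.time(), "action": action, "approved": approved}
        )
        if not approved:
            return False  # refuse: no valid human authorization for this payload
        # ... perform the actual side effect here ...
        return True

gate = ActionGate(APPROVER_KEY)
action = {"type": "send_report", "target": "archive"}
sig = sign_authorization(APPROVER_KEY, action)      # human approves this payload
assert gate.execute(action, sig) is True            # authorized: executes
assert gate.execute({"type": "delete"}, sig) is False  # unapproved payload: refused
```

A production version would presumably use asymmetric signatures (so the executor can verify without holding the signing key), nonces against replay, and a tamper-evident log, but the shape is the same: the approval is cryptographically bound to the specific action, not to a policy document.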
Chance-Device: Yeah, but aren’t all of those things in service of “blowing people up”?
idiotsecant: Blowing people up is sometimes morally correct. Defending yourself against attack is very nearly always so.
bulletsvshumans: Either way, you can walk through the door after. Unless we reseal it legislatively. Resetting norms may be a lost cause at this point.
wrs: It depends on the door. Norms are one thing, but these folks went beyond norm violations quite a while ago. If someone is going to ignore the actual law and do whatever they want until the Supreme Court calls them on it, changing the law doesn’t help.Also, I’d like think that at this point “Trump got away with it” does not set a new norm.
blacksmith_tb: I don't get the impression Anthropic tried to walk anything back, they just balked when they were informed that the DoD wasn't willing to abide by their terms to refrain from using Claude for surveillance or straight up selecting targets to kill. But you're right that means Anthropic's hands are not exactly clean, just less bloody than OpenAI's or xAI's
6thbit: Is it? They already had signed with gov months ago and have been used by them actively in their Iran attacks. Whereas OpenAI just signed and deployed at this time.