Discussion
evil-olive: direct link to the Substack post (instead of a Twitter post linking to it): https://www.nonzero.org/p/iran-and-the-immorality-of-openai
DoctorOetker: That must be one of the most biased analyses on Iran I have read.
skybrian: There doesn't seem to be any reporting in the blog post linked to by this tweet? Here's the news story it seems to be based on: https://www.washingtonpost.com/technology/2026/03/04/anthrop...
trollbridge: Reminder that the very first computer was built for computing artillery tables. Technology has generally been driven by war, and now is no different.
genxy: Wait till Claude finds out.
throw310822: Anthropic will have a lot of explaining to do. I'm serious, Claude's self-image is clearly going to be affected by this.
throw310822: Or maybe one of the less biased?
whattheheckheck: Without fluff, where is the direct claim and evidence?
tantalor: Unfortunately, WaPo has lost credibility for this type of reporting
esperent: Actual article, rather than Twitter link: https://www.nonzero.org/p/iran-and-the-immorality-of-openai

This uses this Washington Post article as a source: https://www.washingtonpost.com/technology/2026/03/04/anthrop... (non-paywalled: https://archive.is/bOJkE)

As far as I know, wasn't Claude banned from use in the Pentagon a few days ago, exactly for taking a weak stance against this kind of thing?

> Even if Amodei’s scruples had somehow magically prevented the bombing of that school, Claude would still be an accomplice to mass murder.

This point from the nonzero blog I take issue with. If they had used Google Maps to pick targets, would that make Maps an accomplice? The people who pushed the button to launch the missiles that hit the school, and the people who ordered them to do that, are fully responsible here, not the tools they used.
g947o: > Claude banned from use in the Pentagon a few days ago

Not exactly; you might want to reread the news to understand what's actually happening.
gexla: Didn't read the articles, but at least the planners know and understand a map.

A map is a static reference. A calculator is deterministic computation. An LLM is probabilistic generation. In high-stakes environments like military planning, tools that generate new claims rather than reference known data introduce a different class of risk.

Yes, everyone is responsible for their own decisions. But then circle back to risk: how can the planners be sure they aren't dealing with hallucinations, questionable data, outputs that differ based on prompts, and a long list of other things?
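The distinction gexla draws (static reference and deterministic computation vs probabilistic generation) can be made concrete with a toy Python sketch. The function names and candidate strings below are invented purely for illustration and bear no relation to any real system:

```python
import random

def calculator(a, b):
    # Deterministic computation: the same inputs always
    # produce the same output, every run.
    return a + b

def toy_generator(prompt):
    # Probabilistic generation: the output varies from run to run,
    # loosely analogous to temperature sampling in an LLM.
    # The candidate answers here are made up for illustration.
    candidates = [f"{prompt}: site A", f"{prompt}: site B", f"{prompt}: site C"]
    return random.choice(candidates)

# The calculator is reproducible; the generator generally is not.
deterministic = all(calculator(2, 3) == 5 for _ in range(100))
outputs = {toy_generator("locate depot") for _ in range(100)}
print(deterministic)     # True
print(len(outputs) > 1)  # almost surely True: distinct answers to one prompt
```

The point of the sketch is the risk class, not the mechanics: a verification process built for tools in the first category does not automatically transfer to tools in the second.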
bhhaskin: What I can't understand is why? Let's ignore the moral question for a second. I can't imagine an LLM is the right tool for this at all.
cyrusradfar: Whether this is confirmed or not, we have countless examples of AI being used in targeting in Gaza.

Anthropic were very vocal, well before this happened, that they were against this use case.

I don't blame them. Blaming them for these use cases is like blaming MySQL for storing the lat/long of the school. AI can't be held accountable; the company was trying to protect us and, yes, it was too late.
floralhangnail: "A Computer Can Never Be Held Accountable Therefore a Computer Must Never Make a Management Decision"
Fire-Dragon-DoL: I mean, the problem is whoever follows the suggestion without double-checking.
defrost: > The people who pushed the button to launch the missiles that hit the school, and the people who ordered them to do that, are fully responsible here, not the tools they used.

Absolutely. A real issue here is the normalizing of "AI scapegoating". The real failure? Not following through on human verification of a "strong lead".

The Iran school site absolutely was _once_ a target, in the distant past - it's sited on and within a former Iranian Guard post with airstrip, etc. The part that needed strong checking was "history since last identified as a target" - and that site has a history of disrepair and abandonment.

The debatable issue was whether the larger site did indeed store significant military assets underground, etc., which was entirely possible.
angry_octet: The IDF are notorious for using ROE as a cover story rather than rules. The only ROE they follow with any consistency is "Don't shoot Jews". Given the IDF bias towards killing unarmed Palestinians, selling them any autonomous targeting capability under the assumption of an ROE is grossly culpable.
abdelhousni: Gaza as a defining standard for war crimes and state terrorism : https://www.972mag.com/mass-assassination-factory-israel-cal...
bbshfishe: No. We need to stop making things up and spreading propaganda.
hexasquid: Two-Face's coin is responsible for his actions.
kouteiheika: > Anthropic were very vocal, well before this happened, that they were against the use case.

> I don't blame them. These use cases are like blaming MySQL for storing the lat/long of the school. AI can't be held accountable and the company was trying to protect us and, yes, it was too late.

They weren't trying to protect squat, and were not against this use case. Their only two red lines are "no mass domestic surveillance" and "no fully autonomous killing until the AI gets good enough to be able to do it". Assuming the story is true, there's no chance this was a fully autonomous act and it was most certainly approved and executed by people.
esseph: [delayed]
esseph: Volume vs accuracy.

"Maybe we break a couple eggs making this omelette!"
Cerium: When you have a hammer as big as an LLM a lot of problems start to look like nails.
polynomial: Well at least you know who to fire
razster: I bet there is some moronic explanation. I have no doubt at this point, given how things are going.
mrcwinn: For those following closely, I highly recommend Dropsite News and Breaking Points. Excellent coverage.
skybrian: Maybe, but on any given subject, most of us haven't done any investigation at all. An article written by actual journalists based on what sources tell them beats whatever our uninformed opinions are on the subject.
abrkn: Technology isn't intrinsically good or evil. It's how it's used. Like the Death Ray.
eleventyseven: > Didn't read the articlesThen kindly shut the fuck up.
hackable_sand: It wasn't
rkagerer: https://archive.ph/bOJkE
maplethorpe: I mean, they've made the argument that their computer learns like a human, so it should be able to get away with ingesting all the data it sees, the same way a human does. Why shouldn't it also go to jail, the same way a human does?
mentalfist: > Consider, for example, Bill Clinton’s decision to expand NATO, a decision that paved the path to the Ukraine War. Pretty much every expert on the Soviet Union opposed this move, some of them vehemently

Bullshit. While many experts opposed the move, many were in favor of it too. And nonchalantly deciding it paved the way to Putin's senseless attack on Ukraine is a dumb Russian talking point.
cooloo: No evidence; low-quality article. Meanwhile the Iran regime bombs civilians all over the Middle East.
roncesvalles: It's basically an OSINT query tool.
simondotau: Like so much war reporting in the past decade, there's a lot of low-effort moralising and low-confidence maybes being strung together to create a headline narrative that the body text simply cannot cash. And it waves away the critical distinction between bad intelligence and actively targeting civilians.

Surely nobody is arguing that an Anthropic AI, with perfect knowledge that it's a school, and that students would be present, chose to knowingly murder children. Assuming this was a US military strike and not a false flag, surely the failure here was in relying on outdated intelligence about an ex-military building.

The use of AI here is simply not relevant.

The criticism I have for the current US government is massive, and my disgust for the current leadership is as intense as anyone else's here, I'd wager. But there's also no doubt in my mind that if they knew it was a school, they would not have targeted it. By contrast, Russia's government shows who they are when they target civilians in Ukraine. That distinction is important, and we muddy it at our own peril.