Discussion
Exclusive: AI Error Likely Led to Girl’s School Bombing in Iran
readitalready: I bet Anthropic has logs of whatever prompt was used to target the girls' elementary school, and can find out whether the AI was directly responsible for their deaths.
michaellee8: I suppose they are vibe-targeting now
WarOnPrivacy: "The immediate theory is that the AI program included the school’s position based on older, archived intelligence. The logic behind the launch, and the mechanics of who authorized it is unclear." Is this a "We don't know what's in the black box" scenario?
muddi900: It is a way to absolve responsibility
EFreethought: Companies are spending billions so they can say the dog ate their homework.
giacomoforte: This AI risk is the same as with Tesla's Autopilot a couple years back...People believe for some reason that the AI is 99.99% correct and the warning not to trust it too much is just legalese.
lysace: It’s an online-only local news site for Worcester, MA. There is literally no information on the site about them.
lelanthran: This resolves nothing. One side will claim the targeting was unintentional and scapegoat the AI. The other side will claim the AI is a scapegoat. Both sides can be true at the same time, regardless of whether it was intentional, AI error, or human error.
lukeschlather: So the day after the Trump admin blacklisted Anthropic for insisting they not use AI to make kill decisions, the Trump admin uses Claude to kill a bunch of schoolgirls in exactly the way Anthropic was warning them would happen. And now Anthropic is the supply chain risk for being the only people involved with the slightest bit of sense.
walletdrainer: Wasn’t Hegseth pretty clear that the logic behind murdering Iranian children was to prove that he isn’t “woke”?
readitalready: Indeed. We all know, just based on his proclamation of support for terrorizing the Iranian people the other day, that the US is perfectly willing to target and kill entire schools full of children. Killing 150+ elementary school girls was no accident. Do not let them get away with claiming it was an "error". They purposely targeted an elementary school filled with little girls to kill them.
moogly: Easier and cheaper to just say "they were Hamas/Hezbollah".
dodomodo: The statement is complete speculation; there is no proof at all. And mistargeting was a thing way, way before AI.
anonymouscaller: Funny how Anthropic's press team has been working overtime to assure the public they're the AI company on the right side of history, yet nothing could be further from the truth...
tdeck: When you choose to serve the American military, knowing both its history and the fact that it's been facilitating at least one genocide over the past few years, you can't just claim "we didn't want anything bad to happen".
stevenhuang: What's up with sites detecting adblock and popping up modals so you can't even interact with the page anymore? Firefox and Chrome on Android. Guess that's hint enough that this outfit is garbage and not reputable. Flagged and added to domain block lists.
zarzavat: This is not an "AI error". This is a human decision to use known unreliable AI for waging war, in full knowledge that civilian death is an inevitable consequence. If you decide your strike locations using a pair of dice, it's not a "dice error" when you blow up a school.
b00ty4breakfast: the US military has never had any real qualms about murdering innocent civilians; they've just had a problem admitting it out loud to the public. You'll recall the multiple wedding parties that were massacred by drones during the Obama administration, or the Winter Soldier testimonies from Iraq and Afghanistan ca 2008, or the original Winter Soldier investigation in 1971, or the infamous My Lai massacre ca 1968. Hegseth is skipping the normal ritual of denial and fake regret, but this event is firmly within a well-documented lineage going back decades.
slopinthebag: The final boss of "I don't look at the code"
thefz: The final boss of "I don't care about nonwhite human lives"
palmotea: > People believe for some reason that the AI is 99.99% correct and the warning not to trust it too much is just legalese.

That "some reason" is science fiction plus some modern-day hype. The sci-fi trope of AI is that it's something more intelligent and perfect than any human (e.g. Data from Star Trek or even HAL, despite his malfunctions), and the people who are selling LLMs are happy to let people over-estimate LLM capabilities. There's no sci-fi model for subhuman and kinda crappy generative AI.
parvardegr: Now politics mixes with everything
OutOfHere: This website doesn't even load correctly.