Discussion
When Is Technology Too Dangerous to Release to the Public?
buremba: The playbook is that a model is too dangerous to release until a competitor releases a competing model that beats yours.
make3: the thing could barely make full grammatical sentences, it's funny to see that even then they were overclaiming the fuck out of their models
anuramat: imho it was more reasonable back then to claim "agi soon" -- when nobody really knew how it scaled
romanzubenko: I remember seeing this article and the example output text and feeling: what's the big deal? It wasn't until I got early access to GPT-3 that I thought something big was about to happen. At the time only a few companies/YC alums had access, and I remember showing the playground to people outside of tech; my friend just kept asking, "How does it know about my [x] domain? Is it a trick?"
measurablefunc: At which point you tell them they are being extremely reckless but subtly mention that something new & even scarier is being developed internally that's going to blow everything else out of the water.
cinkhangin: I think they were unintentionally right. The growing amount of low-quality content everywhere could become a real problem.
ajsnigrutin: Now imagine all that low-quality AI slop being posted online, and a new generation of AI that will "learn" from it, output its own version of AI slop, which will eventually end up online again for yet another generation of AI to "learn" from. Something, something, Idiocracy comes to mind.
JackYoustra: AI systems far weaker than GPT-2 have had terrible effects. Much of how information and power gets distributed today flows through reward-hacking recommendation engines, powered by even weaker models. And yet, somehow, it's treated as not merely disagreeable but unbelievable that other people may have reasonably believed, and may still believe, that these things are too dangerous for widespread release?
measurablefunc: I'm wondering when people are going to figure out the doom marketing playbook.
villgax: Very "safe" to use the outputs to make a better model, because scraping the internet for publicly accessible content means your publicly shared outputs only become part of the same corpus lol
selcuka: It was a 1.5B parameter model. It was still impressive for 2019, but yeah, it was nothing to worry about.
nsmog767: Zero mention of Sam Altman…interesting
JumpCrisscross: Had a minor conniption until I saw the year. OpenAI just struggled to close a round. And the New Yorker just published an unflattering profile of Altman [1]. So it would make sense they'd go back to the PR strategy of "stop me from shooting grandma."
[1] https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...
52-6F-62: > "stop me from shooting grandma."
That is the most succinctly I've seen this whole thing put.
sam1r: Absolutely agreed. I am going to use this term on overdrive for the next 24 hours.
bertmuthalaly: Now that I see this in the light of the recent sama article, I wonder whether the point of the "it's too dangerous" rhetoric is to enable "Open" AI to avoid open-sourcing the weights and process. A convenient pretext for maintaining a monetizable competitive advantage while claiming a benevolent purpose.
subroutine: They finally did release GPT-2 under the MIT license. That was the last version (a 1.5-billion-parameter model) they would release as open source. GPT-3, for comparison, has 175 billion parameters.
nradov: Lol. The vast majority of content has always been low-quality. Those who believe that things were better before LLMs have selective memory.
bitwize: This leads to a well-documented phenomenon known as model collapse. You know how if you blur and sharpen an image repeatedly you eventually end up with just a rectangle of creepy, wormy spaghetti lines? You lose information on each blur, and then ask it to reconstitute the image with less information on each sharpen, until there's nothing recognizable left. Training a model is like the blur, and generating from that model is like the sharpen. Repeat enough times and enough information is lost that you're just left with "wormy spaghetti lines": in an LLM's case, meaningless gibberish that pretty closely resembles the glitchy stuff said by the cores that fall off GLaDOS in Portal. I dunno, read the paper and be the judge: https://www.nature.com/articles/s41586-024-07566-y (to jump to the last output sample, C-f "Gen 9"). Of course, you may be talking about the human aspect of this. Gods willing, we'll realize that our LLMs are spewing gibberish and think twice about putting them in all the things, all the time. But the scenario I fear isn't Idiocracy; it's worse: a community of humans who treat the gibberish as sacred writ, Zardoz style.
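For a concrete picture of the dynamic that comment and the paper describe, here is a minimal sketch, assuming the same kind of toy setup the paper analyzes theoretically: a Gaussian "model" repeatedly refit to samples drawn from the previous generation's model. All names and parameters below are illustrative, not taken from the paper.

    import random, statistics

    # Toy model collapse: generation t's "model" is just (mu, sigma),
    # fit to n samples drawn from generation t-1's model. Each refit
    # loses a little information about the spread, so sigma tends to
    # drift toward zero over many generations.
    mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
    n = 10                 # finite training set; smaller collapses faster

    for gen in range(1, 31):
        data = [random.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(data)     # refit the mean
        sigma = statistics.stdev(data)  # refit the spread
        if gen % 5 == 0:
            print(f"gen {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

Individual runs vary, but run it a few times: mu wanders while sigma tends to shrink, so each generation grows more confident about a narrower slice of the original distribution. That is the blur/sharpen loop in miniature.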
SilverSlash: Someone needs to make a compilation of all these classic OpenAI moments, including hits like GPT-2 being too dangerous, the 64x64 image model DALL-E being too scary, "push the veil of ignorance back", AGI achieved internally, Q*/strawberry being able to solve math and making OpenAI researchers panic, etc. etc. I use Codex btw, and I really love it. But some of these companies have been so overhyping the capabilities of these models for years now that it's both funny to look back and tiresome to still keep hearing it. Meanwhile, I am at wits' end after NONE of Codex GPT-5.4 on Extra High, Claude Opus 4.6-1M on Max, Opus 4.6 on Max, and Gemini 3.1 Pro on High have been able to solve a very straightforward and basic UI bug I'm facing. To the point where, after wasting a day on this, I am now just going to go through the (single file) of code and just fix it myself.
DougMerritt: > I am now just going to go through the (single file) of code and just fix it myself.
That's front page news, in this era.
strangescript: Their concerns weren't completely off base; I think they just overestimated how much it would really matter in the grand scheme.
SpicyLemonZest: We got extremely, extremely lucky that society is as resilient as it's proven to be against fake news. I don't think very many people predicted that it simply wouldn't matter when photorealistic compromising images of whoever you don't like became available for $5.
apical_dendrite: I have a lot of trouble understanding the mindset of a person who thinks that what they're building is so dangerous that it must be locked away or it will cause untold harm, but also that they must build it as fast as possible. I can understand it in the context of the Manhattan Project, where you're fighting a war for survival. I cannot understand how you can do it as a commercial enterprise.
renewiltord: No, that's not true: https://huggingface.co/openai/gpt-oss-120b was released after.
saltyoldman: > I am now just going to go through the (single file) of code and just fix it myself.
You can't, it's all vibed; you'll face the art-vs-build internal struggle and end up re-coding the entire thing by hand.
raincole: They were more than right; they were correct in an intentional, precise way. This is what OpenAI actually stated [0]:
> Synthetic imagery, audio, and video imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns.
> The public at large will need to become more sceptical of text they find online, just as the "deep fakes" phenomenon calls for more scepticism about images.
It ended up just like that.
[0]: https://metro.co.uk/2019/02/15/elon-musks-openai-builds-arti...
johnfn: Yeah, I find it a bit odd how at the time everyone was pointing and laughing at OpenAI for being obviously wrong about this. Now in 2026, AI slop is very obviously a serious problem - it inundates all platforms and obscures the truth. And people are still saying OpenAI in 2019 were wrong?
Grimblewald: Both crowds are right, because two messages were spread. The researchers spread reasonable fears and concerns. The marketing charlatans like Altman oversold the scare as "Terminator in T-4 days" to imply greater capability in those systems than was reasonably there. The problem is that the most publicly disseminated messaging around the topic was the fear-mongering, "it's god in a box" kind. Can't argue with the billions in funding heisted via pyramid scheme for the current GPU bonfire, but people are right to ridicule, while also being right to point out that the warnings were reasonable. Both are true; it depends on which face of "OpenAI" we're talking about, the researchers or the marketing chuds.
SilverSlash: I understand how laughable that sounds when I say it out loud. But the reality is, when I'm in a 'tell LLM what to do, verify, repeat' loop, it's sometimes really hard to break out of it and do manual fixes. Maybe the brain has some advanced optimization where staying roughly inside a loop has lower impedance than starting a new one. Maybe that's why the flow state feels so magical: it's when resistance is at its lowest. Maybe I need sleep.
chris_va: I think people today are less focused on whether OpenAI was right or wrong, and more on the fact that it went ahead and released a model it had called "too dangerous to release", as part of the general trend of criticizing OpenAI for not following any of its stated principles.
cal_dent: AI can/will be both incredibly revolutionary and also a grift....
mrcwinn: It's this crowd having it both ways. The default desire is to dunk on AI, however inconsistent the arguments.
TehCorwiz: The quality hasn't changed. The volume has. It used to take real human time to create garbage, and there was value in that. Someone thought, "Hmm, what worthless thing can I do? I know! I'll make people online mad." And then they spent the time getting someone else's goat. It was great. A good balance: spreading lies took some minimum effort. Now we have automated garbage, and the flavor of the garbage is gaslighting people with an illusion of community. We've empowered the trolls with an infinite meme-o-rater while ignoring the real human time spent unwillingly sifting through the ever-increasing pile of worthlessness. The world does not have to get worse. We're letting it, though.
monkpit: > We're letting it, though.
It would be nice if "we" had anything to do with it. Just think about the next campaign trail for any superpower; it's going to be a disaster of fake news and slop coming from all over the globe.
cinkhangin: Maybe that's true, but I think before LLMs became common, people had more distinct ways of expressing themselves, low-quality or not. Now a lot of online writing feels uniform, and I think that is worse.
PaulShomo: What a blast from the past. You have to take yourself back in the ol' time-machine to remember that 2019 mindset. People were probably still reeling from a few years prior when the Microsoft Tay bot made news for soiling twitter with naughty tweets.
cat5e: Is anyone keeping a history of this AI "summer"? I'm sure the timeline would be very amusing.
rain-princess: I told my manager in a check-in that I wrote my code line by line (most of it). I showed him the @author line with my name, and we laughed for a bit. But I think that is the best way to keep a clear mental model; otherwise, no matter how careful you are, tech debt keeps building and churning. Also, they really suck at UI bugs and CSS. Unit test that stuff.
Sunspark: The current "too dangerous" hype is Anthropic's Mythos. They say it is so mighty that they will wall it off and only grant access to approved corporations.
ModernMech: Ah yes, corporations, famously the right hands to wield mighty weapons.
Sharlin: I think it’s called "sunk cost fallacy".
Zetaphor: They don't need an excuse not to open the model weights (unfortunately). As far as I know, the only Western lab to release the weights of a former flagship model is xAI, with Grok 2. They said they were going to do the same for Grok 3, but nothing so far. They have no obligation to do any open releases; it's just good PR for recruitment, fundraising, and devrel.
derangedHorse: I had a problem that required a recursive solution, and Opus 4.6 nearly used all my credits trying to solve it, to no avail. In the AI apocalypse, I hope I'm not judged too harshly for my words near the end of all those sessions lol.
jeswin: > a very straightforward and basic UI bug
Show us the code, or an obfuscated snippet. A common challenge with coding-agent posts is that the described experiences come with no context, so readers have no way of knowing whether the problem is the model, the task, the company, or even the developer. Nobody learns anything without context, including the poster.
SilverSlash: That's hard to believe in my case. I tried a variety of prompts and 3 different frontier models, provided manual screenshot(s), and the agent itself also took its own screenshots from tests during the course of debugging. Nothing worked. I have now fixed the bug manually, after 15-20 minutes of playing around in a codebase where I don't know the language and hadn't written a single line of code until now.
jcstryker: This marketing strategy is getting tiring; every model is "more dangerous" than the last... Playing on fear, instead of on the bright future you're opening up for us all, is not the feeling I would want to leave the public with.
bluefirebrand: The fact that they knew they were shitting in the public well and did it anyway pisses me off. What colossally selfish assholes. Hang them all.