Discussion
The Closing of the Frontier
keybored: > The Anthropic Mythos announcement is the first time in my life I've felt truly poor. Maybe because I grew up on the internet and it was the one permissionless place where you could have leverage and a shot at uncapped exploration and ambition. That is now changing with the gap between models that are publicly available vs those reserved for the already wealthy and pre-established.

The Internet was developed by the US state sector and handed off to the private sector in the '90s. Then it worked as an open space until it didn't anymore. Predictably, driven by corporate interests.

> In 1893, Frederick Jackson Turner argued that much that is distinctive about America was shaped by the existence of free land to the West where anyone could start over, and that this condition infused America with its characteristic liberty, egalitarianism, rejection of feudalistic hierarchy, self-sufficiency, and ambition.

A more asinine comparison could not have been picked.
derektank: Most important point in the piece (though I'm not sure the historical analogy to the grid holds, given that local electricity production was unavailable for most of the grid's history):

> You can generate your own electricity with a solar panel (think local models), but most people would rather pay a utility bill. And the power company doesn't decide, on the basis of pedigree, who is worthy of electricity. Intelligence should work similarly: the capabilities you can access may scale with vetting and due process, but the presumption should be access. Add safety guardrails to restrict dangerous use; start by making them overly trigger-happy if you must, and calibrate over time. But the default should be to allow entry.
cortesoft: This feels really premature. The announcement was a week ago. The "this model is too powerful for the general public" line sounds like marketing to me.

Give it a few months and it will be just another model they are selling, but the NEWER model is just too powerful for the public.
bluefirebrand: Honestly, the mere idea that there could even be a model that is "too powerful for the public", as in a model that will be kept in the hands of the powerful to leverage against the public, should make people furious.

How much more obvious can these companies make it that they loathe us and want to keep us down however they can?
wyc: > A 16-year-old with no credentials and no capital could just do things. The world of bits offered the freedom to build without being drowned in arbitrary constraints, in a way that didn't require assembling vast capital or prestige or connections, where your creativity and work could speak for itself, and you had agency.

This is now truer than it ever was.
OldGreenYodaGPT: The problem is Anthropic doesn't have the compute to deploy this model at scale to everyone yet. Dario didn't believe they needed as much compute; OpenAI is going to have much more compute unlocked this year and especially next year.
fwipsy: Security researchers always having a model one generation newer than the general public would still achieve the stated goals.
Ancalagon: The point is it won't be, if these new models stay locked away from the public.
bredren: We saw yesterday that expert orchestration around small, publicly available models can produce results on the level of the unreleased model.

I take a contra view and instead see this as fuel on the fire for tinkering to squeeze advanced functionality out of more available things.

It has always been like this: the amateur improvising tooling and equipment to outdo companies with comparably infinite resources.
shevy-java: > Even though the American dream is nearly dead

That dream was always a lie. But in the past, people had more purchasing power; you only need to look at income versus housing costs in, say, Canada.

Realistically there should not exist any superrich, but this seems hard to change. That means a different promise of society needs to be offered. Other countries manage that. In the USA they have the orange oligarch who said a while ago that there is no money for health care because he has to invade countries and wage war. So much for the "no more wars" promise.
fwipsy: > A 16-year-old with no credentials and no capital could just do things.

Yes, but today a 16-year-old building a unicorn is about as likely as winning the lottery.

The opportunity from the early days of the American frontier is not typical. Instead, it's the brief burst of unrestrained growth as a better-adapted organization (the US, software companies) expands into, and expands, a niche, cannibalizing the previous occupant (Native Americans, older stagnant companies). At times growth is so rapid that individuals are able to advance the frontier, but eventually they will be replaced by organizations.

So, opportunity for individuals comes from disruption. Creative destruction is good up to a point, but it results from advancing capabilities. Technological advances compound and accelerate exponentially. Eventually we reach the point where any malcontent can destroy the world by snapping their fingers. At some point we need to place restrictions on the capabilities accessible to individuals. We have reached that point with nuclear weapons, and I think it is sensible to believe that AI is reaching that point as well.
hn_throwaway_99: I feel like "this model is too powerful for the general public" was really just the equivalent of responsible disclosure, with the "too powerful" bit just a positive marketing spin like you say.That is, Mythos will make it much easier to find lurking zero days, so just like responsible disclosure requires a security researcher to notify the software author first and give them some time to patch, giving critical infrastructure folks at least some time to analyze and patch systems seems reasonable to me.
zkmon: You never needed a godzilla or a megatron to get on with your life. But the sellers of those monsters would make every attempt, in connivance with the authorities, to make it a basic necessity to use their services. That's a survival strategy for the monsters. The owners can't keep the monsters in cages for too long, even if the owner is a state actor.
simianwords: While I don't think Mythos is so powerful that it justifies permanent containment, I wonder how it might work when there is an AI that does justify containment.

What if this new model can start proving Millennium problems and provide insights in other fields that were not possible before?

My intuition says that a model that good will also be equally well aligned, but it is still highly risky to give it to the general public, because all you need is one jailbreak by bad actors.

At that point I think society would change so dramatically that "access to the general public" would be a non-issue. Rather, time would be spent on making abundance happen; you might think of the political struggles, economics, and new ventures.

It's a bit sad that democratised access is not provided because of negative-sum possibilities like cybersecurity.
Flere-Imsaho: Back in 2019, OpenAI delayed the release of GPT-2, citing "fake news generation and potential misuse". I guess we have those in buckets now?
lowbloodsugar: In other, underreported news, companies like AirBnB are using open source models. Anthropic and OpenAI have a six-month to one-year advantage over Qwen and other models. We reached the point a while ago where Anthropic models were good enough, and so now, inevitably, we've reached the point where open models are good enough. The boasting of models so good they can't be shared was propaganda to frame the conversation. But for anyone paying attention, what matters is that open models are now good enough.
jabedude: This happened before with GPT-2 being touted as "too dangerous to release"[0] at the time by OpenAI. I don't think that means every model will be safe to release in the future, but nothing I've read about Mythos seems like it's going to be different this time.[0]: https://openai.com/index/better-language-models/
operatingthetan: I doubt that Mythos is so wonderful and so impossible to replicate that we are actually being "closed off" from frontier models.

For example, the people Anthropic "trusts" with this "dangerous" model are a handful of Fortune 500 companies? Seriously? Those are the people we trust?

We are going to have access to this within 6 months, and if we don't, someone else will offer an equivalent. Anthropic hasn't walked to the edge of the abyss only to be like "let the CEOs handle this!"
andyfilms1: It has always baffled me how quickly, and how voraciously, people started to rely on privately owned AI systems.

AI is not something discovered by scientists and plucked out of the ether. It's engineered and controlled, for profit, by corporations which have demographics and KPIs. These companies don't owe you anything, and they make no promises.

If you're running a business that deeply relies on AI, you might as well add Sam Altman to your board of directors, because he has just as much control over your company as you do. If they have a bad quarter and need to increase rates by 1000%, your choices are to pay up or shut down.

This Mythos situation is just the beginning. Not only do they have everyone hooked, but they've actively stalled the personal skill growth of millions of people who fell into vibe-coding rather than genuinely learning. And now they have that choice: pay up, or shut down.
enraged_camel: >> We saw yesterday that expert orchestration around small, publicly available models can produce results on the level of the unreleased model.This is false. Yesterday's article did not actually show this, and there are many comments in the discussion from actual security people (like tptacek) pointing that out.
CuriouslyC: The irony is that we've just shifted the complexity. Anyone can make something now, but since everyone is making things, now you need to compete on reach/distribution more aggressively. The new "capital" is social media juice and pre-AI rep. Same problem, different skin.
rdedev: > already happening with recursive self improvement

Are any AI labs claiming this?
hgoel: I interpreted their "too dangerous to release" comment as a statement about the current situation. If the model is truly as capable as they say, and the security issues so numerous, it makes sense to hold out until the biggest targets have been patched.

It'd only take one company deciding not to worry about safety to change the calculus back to "we have to release this to stay competitive".
hn_throwaway_99: I admit I didn't read the entire post (I honestly think authors really need to come to terms with the fact that we now live in a world of information excess, and pithiness is more important than ever), but I wouldn't feel too bad yet given there was a recent front page HN post about how free, open models could actually catch all the issues Mythos did, it just required a little more orchestration. E.g. see https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jag... for a detailed analysis.
airza: If an 1800-word post is just too long, I think you are cooked. This is the nicest thing I can say on the subject.
hn_throwaway_99: It's not that 1800 words is too long, it's that I've seen probably 40-50 (at least) posts, analyses, and bloviations about Mythos since it came out. If the author doesn't very quickly get to why I should read their particular 1800 words over the other similar and competing tens of thousands of words on the subject, they are "cooked".
operatingthetan: Open gemini chrome sidebar, type "sum." Watch the magic happen.
skybrian: It's long been conventional wisdom that you shouldn't write your own crypto libraries; leave that to the experts. But excellent open source libraries are available, which do get reviewed by experts. And if you're willing to study, maybe you can learn enough about cryptography to become one of the experts?

I'm wondering what other security-sensitive software that might become true of in the era of Mythos-or-better AIs.

There will still be open source projects that anyone could learn enough to contribute to, but maybe starting from scratch and writing your own becomes less feasible if you can't attract attention from people with access to the best AIs.

For example, Linux patches are going to get expert reviews, but maybe your homegrown OS won't?
avaer: Whatever is in Mythos will be open source in 6 months to a year, tops. You might not have the GPUs, but you won't be locked out of the capability.

We're still not at the point where one person with a coding agent can burn through their entire salary on effectively used credits, so the capability is still well within reach of the vast base of the industry. Meaning most people who want to pay for a product (which I still think is pretty reasonably priced for what it does) will be able to get the product.

For now the economics will make sure of that. The market is ripe for someone to basically copy the likes of Mythos and price it competitively.
atleastoptimal: I think we are going to look at the era between 2019-2025 as a very rare blip in the history of public AI access. Regardless of whether fears about Mythos end up being justified, the clear trend is:

1. AI models are becoming better and better at causing massively disruptive effects, opening up larger and larger liabilities, especially as laws and regulations are passed or proposed that would put the responsibility for some mass disruption/hacking event on the company that serves the model that made it possible.

2. The relative advantage of serving an AI model for inference in exchange for money is waning compared to the advantage of using that model internally for purposes which accrue money/power/leverage for the AI company. Why serve a model at 30 dollars per million tokens when you've discovered you can use that model to run a simulated quant firm with a net profit of 300 dollars per million tokens? Why offer the model to companies so they can find zero-day exploits, when you can find them yourself and sell the discovery to companies that would pay millions to avoid having the exploit taken advantage of?

3. Why serve models so another wrapper company like Cursor can make billions off your tokens, and then try to train their own models as fast as possible on your outputs so they aren't dependent on you? The entire AI startup industry and something like 90% of YC batches depend on being able to serve frontier models at a profit, mediated through some wrapper. Why can't OpenAI/Anthropic, once their models are good enough to handle the ideation/organizational problem, become their own incubator for thousands of AI-run startups, running on models far better than the public has access to?

As a consequence, there is less and less incentive over time to offer models as an API to the public.
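The arithmetic behind point 2 can be made concrete with a toy calculation. All figures here are illustrative assumptions (including the inference cost), not numbers from any lab:

```python
# Toy comparison of serving tokens via an API vs. using them internally,
# per the argument above. Every number is a hypothetical assumption.

def profit_per_million_tokens(revenue: float, cost: float) -> float:
    """Net profit per million tokens for a given use of the model."""
    return revenue - cost

COMPUTE_COST = 10.0  # assumed inference cost per million tokens

# Sell inference at $30/Mtok, or use the same tokens internally
# (e.g. the simulated quant firm) netting $300/Mtok.
api_profit = profit_per_million_tokens(30.0, COMPUTE_COST)
internal_profit = profit_per_million_tokens(300.0, COMPUTE_COST)

# Under these assumptions, each million tokens used internally returns
# 14.5x the profit of serving it over the API.
ratio = internal_profit / api_profit
```

With these made-up numbers the lab keeps $20 per million tokens served versus $290 per million tokens used internally, which is the incentive gap the comment is pointing at.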
operatingthetan: > AI models are becoming better and better at causing massively disruptive effects

Anthropic chose to use their model to find a bunch of vulnerabilities. People have since used much smaller models to find the same issues. We are being set up to have certain preconceived notions about this model.

Ripping away AI access from the public at this point would be catastrophic for the world economy. It's just not happening.
avaer: This is the real reason it's not out. But it's great that they were able to spin their lack of resources into an abundance of benevolence and concern for the proletariat.
oofbey: Anthropic marketing is working very well. They are strongly incentivized to say their model is too powerful to release even if it’s not. It’s almost standard practice these days.
p1esk: > your choices are to pay up or shut down

Another choice is to switch to a different model, perhaps an open source one this time.
kay_o: Another choice is to write code and learn. Especially if you are 16 and have all day.
mike_hearn: Probably not. "This model is too powerful for the public" can also be interpreted another way, which they've also strongly hinted at: the cost/benefit ratio of the upgrade is negative for the vast majority of users. Finding vulnerabilities is one of the few cases where it makes sense to use it.

Their writing about the model so far does say this is an issue; for instance, you can't really use Mythos for interactive coding because it's so slow. You have to give it some work, go home, sleep, come in the next day, and then maybe it'll have something for you.

All the AI labs and startups are still losing money hand over fist. Launching Mythos would require it to be priced well above current models, for a much slower product. Would the majority of customers notice the difference in intelligence given the tasks they're setting? If the answer is no, it's not economic to launch.

Really, I'm surprised they've done Mythos at all. Maybe they just wanted to exploit access to larger contiguous training datacenters than OpenAI, but what these labs need isn't smarter models; it's smaller and cheaper models that users will accept as good-enough substitutes (or more advanced model routing, dynamic thinking, etc).
simianwords: This is so untrue. Prestige and connections matter even more now that work is being commoditised, no?
avaer: It seems to follow that when the world is full of slop artists, real artists don't get taken seriously and can't get a job.

The Elons/FAANGs are generally doing fine, though.
tsunamifury: It's the same as it always was.

You could always take the time to do something yourself or pay someone else to do it. You pay others to focus on things you can't.

Unless Mythos fully does that (which I say with full confidence it doesn't), it's just making it cheaper to provide focus.
margalabargala: That's how I'm reading this too. They've made a (much) better Metasploit/Shodan all in one.

If you make a better vulnerability scanner and find a bunch of vulnerabilities, you should try to get them fixed before making all the results public.
camillomiller: The rich are just getting scammed faster this time around.
tim-projects: Anthropic should provide a specific service where they attack a business's infrastructure using this frontier model and then issue a report of all vulnerabilities found. I could imagine it would be quite lucrative.

Much better than hiding the model away where it can't help anyone.
avaer: Hypothetically... I'm joey joe bob, who happens to be maintainer of a top 10 npm package. My wife got cancer so I need money FAST. Unrelated: can you mythos my lib?

1 month later: whoopsie, my lib got hacked and the hackers stole a bunch of stuff. Sorry guys.

It just wouldn't be good PR. And this is the best case scenario.
DrProtic: Let's say the model is indeed groundbreaking but needs much more compute than they have. I wonder whether merging Anthropic and OpenAI would be on the table.
Forgeties79: I didn't realize tokens were free!Jokes aside, this is just a different flavor of the same promise we see with each new technology, and 9/10 times it just ends up in worse professional environments.
layer8: If the model isn't worth the cost for those who might want to make use of it, then it can't be that impactful either.

One thing to compare to would be what's been paid for bug bounties in the past.
psychoslave: It's not clear to me whether the author is talking about the European invasion as the colonization pattern behind the supposed American frontier, as if it were land that no human had ever reached before.
operatingthetan: It's going to be a slightly better Opus. Every model released by any provider since 4o has been a modest improvement but over-hyped, Opus 4.6 included.

I believe they are starting to split hairs, and the primary lever left is adding compute.
enejej: Yeah the only thing that will be left is to scale up compute and pray it creates escape velocity. Which frankly has been Sam’s whole thesis in raising money.
contraposit: I can't run my business without electricity, yet we don't fear its access being revoked. Sam makes the comparison of intelligence to electricity a lot. So we are on the path to these systems becoming utilities.
wonnage: They’ve been shilling the same statement since GPT-2
ajross: Bug bounties don't reflect the market impact of the vulnerability though, just the amount needed to incentivize white hats to do research they wouldn't otherwise (or that they would target to other platforms that pay higher bounties). You need to look at market prices for zero days on the black market to get closer.
notpachet: They could just be writing for themselves, or their friends, or for people with the patience to read. You are making assumptions about how badly they want to reach your particular eyeballs. They might not care about trying to win over people with a minimal attention span as much as you think they do.What makes you think your comment was worth reading?
p1esk: We are talking about running a business. In the business world, no one ever cared where code comes from; the only concern is how much the code costs.
anematode: Electricity is heavily regulated. Is there any evidence that LLMs will be the same?
vector_spaces: > pithiness is more important than ever

I apologize for getting stuck on your parenthetical, but while pithiness is a fine aspiration in a North American business setting, pithy reads generally can't exist without more detailed and nuanced long-form analyses, and the latter face a more dire existential threat. You are right that pithy writing is an important skill, as are slow and deliberative reading and writing of longer-form work.

I'm not claiming the original post is detailed or nuanced, to be clear.
slopinthebag: Underrated comment. I think the future is actually quite bright if we can continue to use open models; even if they are behind closed SOTA models, they are still capable and will continue to improve.
contraposit: On the efficiency side, why do we need 10 companies doing the same thing? The "Don't Repeat Yourself" principle would agree with your proposal.
bombcar: There’s a drain clog clearer sold in a jug like all the rest. But they wrap the jug in a thick clear bag. The implication is clear - this stuff is so powerful it’s extra dangerous.It’s the same stuff inside as all the others.