Discussion
The Cognitive Dark Forest
kadhirvelm: Honestly my hope is that the arbitrage that allowed big tech to make the kind of margins it does on software starts to go away because it's so cheap to build software. In other words, defending the technical moats that we rely on today doesn't make sense in the future because it's not a reliable way to make money. Aka no need to protect your technical secrets because there's no capitalist reason to, lol. Taken further, my naive hope is that societal attention moves away from this layer and onto whatever becomes the new way to make money, and the people left paying attention to software are big on sharing.
king_phil: Dark forest makes no sense to me. Why would a civilization eradicate another, spending huge amounts of resources (time, energy, material) when the universe has such an enormous scale that you cannot even get to each other in a timescale that makes much sense...
piker: Makes some sense to me, as the prisoner's dilemma dictates at least some fraction will try to kill you. So you've got to go first.

Reminds me of the Dan Carlin take on aircraft carriers in World War II: if you, in a carrier, spotted an opposing carrier and didn't send everything you had before it spotted you, you were dead. The only move was to go all in every time.
ginko: > You are creating your cool streaming platform in your bedroom. Nobody is stopping you, but if you succeed, if you get the signal out, if you are being noticed, the large platform with loads of cash can incorporate your specific innovations simply by throwing compute and capital at the problem. They can generate a variation of your innovation every few days, eventually they will be able to absorb your uniqueness. It's just cash, and they have more of it than you.

That's not exactly a new phenomenon and doesn't require AI. If anything that was worse in the 90s with Microsoft starving out pretty much any would-be competitor they could find.

And it wasn't just Microsoft: https://en.wikipedia.org/wiki/Sherlock_(software)#Sherlocked...
noident: The LLMisms in the "thinkpad" section caused me to close the tab
middayc: What LLMisms?
middayc: Platforms cherry-picking successful ideas and stealing them isn't new. Platforms could do this because they had the capital and the platform (distribution).

What is different is that LLM platforms literally have the world's thoughts, ideas, conversations, and a big part of the code. It's like "pre-crime"... they could copy your idea, or capture a trend brewing and replicate it, before you even released it.
scottlawson: The thesis that in the past it was safe to share ideas and projects because the execution was hard, and that now things have changed because of AI, is an interesting idea, but I wonder if it is really true.

It certainly seems true that for small projects and relatively narrowly scoped things, AI can replicate them easily. I'm thinking specifically about blog posts where people share their first steps and simple programs as they learn something new, like "here is how I set up a flask website" or "here is how I trained a neural network on MNIST".

But if AI is empowering people to take on more complex projects, perhaps it takes the same amount of time to replicate the execution of a more advanced project? In other words, maybe in the past it would take me 10 hours to do a "small" project, which today I could do in 1 hour with the assistance of AI. And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means that someone else trying to replicate this execution is still going to need around 10 hours to replicate it.

Basically, I'm agreeing that AI can reduce the barrier to replicating the execution of another person's project, but at the same time, we can make more complex projects that are harder to replicate. So a basic SaaS CRUD app is trivial now, but a multi-disciplinary, domain-specific app that integrates multiple systems is still going to be hard to replicate.
MattDamonSpace: Sure, but the Forest point stands: whatever you can hide from the Forest becomes something that slows it down and allows you some moat, even if only a brief one?
nate: Are you asking about the 3 body problem version of this? Spoiler alert: the folks doing the eradicating aren't spending much time/energy/anything on eradicating. It's one large missile through space.

I think the gist is: sure, we humans can't conceive of getting to anyone else in the universe in any timescale, but if we can keep ourselves from destroying ourselves, we'll eventually figure it out. And we'll spread. And we'll kill everything that isn't us in the process, as we've done as explorers on this planet.

So really in 3BP: it's inexpensive to eradicate, but insanely expensive to possibly misread the intentions of any other civilization you encounter. They might kill you.

(Again, this is just my interpretation of what 3BP said.)
jauntywundrkind: The view here shows big huge powers of technocapital consuming all else, stealing every idea.

My hope is the opposite. Integrative, resonant computing, with open social protocols baked in, seems like it could maybe possibly eat some of the vicious consumptive technocapital, in a way that capital's orientation prevents it from effectively competing with.

People seem so tired and exhausted, so aware of how predatory the technosystems about us are. But it's still so unclear whether people will move, shift, much less fund and support the better world. The AT Proto Atmosphereconf is happening right now, and there's been a long mantra of "we can just build things"; finding adoption, but also doing what conference organizer Boris said yesterday: "maybe we can just pay for things", supporting the projects doing amazing work. That's a huge unknown that is essential to actually steering us out of the dark technology, where none of us get to see or have any say in how the software-eaten world around us runs, where mankind for the first time in tens or hundreds of thousands of years has been cut off from the world OS, has been removed from god's enlightenment / our homo erectus mankind-the-toolmaker natural-scientist role.
0x3f: Competition kills margins (profits, security, QoL), so the budget for eradication should be quite high, but generally speaking the idea is to destroy even fledgling upstarts early, back when the cost is low.
pugio: Thanks, this helped crystallize something for me: the play the AI labs are making is antifragile (in the Nassim Taleb sense):

> The very act of resisting feeds what you resist and makes it less fragile to future resistance.

At least along certain dimensions. I don't think the labs themselves are antifragile. Obviously we all know the labs are training on everything (so write/act the way you want future AIs to perceive you), but I hadn't really focused on how they're absorbing the innovation that they stimulate. There's probably a biological analog...

Well, there are many, and I quote this AI response here for its chilling parallels:

> Parasitic castrators and host manipulators do something related. Some parasites redirect a host's resources away from reproduction and into body maintenance or altered tissue states that benefit the parasite. A classic example is parasites that make hosts effectively become growth/support machines for the parasite. It is not always "stimulate more tissue, then eat it," but it is "stimulate more usable host productivity, then exploit it."

(ChatGPT 5.4 Thinking. Emphasis mine.)
beej71: Makes me think of rebuilding libraries with AI to change the license.
cbau: To quote from the book:

> "First: Survival is the primary need of civilization. Second: Civilization continuously grows and expands, but the total matter in the universe remains constant. One more thing: To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion and the technological explosion."

1. you can never know the intentions of other entities (chain of suspicion)
2. technology level grows unpredictably (technological explosion)
3. the goal of civilization is survival
4. resources are finite but growth is infinite

As soon as you identify another entity in the forest, even if they cannot annihilate you at present and signal peace, both could change without warning. Therefore, the only rational move is to eradicate the other immediately.

Elimination in the book is basically sending a nuke, not a costly invasion force.

Not sure it actually is true, but that's the argument in the book.
zenogais: Might just be independent discovery, but the main idea of this blog post is more or less the exact theory advanced in the recent book "The Dark Forest Theory of the Internet" by Bogna Konior (https://www.amazon.com/Dark-Forest-Theory-Internet-Redux/dp/...).
corv: https://flugschriften.com/wp-content/uploads/2020/07/flugsch...
Hikikomori: A space war is not needed; they could just send a few missiles to take out anyone.

I have my own theory of the dark forest and AGIs: that there's some collection of AGIs out there that allows evolution to develop intelligence anywhere it happens, takes a civilization out once it produces an AGI, and performs a reset if it doesn't. They have literally all the time available to them, and can easily travel the vast distances if needed.
middayc: Well, I didn't know about this book, so I suspect, or hope, that the exact points I make won't map to the ones from the book.

It is true that the original "The Dark Forest" book made an impression on me, so I was often thinking about its theories and trying to apply them to various situations.
zenogais: Yeah, I fully believe independent invention by mapping "the dark forest" onto the internet is very possible.
fer: It's closer to broetry than llmism in my eyes.
nicbou: The problem for me is that I'm competing with the AI results that Google trained on my work. I'm losing the majority of my traffic to it, so at some point I'll have to give up because the work no longer supports me and no longer has an audience.
rhubarbtree: This is misled by the nerd philosophy that the tech is the business. It absolutely isn't; the tech is a small part of a startup. Witness that Spotify continues to exist despite being known and replicated by the major giants.

Poetically expressed, but ultimately based on a false notion of what a business actually is.
gobdovan: Instead of anti-fragility, I'd point you to the law of requisite variety. You'll notice that all AI improvements are insanely good for a week or two after launch. Then you'll see people stating that 'models got worse'. What happened in fact is that people adapted to the tool, but the tool didn't adapt anymore. We're using AI as variety-resistant, adaptable tools, but we miss the fact that most deployments nowadays do not adapt back to you as fast.
sebastianconcpt: Agreed, it's a fiction based on accepting the premise of a zero-sum game.

It denies that more advanced civilizations might have better models of the universe, where they know this isn't an issue and we're just stupid teenagers in the neighborhood playing dangerous games, with them merely taking a look every now and then to see if we prove we will survive ourselves.
middayc: I hope the open source models / crowdsourced approaches to training will also be an important part of the ecosystem, keeping it honest and providing an exit, similarly to what they do for operating systems and other important software.

But I don't see a trend of big companies really opening up. They usually open up only if it benefits them (which can also happen, and did happen in various scenarios). Everybody is accepting and open while trying to grow, and closes up once they can reach a monopoly.
bethekidyouwant: That’s true among human societies as well, but trade leads to more prosperity.
jmull: I really liked those books, for all the creative ideas... it's fine that they don't all work, but the Dark Forest has to be among the worst of them. It was unfortunate it was highlighted.

Some rebuttals, going point by point...

1. You can know the intentions of other entities by observing and communicating with them.

2. Technology explosions, like pretty much all exponential phenomena, are self-limiting. They necessarily consume the medium that makes them possible.

3. and 4. Civilizations aren't necessarily sentient (ours certainly isn't) and don't have agency, much less goals. Individuals have goals, and some may work for the survival of the civilization they belong to. But others may decide they can profit if they work with the aliens.

On 4 again: multiple civilizations may well come into competition over resources, but that's more of an argument for why the forest would not be dark.

Practically speaking, a civilization that opts to focus on massive, vastly expensive efforts to find and exterminate far-flung civilizations because they may become a rival in the future may be easily outcompeted by civilizations that learn to communicate with and work with other civilizations they encounter.
alembic_fumes: > This is the true horror of the cognitive dark forest: it doesn't kill you. It lets you live and feeds on you. Your innovation becomes its capabilities. Your differentiation becomes its median.

Oh no, the terrible dystopia where anyone can benefit from anyone else's good ideas without restrictions! And without any gatekeepers, licensing agreements, copyright, and not even a lawyer in sight!

If this is the dark future that AI use brings for us, I say bring it. Even if it means that somebody gets filthy rich in the process, while making the rest of humanity better off.
AnimalMuppet: It's first-order thinking. Second-order would be to question whether trying to eradicate another race might motivate them to eradicate you, when they weren't motivated to do it before.
movedx: If AI makes replicating other people’s ideas faster and easier, thus allowing capital-heavy market players to just absorb whatever idea you manage to execute, then perhaps, somewhat ironically, the economic moat you’ll have is your human nature, contact, and time? Perhaps we’ll see a shift in sentiment towards wanting to deal with and spend time with the people in the business, rather than just what the business can do for you and yours from a software perspective?I believe the idea of “off-shoring” your IT is a good example of this. My brother works for a business whose clients would drop them the moment they off-shored any aspect of their IT support. Not because of data sovereignty, but simply because they value them being on-shore, in the same time zone, and being native English speakers. And this is despite the fact it would drop the prices they’re paying for IT by 30-40%.
lstodd: And the idea does not make sense once you include incomplete intel in the equation: what if the preemptive strike will not attain complete eradication?

You might or might not fatally cripple the opponent, but retaliation can do that too, and you cannot be sure that it won't. It's MAD all over again.
0x3f: Well, if they're only an upstart, they don't have the ability to destroy you _yet_. You 'nuke' them in the hope they won't get that ability. You're aiming to stop MAD from being a thing.

In those terms, the US should have been nuking and dominating everyone, and the idea was floated after WW2, but I believe they were precluded by practical limitations.

If they had developed the tech outside of wartime, and built up a stockpile, maybe that is indeed what would have happened and we'd have a one-world government already.
xantronix: I have been mulling this over and I think I have some solutions in mind, at least for myself.

• No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see, I will have it printed in a physical book, or I will give them access to my private repository, not reachable via the public Internet.

• Bring back LAN parties. Not for gaming necessarily, but for the purpose of exchanging works of engineering and art in an intimate, intentional way.

• Take this as an opportunity to build closer, longer-lasting relationships with people.

• No more emphasis on metrics. I can microdose on dopamine from natural sources, like looking at a beautiful sky at sunset or cuddling my dog.

• Open hardware, or, at the very least, hardware we can still control of our own volition. If this means we must be retrocomputing enthusiasts, then so be it.
chongli: New models literally do get worse after launch, due to optimization. If you charted performance over time, it'd look like a sawtooth, with a regular performance drop during each optimization period.

That's the dirty secret with all of this stuff: "state of the art" models are unprofitable due to the high cost of inference before optimization. After optimization they still perform okay, but way below SOTA. It's like a knife that's been sharpened until razor sharp, then dulled shortly after.
gobdovan: Is this insider info? The 'charted performance' caught my eye instantly. A couple of things I find odd though: why a sawtooth? It would likely be square waves, as I'd imagine they roll out the cost-saving version quite fast per cohort. Also, aren't they unprofitable either way? Why would they do it for 'profitability'?
lstodd: Point is, you cannot know if they are an upstart (whatever upstart means). It can be misinterpretation, it can be camouflage, it can be anything. But once you rain death, you'd better be prepared to be grateful for what you are about to receive back.
0x3f: Depends on the context. We certainly knew nobody else had nukes.
djeastm: Same here. Knowledge is being commodified.
griffzhowl: > The platform doesn't need to bother with individual prompts - it just needs to see where the questions cluster. A map of where the world is moving.

This was insightful, but is it much different to the kind of data Google and other search engines have had access to for a long time?

And while LLMs might have sped up the rate of code generation, the tech giants have always been able to set a team on reverse engineering whatever they feel like, though they also often just bought up the startup that was producing what they wanted. I guess I'm not seeing exactly where LLMs specifically are creating the dark forest, rather than the consolidated, centralized tech landscape itself.
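As a toy illustration of the clustering the quoted passage describes: a sketch using TF-IDF features and k-means from scikit-learn, with invented prompts standing in for real user data and real embeddings.

```python
# Toy sketch: cluster user prompts to see "where the questions cluster".
# TF-IDF is a stand-in for real prompt embeddings; the prompts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

prompts = [
    "how do I stream video with low latency",
    "best codec for live streaming",
    "fine-tune an open weight model on one GPU",
    "LoRA training runs out of memory",
]

X = TfidfVectorizer().fit_transform(prompts)  # sparse document-term matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for prompt, label in zip(prompts, labels):
    print(label, prompt)  # roughly: a streaming cluster and a fine-tuning cluster
```

At platform scale, the interesting output isn't any single prompt but the cluster sizes over time: exactly the "map of where the world is moving".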
akabalanza: Big up for the reference
orbital-decay: Some of that is rose-tinted glasses.

1. Sharing was never really safe; open source by default only became possible because of SaaS and rent-seeking behavior.

2. The early web (not the internet) wasn't hyperconnected. With the advent of global-scale social media it was immediately obvious to many that this would lead to monoculture and reduced diversity. What was thought to be the information superhighway became the information superconductor with zero resistance, carrying infinite current. Also known as a short circuit.
lstodd: That... was the case for all of four years. And forgive me if I doubt that certainty.
layer8: > Resistance isn't suppressed. It's absorbed. The very act of resisting feeds what you resist and makes it less fragile to future resistance.

On the other hand, if your primary goal is to change the world, or "be the change you want to see", maybe being public and feeding it isn't so bad, especially if others don't?
arkensaw: I don't know, I think it's an overreaction.

> No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see, I will have it printed in a physical book, or I will give them access to my private repository not reachable via the public Internet.

If you have a project you would have open-sourced, and you don't do that for fear that the LLM god will steal it, what's the point of building it at all? We shouldn't be afraid to share things with other humans just because LLMs will possibly use it as training data. So what if they spam out a copy of it, or a derivative?

If we all stop sharing things with each other in case one of us is a robot, we might as well just lie down and die.
chongli: It's not insider info, it's common knowledge in the industry (Google model optimization). I think they are unprofitable either way, but unoptimized models burn runway a lot faster than optimized ones.

The reason it's not a square wave is because new optimization techniques are always in development, so you can't apply everything immediately after training the new model. I also think there's a marketing reason: if the performance of a brand new model declines rapidly after release, then people are going to notice much more readily than with a gradual decline. The gradual decline is thus engineered by applying different optimizations gradually.

It also has the side benefit that the future next-gen model may be compared favourably with the current-gen optimized (degraded) model, setting up a rigged benchmark. If no one has access to the original pre-optimized current-gen model, no one can perform the "proper" comparison to be able to gauge the actual performance improvement.

Lastly, I would point out that vendors like OpenAI are already known to substitute previous-gen models if they determine your prompt is "simple." You should also count this as a (rather crude) optimization technique, because it's going to degrade performance any time your prompt is falsely flagged as simple (false positive).
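For illustration, a minimal sketch of the kind of crude prompt-based routing described above; the model names and the complexity heuristic are invented, and nothing here reflects any vendor's actual implementation.

```python
# Hypothetical tiered routing: a cheap heuristic decides whether a prompt
# is "simple" enough to be served by an older, cheaper model.
# Model names and thresholds are invented for illustration only.

NEXT_GEN = "model-current"   # expensive, state of the art
PREV_GEN = "model-previous"  # cheaper fallback

def looks_simple(prompt: str) -> bool:
    """Crude complexity check: short prompts without multi-step language
    get flagged as simple. False positives here are exactly the silent
    degradation described above."""
    markers = ("step by step", "analyze", "prove", "refactor")
    return len(prompt) < 200 and not any(m in prompt.lower() for m in markers)

def route(prompt: str) -> str:
    """Return which model tier a request would be served by."""
    return PREV_GEN if looks_simple(prompt) else NEXT_GEN

print(route("What's the capital of France?"))   # -> model-previous
print(route("Prove this lemma step by step."))  # -> model-current
```

The cost saving is invisible per request, which is what makes it hard to distinguish from a genuine model regression from the outside.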
entropi: Unless you own the data centers yourself, you only get what they allow you to. And those gatekeepers, lawyers, and licensing agreements, while certainly not perfect, did let people monetize their intellectual work. Also, I think it is incredibly naive to think the owners of the compute and the energy won't play the hardest gatekeeper the world has seen, when the conditions become right.
Skyy93: This article makes no real sense to me.

> You think of something new and express it - through a prompt, through code, through a product - it enters the system. Your novel idea becomes training data. The sheer act of thinking outside the box makes the box bigger.

This was the same before: if you had a novel idea and made a product out of it, others followed. Especially for LLMs, they are not (till now) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after this date goes into the training data but isn't available to the model yet; you only have to be fast enough. And LLMs/AI agents like Claude are exactly what enable the speed you need for bringing out something new.

The next thing is that we also have open source and open weight models that every one of us with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.

> We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.

Why should this happen? The moment you make your idea public, anyone can build it. This leads to greater proliferation than before, when the artificial barrier of having to learn to code prevented people from getting what they wanted or creating what they wanted to create.
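On that open-weight point, a minimal sketch of what consumer-GPU fine-tuning typically looks like with LoRA adapters via the Hugging Face peft library; the checkpoint name is a placeholder, and a real run would still need a dataset and a training loop.

```python
# Sketch: wrap an open-weight causal LM with LoRA adapters so that only a
# small fraction of parameters needs training (feasible on one consumer GPU).
# "some-open-weight-model" is a placeholder, not a real checkpoint name.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("some-open-weight-model")
tokenizer = AutoTokenizer.from_pretrained("some-open-weight-model")

# Low-rank adapters on the attention projections; base weights stay frozen.
config = LoraConfig(
    r=8,             # adapter rank: small r keeps memory use low
    lora_alpha=16,   # scaling factor for the adapter updates
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base model

# From here, a normal training loop (or transformers.Trainer) fine-tunes
# just the adapters on your own data.
```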
0x3f: Four years is plenty of time to start launching. Also, MAD incentivizes disclosure. What would be the point of having secret nukes? Openly having them is the only way to stop the US using its nukes to stop your nuke program, in this scenario.
__MatrixMan__: I think this only applies to a rather narrow set of ideas.

I'm not really interested in pursuing ideas that stop being good if somebody gets there first. If I bothered to design it, it's because I wanted it to exist, and if somebody makes it exist, I should be happy.

So what kind of things does this apply to? Likely, it's zero-sum games, schemes to capture and control other people, ways to be the first to create a new kind of artificial scarcity, opportunities to make a buck by ruining something that has been so far overlooked by other grifters. In other words: bad ideas.

If AI becomes a threat to those who habitually dwell in such spaces, great, screw 'em.
girvo: > If you charted performance over time, it'd look like a sawtooth

People have, though, and it doesn't show that. I think it's more people getting hit by the placebo effect and the novelty effect, followed by the models' by-definition non-determinism, leading people to say things like "the model got worse".