Discussion
PaulHoule: ... been saying this for years. If you really believed what Yudkowsky says you wouldn't just be posting on lesswrong, you would be taking direct action against a clear and present danger.
jmull: No you wouldn't. Look at what the molotov cocktail guy accomplished by "taking direct action against a clear and present danger": nothing, besides casting himself as an extremist nut and increasing resistance to his viewpoint in the population at large. It's downright dumb to attempt to impose your will via unilateral violence when you aren't in a position to actually complete the goal. Note that that goes whether you're actually right or not.
hax0ron3: > casting himself as an extremist nut, increasing the resistance to his viewpoint in the population at large.

I think the majority of the population at large either doesn't care about what happened or wishes that the guy had actually managed to kill Altman. Not even necessarily because Altman is involved with AI, but just because he is extremely rich. I don't imagine any increased resistance from the population at large; they either don't mind when rich people are killed or love it. The exceptions would be people like entertainers, who develop a parasocial relationship with the public and provide direct joy to people, but AI company leaders don't fall into that category.

That said, it is true that killing Altman would almost certainly achieve nothing. See my other post in this thread.
arduanika: This has been decades in the making. We had premonitions of the violence that would come, for example with the Zizians. Get ready for what happens when a million blog posts' worth of bad philosophy, bad metaphysics, and bad heuristics are deeply indoctrinated into a vast, decentralized network of highly capable engineering minds.

They hate the framing that LLMs are just stochastic parrots, which is ironic, because Yudkowsky's many parrots are a latent network of stochastic terrorists. That network has just been activated.
tcoff91: I have a different perspective on this, given that I view climate change as the biggest threat we face as a species.

I feel like robotics is the only hope we have to be able to scale action against climate change. It's clear that emissions reduction is just not going to happen, and catastrophic warming is inevitable. Therefore we will have to do an extraordinary amount of labor to modify our environment, to save civilization from sea level rise, and to repair damage caused by natural disasters. There just aren't enough humans to do everything that is going to need to be done.

It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.

Given that we need better AI in order to make these robots happen, I view AI as a critical technology that we need to maintain civilization.
dpark: > It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.

Yes, but also 100k firefighting robots is kind of a lot. How many firefighting robots should exist in the world? And how many seawall-building robots for the rising sea level? And how many other robots? At what point does the environmental cost of all these robots offset their benefits?
throwaway27448: Obviously, ineffective action will be counterproductive. I recommend effective action.
irishcoffee: The firefighting robots of which you speak already exist: https://www.howeandhowe.com/civil/thermite
graemep: That is interesting, and I think you are right that emissions reductions will not happen any time soon (eventually, but it will take a while).

I am not convinced we need robots. A lot of it is not all that hard to do. For example, better forestry management to prevent forest fires. A lot of cities rebuild big chunks of their infrastructure over a century or so anyway. The problem is more social and political - you get worse forest management because you can blame climate change when it happens.
derektank: Wouldn't geoengineering through stratospheric aerosol injection (likely with sulfates) be both cheaper and less technically challenging than changing the built environment? If we're accepting massive climate changes anyway, it seems like taking the risk with solar radiation modification would be the next step.
vrganj: Yeah, I mean, Lenin recognized that a century ago. The only meaningful way to effect change against the oligarchy is, and always has been, violence. This is not a novel insight.
MostlyStable: It is completely coherent to think both that an extremely bad thing is coming and that this does not justify any particular action. Entire religions have literally been built on the concept that "the ends don't justify the means." It is not irrational or incoherent to believe that even something as serious as extinction does not justify arbitrary action. Someone _may_ decide that it does, but it is not a necessary conclusion.

And that is completely aside from the many, many (in my opinion convincing) arguments that such acts of violence would not be effective anyway.

This article is a much better (and much longer) extension of the argument, and a direct refutation of the OP article: https://thezvi.substack.com/p/political-violence-is-never-ac...
Joker_vD: > "The ends don't justify the means"

Eh. The ends do justify the means, but only inasmuch as those means actually help to achieve the ends. Astonishingly often, they don't (and, more rarely but still often, they actually move you in the opposite direction from those end goals), and so they remain unjustified.
MostlyStable: I personally believe quite strongly that some things are just immoral on their face, and that I would rather fail/die without using them than succeed/live while using them. I agree that in very many cases where people do these things they are, in the long run, counterproductive, but I also believe that even if it could be conclusively proven that this wasn't the case, I would still advocate against their use.
hax0ron3: I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity. I just don't think the likelihood of that happening is as high as he thinks it is. But there certainly is a chance.

The problem with trying to stop it is: how? Even if you killed every single AI company leader and every single top AI engineer, it would almost certainly just slow the rate of progress in the technology, not stop it. The technology is so vital to national security that in the face of such actions, state security forces would just bring development of the tech under their direct protection, Manhattan Project-style. Even if you killed literally every single AI engineer on the planet, it's pretty likely that this would just delay the development of the technology by a decade or so instead of actually preventing it.

The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build it. No key actor thinks that they have the luxury of not building the technology, even if they wanted to not build it. It's very similar to nuclear weapons in that regard. You can talk about nuclear disarmament all you want, but at the end of the day, having nuclear weapons is vital to having sovereignty. If you don't have nuclear weapons, you will always be in danger of becoming just the prison bitch of countries that do have them. AI seems to be growing toward a similar position in the calculus of states' national security.

I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.
squigz: > I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.

Can't you? Haven't many (most?) countries agreed to nuclear disarmament? What about biological weapons? Even anti-personnel mines, I think?
hax0ron3: Those weapons are still all being developed and would be brought out in any actually existential war where they seemed useful. The agreements would last only as long as the wars were not existential, or as long as the various countries involved believed that use of them, and the resulting retaliation in kind, would be more destructive than not using them. But one way or another, countries still develop them.
dweinus: I don't think it needs to be a binary to be effective. Yes, those weapons still exist, but understanding of existential risk and political pressures have slowed them considerably and resulted in a safer, more cautious world.
arduanika: Upvoted because this is an interesting take, but I disagree at least somewhat. I think you should be wary whenever you've narrowed down the options to "in order to solve the top-priority problem X, our only hope is solution Y."

I agree that some technological solution might be the key to dealing with the climate, and that maybe robots would be part of such a solution, maybe powered by similar techniques as the current wave of AI. It's not an insane scenario, but it's worth keeping your perspective open to other possible developments.
morningsam: > The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build the technology. No key actor thinks that they have the luxury of not building the technology even if they wanted to not build it.

I don't remember who, but someone made an interesting point about this around the time GPT-4 was released: if the major nuclear powers all understand this, doesn't that make nuclear war more likely the closer any of them get to AGI/ASI? After all, if the other side getting there first guarantees the complete and total defeat of one's own side, a leader may conclude that they don't have anything to lose anymore and launch a nuclear first strike. There are a few arguments for why this would be irrational (e.g. total defeat may, in expectation, be less bad than mutual genocide), but I think it's worth keeping in mind as a possibility.
jmull: People are basing their entire world view on not understanding the nature of exponential phenomena.

Exponential phenomena only begin in a medium that holds the potential for that phenomena, and they necessarily consume that medium. That is, exponential phenomena are inherently self-limiting. The bacteria reach the edge of the petri dish. When all the nitroglycerin is broken up, the dynamite is done exploding.

That doesn't mean exponential phenomena aren't dangerous -- of course they can be. I mentioned dynamite, after all. And there are nukes. But it's really far from "AI is improving exponentially now" to "AI will destroy everyone".

I see AI companies consuming cash at unsustainable rates. Since their motive is profit, this is necessarily limiting. Cash, meanwhile, is a proxy for actual resources, which have their own, non-exponential limitations -- employees, data centers, electricity, venture capitalists with capital, etc.

AI isn't going to keep improving exponentially -- it can't. Like every other exponential phenomenon, it will consume the medium of potential that supports it (and rather quickly).
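(Editor's note: the petri-dish point above can be sketched numerically. A minimal illustration, not from the commenter: the same growth rate produces runaway numbers when the medium is treated as infinite, but saturates when each step consumes the remaining capacity, i.e. a logistic curve. Parameters are arbitrary and chosen only for illustration.)

```python
def exponential(x0, rate, steps):
    """Pure exponential growth: each step multiplies by (1 + rate)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1 + rate))
    return xs

def logistic(x0, rate, capacity, steps):
    """Same growth rate, but scaled by the fraction of the medium left.

    Growth stalls as x approaches `capacity` -- the "edge of the petri dish".
    """
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + rate * x * (1 - x / capacity))
    return xs

exp_curve = exponential(1.0, 0.5, 40)
log_curve = logistic(1.0, 0.5, 100.0, 40)

# The exponential curve blows past any bound (1.5**40 is over ten million);
# the logistic one, with the identical growth rate, flattens out near 100.
print(exp_curve[-1] > 1e6)               # True
print(abs(log_curve[-1] - 100.0) < 1.0)  # True
```

The two curves are indistinguishable at the start, which is the commenter's point: observing "exponential improvement now" does not tell you whether you are on the unbounded curve or the self-limiting one.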
greenavocado: > People are basing their entire world view [on things getting worse because their leadership is abandoning them or actively working against their interests]

We understand hard times and are willing to work together to solve problems, but not when leadership is actively harmful.

Fixed that for you.
morningsam: Yudkowsky himself also posted a rebuttal today: https://x.com/ESYudkowsky/article/2043601524815716866
doctorpangloss: I find all of this stuff very interesting, but nonetheless these two voices sound like they could never win an election and don't aspire to. That is the ultimate test of the worthlessness of a policy: it's all equally worthless until it wins an election, and that's what makes it reality.

AI Doomerism versus Accelerationism are both playful fantasies. It doesn't really matter what measurements or probabilities or observations they make, because the substantive part is the policies they advocate for, and policies are meaningless (all equally worthless) until elected.

What am I saying? The best rebuttal is: get elected.
hn_throwaway_99: The older I get, the more I get the sneaking suspicion that statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure that only a very few people in power can commit violence.

An ongoing conflict has resulted in the violent deaths of literally many thousands of children. The people who enable those deaths are usually safely ensconced thousands of miles away, often living in cushy suburbs.

To emphasize as strongly as I possibly can, I am not advocating for more violence. Quite the contrary, I'm advocating for less. I just don't understand why we have all these adages to convince people that "violence is always wrong," while I'm sure at least some of the people who say that are actively engaged in building machines designed to kill people.
jmull: That's a completely separate point, is it not? Maybe write it up and post a top-level comment if you think it's a point worth making.
nitwit005: Mentally ill people often have a justification for their actions which is vaguely rational, but you'll notice the vast majority of people aren't doing what they're doing. These people just get attracted to political causes somehow. Even the women's suffrage movement had some people setting buildings on fire.
kelseyfrog: War is a mere continuation of policy by other means [1]. When policy through legislation is empirically impotent [2], calls to continue attempts at a failed strategy are indistinguishable from being told, "continue losing."

There is a real, undeniable build-up of political tension. When it fails to be released in the legislative arena, it doesn't dissipate. When we point out that "the quality of life right now is the best it's ever been," it doesn't dissipate. When we try to crush it, it doesn't dissipate. The last remaining pressure release is violence, however condemnable it may be. Perhaps we should, you know, fix participatory democracy rather than pontificating on a natural outcome of a machine we created yet refuse to fix. If fixing it continues to be more difficult than eliminating violence, we should continue to expect violence.

1. https://oll.libertyfund.org/pages/clausewitz-war-as-politics...
2. https://archive.org/details/gilens_and_page_2014_-testing_th...
unethical_ban: "Those who make peaceful revolution impossible will make violent revolution inevitable."

Wealth inequality isn't just about economic wellbeing but political power. Separately, the US legislature is almost entirely crippled, only able to pass one major bill per presidential term, while the dominant political party celebrates this and cedes all power to an executive whose intention is to tear apart the administrative state and bring about techno-feudalism.

I once again note that none of the AI leadership has even tried to address government policies to guarantee a baseline of economic wellbeing for our citizens, even while they acknowledge AI will likely have massive, disruptive impacts on society and the economy. Anthropic is the only one that has shown any public concern for the dangers of AI, by insisting on some moral baseline for AI use in the Defense Department.
arduanika: > "fix participatory democracy"

Ah yes, a popular codeword for "I did not get my way."

There is no electoral majority behind the AI doomer cult. It is not a failure of "democracy" that they haven't gotten what they want. It is a failure of their activism, or just the general unpalatability of their wild ideas, or both. They don't get to throw Molotovs just because they lose.
kelseyfrog: Ah yes, "continue losing."

Go ahead and read Gilens and Page and tell me participatory democracy is working. Until then, expect more of the same impotent condemnations and a refusal to understand the social mechanics producing acts of violence.
dpark: Ah, yes. Let us spray more sulfates into the air. Let’s fight global warming by poisoning all the waterways and oceans with more acid rain.
derektank: The sulfate concentrations required to meaningfully reduce solar radiation are orders of magnitude below the level that causes acid rain. The Tambora eruption didn't result in global acid rain (though it did in Indonesia, naturally) while cooling the globe by at least half a degree Celsius, if not more. And on top of that, there are other possible aerosols we could use, like calcium carbonate.
janalsncm: Your reasoning makes sense under a regime of infinite games. In other words, the goal is to continue playing the game rather than to win once. These people do not believe we are in an infinite game. They believe they have a narrow set of moves to avoid checkmate, and apparently getting rid of Sam Altman is one of them.

I will suggest another reason, though. I don't think the death of Sam Altman, or even the dissolution of OpenAI, would stop the continuation of AI development. There are too many actors involved, and too many companies and nation states invested in continuing AI development. Even if Eliezer Yudkowsky became president of the United States, he could not stop it.
xrd: That was really fascinating. Thanks.
arduanika: I am aware of their arguments, yes, but what I'm objecting to is that you're bringing this irrelevant hobbyhorse into a discussion of a truly fringe ideology. We're not talking about a classic G&P-style issue where the voters and the elites disagree. Nobody cares for the AI doomers -- not elites, not voters, nobody.

When you talk about "participatory democracy" in a thread like this, you are enabling them in their delusion that people do care. The AI safetyist think tanks put out these push polls trying to convince themselves that voters care about AI doom. They seal up the walls of their echo chamber, and they believe themselves to be heroes. Then one day, one of them throws a Molotov, and nobody is surprised.
atmavatar: > "The ends don't justify the means" and literal entire religions have been built on this concept.

Most religions rely on a supernatural force judging us post-mortem to balance out the rights and wrongs done during life. The problem with this, of course, is that there's zero evidence this force exists, and relying on it to right the wrongs in life only serves to prevent the masses from attempting to correct the wrongs themselves, either directly via vigilantism or, more importantly, by replacing existing systems with ones which will serve them better.

I'm all for fixing things first via the soap box and the ballot box, but sometimes the ammo box is the only resort left.

> The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants. - Thomas Jefferson

I don't believe we're at that point in the US, but I could certainly understand someone making that claim for a country like Iran.
janalsncm: > The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.

When the British cavalry came to Virginia in 1781, Thomas Jefferson famously fled the governor's mansion.
BurningFrog: I agree, but it's only half of the equation. Your solution also can't be worse than the problem it solves!

Overly clear example: killing your noisy neighbors actually achieves the end of a quiet home. But that really doesn't justify it.
solaarphunk: This is just a version of individualism vs. the state. Much of western society has become increasingly confused about what violence is acceptable, let alone who should be allowed to commit violence, or have a monopoly on it.

If we can't agree on that baseline, then it's quite obvious that we'll continue to have an escalation in the types of violence that we've seen in the past few years against the political and corporate classes in the US, with very little end in sight.
f1shy: Thanks. That sentence is constantly repeated as if it were some kind of absolute truth. The fact is, for every end, there will probably be some means that are totally justified, and some that are not. I think the original context is: no matter how high, pure, and perfect the end is, that does not mean any means is justified.
kgwgk: According to Joker_vD, it's only the means that won't help that wouldn't be justified.
sleepybrett: > There is no electoral majority behind the AI doomer cult.

How can you be sure? Has anyone polled it? Are they too scared to poll it?
handoflixue: I found the last paragraph a fairly good summary of a rather long post:

> How certain do you have to be that your child has terminal cancer, before you start killing puppies? 10% sure? 50% sure? 99.9%? The answer is that it doesn't matter how certain you are, killing puppies doesn't cure cancer.
matthewdgreen: Eliezer Yudkowsky has gone so far as to say that it might be OK to kill most of humanity (excepting a "viable reproduction population") to stop AI. If that's not just talk, then this line of reasoning only gives you a few possible modes of action. I would not be worried about the people with Molotov cocktails, but I'd be very worried about bioterrorism.
switchbak: China is rapidly building out their nuclear arsenal as we speak, and the USA is undergoing an expensive replacement process of theirs as well.That kind of idea might have held water in the 90's, but that's not the world we live in any longer.
eemax: > The Rational Conclusion of Doomerism Is Violence

No it isn't. The most prominent "doomer" has a strong grasp of, and a deep, wholehearted appreciation for, the principles of liberalism and the rule of law: https://x.com/ESYudkowsky/status/2043601524815716866

Which the author of this piece of slop appears to lack.
arduanika: It is true that only Yudkowsky gets to say what the rational conclusion of his ideas is. Nobody else gets to speculate. Only the pope of rationalism, because he's the rational one here. See? It's right there in the name!

> this piece of slop

Citation needed. Or maybe we need to update the title of that children's book for internet arguments: Everyone Who Disagrees With Me Is Slop.

The Yud post you linked is not slop, either. It's not LLM-generated, nor is it insincere. But I do have to point out: he's the one who is slinging the tsunami of words here, not Alexander Campbell.
eemax: > It is true that only Yudkowsky gets to say what the rational conclusion of his ideas are. Nobody else gets to speculate. Only the pope of rationalism, because he's the rational one here. See? It's right there in the name!

No, I am saying that Yudkowsky's views are straightforwardly compatible with bedrock principles of liberalism, and the author of the piece fails to acknowledge that compatibility or grapple with it himself. It's not about "rationalism" or who is "allowed" to speculate.

I called it slop because it says false things that have the hallmark of LLM style, e.g.:

> The Sequences build the liturgy: a small caste of correct thinkers, epistemically and morally superior, whose rationality entitles them to govern what the rest of humanity is allowed to build. It's not a safety movement. It's a priesthood with an origin story written in fanfiction.