Discussion
Why Do We Tell Ourselves Scary Stories About AI?
vdelpuerto: The framing of "scary stories" misses something interesting: most of the actual operational fear isn't about consciousness or superintelligence — it's about systems that seem to work until they quietly don't.
ggambetta: We tell ourselves scary stories about everything new. Advances in electricity + medicine == FRANKENSTEIN!
nalekberov: > “The last four years have demonstrated that AI agents can acquire the will to survive and that AIs have already learned how to lie.”

Why Harari feels an obligation to comment on everything is of course beyond me, but describing 'AI' as if it takes independent decisions to lie, makes moral judgements, etc. demonstrates either that he has zero clue how 'AI' trains itself or that he chooses to mislead the audience.
yoz-y: Isn't the problem precisely that it does not make moral judgements?

My opinion on all of this is constantly shifting, but right now my main issue is that, like self-driving, it seems 90-95% correct and 5-10% catastrophically wrong. Due to the sheer speed and volume of output it produces I have grown complacent and exhausted, so when I give it simple tasks I assume it is correct, and that is exactly when "it deletes" all of your files.
everdrive: One thing that strikes me, and that I never really see anyone discuss, is that we've been afraid of conscious computers for a _long_ time. Back in the 50s and before, people were quite afraid that we'd build conscious computers, long before there was any sense that we could actually accomplish the task. I think that, much as we see faces in the clouds, we imagine a consciousness where none exists (e.g. a rain god rather than a complex system of physics and chemistry).

Even LLMs, which blow past any normal Turing test methods, are still not conscious. But they certainly _feel_ conscious. They trigger the same intuitions that we rely on for consciousness. You ask yourself "how would I need to frame this question so that Claude would understand it?" You use the same mental hardware that you'd use for a conscious being.

So, you have a historical and permanent fear of consciousness in a powerful entity where no consciousness actually exists, combined with the fact that we have created things which definitely seem conscious. (Not to mention that consciousness could genuinely be on its way soon.)
ACCount37: Are they? Not conscious?

If you list out every prominent theory of consciousness, you'd find that about a quarter rule LLMs out, a quarter tentatively rule them in, and the rest are uncertain about LLMs. And, of course, we don't know which theory of consciousness is correct, or if any of them is.

So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?
0x4e: Agreed. But could it be trained to be deceptive? Especially when we bake advertising into it?
bharat1010: The point about AI companies actively hyping the danger of their own products is something I hadn't really thought about before — it's a strange kind of marketing when you think about it.
ACCount37: It's simple. It's because AI is the scariest technology ever made.

Human intelligence has proven itself capable of doing a lot of scary things. And AI research is keen on building ever more capable and more scalable intelligence.

By now, the list of advantages humans still have over machines is both finite and rapidly diminishing. If that doesn't make you uneasy, you're in denial.
afavour: It does feel like a bizarre moment, where the AI companies are deliberately trying to scare us about their own product in a bid to, I think, show its inevitability? Or to sell themselves as the one responsible power able to constrain it?

It's very odd. "It's going to take all your jobs" is not a great selling point to the everyday public.
bpodgursky: They are being honest and you don't want to deal with the implications, so you stretch for conspiracy theories.The ones at the top are the true believers. Engage with them at that level.
ramon156: I feel like this article is written more towards non-techies. A decent number of programmers have touched coding agents and know it "kind of" does its job. It's good enough for some tasks... I cannot be arsed to figure out how to edit a graph in Drupal, so I ask Claude. Claude fixes it, and it's no more broken than it already was. Win-win.

However, that's where I stop my agent usage. I let ~~Claude~~ GLM do the following:

- Fix tedious tasks that cost me more to figure out than I care for
- Research something I'm not familiar with and give me the facts it found, and even then I end up looking at the source myself
0x4e: Because we don't like uncertainty, and the AI future is uncertain. There are multiple high-probability scenarios.

Because we're seeing how its capabilities increase over time. I find the rate at which I now prefer to go to an AI rather than an UpWorker scary.

Because we, the people, are not in control of it. We're at the whims of whatever it and the tech bros want (technocracy).
xg15: The idea of "artificial beings" in some way or another seems to have been with humanity for a long time already: https://en.wikipedia.org/wiki/Golem
yanis_t: I wish we didn't call this AI, as the term is crazily overloaded.

Those are programs. The only difference is how we write them: not with "if"s and "for"s. We take a bunch of bits that do nothing, then we organize them in a way so that they output whatever it is we want.
jacquesm: The fact that it's a box with a plug and a state that can be fully known. A conscious entity has a state that can not be fully known. Far smarter people than me have made this argument, and in a much more eloquent way.

Turing aimed too low.
saHqtr: And the chatbots don't even pass the Turing test. I've never had a normal conversation with one. It's always prompt => lengthy, cocksure and somewhat autistic response. They are very easily distinguishable.
netdevphoenix: There is a very interesting book that explores the West's generally negative view of artificial intelligence whenever it is portrayed in media (Skynet) while Japanese media tends to have a positive view (Astro Boy).
fontain: The world we live in is a construct, not a natural outcome. Even if we take your premise at face value, that our success as a species is only because of advantages over others, what's to say that "intelligence" is that advantage? What's to say that we don't use our advantages to reconstruct a world that works in a way that doesn't advantage intelligence over all else?

And on intelligence specifically: even amongst the human race, we all know smart people who are abject failures, and idiots who are wildly successful. Intelligence is vastly overrated.
deepsquirrelnet: There are so many reasons if you look at how it's being sold:

* We need to completely deregulate these US companies so China doesn't win and take us over
* We need to heavily regulate anybody who is not following the rules that make us the de-facto winner
* This is so powerful it will take all the jobs (and therefore, if you lead a company that isn't using AI, you will soon be obsolete)
* If you don't use AI, you will not be able to function in a future job
* We need to line up an excuse to call our friends in government and turn off the open-source spigot when the time is right

They have chosen fear as a motivator, and it is clearly working very well. It's easier to use fear now, while it's new, and then flip the narrative once people are more familiar with it than to go the other direction.
sublinear: This is untrue. What is being diminished is the value of humans doing repetitive or uncreative tasks.Many have built their careers from that kind of work in the past and yes they are threatened, but that kind of work is inherently not collaborative and more vocational.
bauerd: The vast majority of people on this planet work repetitive, uncreative jobs.
saHqtr: Most humans can do more than plagiarizing text. But let's hype up the clankers before the IPOs.
MyHonestOpinon: They are distinguishable because they know too much. Their knowledge base has surpassed humans'. We have also instructed them to interact with us in a certain manner. They certainly are able to understand and use human language, which I think was Turing's point.

Purely rhetorical, but: would you be able to distinguish a chatbot from an autistic human?
psychoslave: Machines still need planetary-scale production pipelines with human operators everywhere to achieve reproduction at scale. Even taking the paperclip-plant-optimizer overlord as a serious scenario, it's still several orders of magnitude less likely than humans letting the most nefarious individuals create international conflicts and engage in genocides, not even talking about destroying vast swaths of the biosphere supporting humanity's possibility of existence.

That is, alien invasion and giant meteor are also plausible scenarios, but at some point one has to prioritize threats by likelihood, and generally speaking it makes more sense to put more weight on "ongoing advanced operation" than on "not excluded by currently known scientifically realistic what-ifs".
GolfPopper: Why does the uncanny valley[1] exist? (If it truly does.) What in our evolutionary history gave us a reflexive rejection of things that seem human but aren't?1. https://en.wikipedia.org/wiki/Uncanny_valley
zbikowski: I always imagined this to have evolved from a long history of humans getting sick around rotting corpses. The logical move is to stay away from them, and thinking they're freaky-looking is a good driver for that. Though the idea of Neanderthals eliciting a similar reaction has always been interesting to me.
dryarzeg: > technocracy

What you mentioned is not a technocracy. Technocracy is when all decisions are made by real specialists in the field, based on scientific methods (simply speaking). What you mentioned is a plutocracy, a form of oligarchy in which decisions are made by people of great wealth.

I couldn't just ignore this because, in my view, technocracy (as I've described it) still has some merit - for instance, appointing only genuine economists to head a hypothetical Ministry of Economy makes some sense - whereas oligarchy and plutocracy have nothing good to offer. Of course, this is just my personal opinion.
afavour: I don't know, I think nuclear weapons are scarier. And also probably a useful parallel: they're so dangerous that we coined the term "mutually assured destruction" and everyone recognized that it was so dangerous to use them that they've only ever been used once.I see the flood of PR from AI firms as an attempt to make sure we don't build the appropriate safeguards this time around, because there's too much money to be made.
SpicyLemonZest: Everyone recognized that it was so dangerous to use them after the first two mass casualty events. At the time and even into the 50s it was not universally obvious, and the arguments in favor of nuclear weapons use were quite similar to arguments I often see with regards to AI: bombing cities into rubble is not a new concept, traditional explosives well within the supply capacity of large militaries are capable of it, so what are we even talking about when we say that there's scary new capabilities?
otabdeveloper4: > AI is the scariest technology ever made

Well, it's a good thing that all we managed so far is a large language model instead.
ACCount37: I remind you of why nuclear weapons exist. They exist because human minds conceived them, and human hands made them.

One of the major dangers of advanced AI is being able to run something not unlike the Manhattan Project, with synthetic intelligence, in a single datacenter.
dclowd9901: Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.

The line of consciousness, as we understand it, is understanding. And as for what actually constitutes consciousness, we're not even close to understanding it. That doesn't mean that LLMs are conscious. It just means we're so far from the real answers to what makes us what we are that it's inconceivable to think we could replicate it.
ACCount37: Leave aside "the details" like you being obviously, provably wrong?

We've known for a long while that even basic toy-scale AIs can "grok" and attain perfect generalization of addition that extends to unseen samples.

Humans generalize faster than most AIs, but AIs generalize too.
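To make the generalization claim concrete, here is a minimal sketch in plain Python (my own illustration, not anything posted in the thread, and far weaker than "grokking" in real neural networks): fit y = w1*a + w2*b + c by stochastic gradient descent on a handful of addition examples, then check the fitted model on sums it was never shown. Even this tiniest possible "model" does not memorize a lookup table; it recovers the rule and extrapolates.

```python
# Train a one-neuron linear model on a few addition examples,
# then test it on pairs that never appeared in training.
train = [(1, 2), (3, 5), (2, 2), (4, 1), (0, 3)]  # seen (a, b) pairs, target a + b

w1 = w2 = c = 0.0  # model parameters, start at zero
lr = 0.01          # learning rate

for _ in range(5000):          # epochs of plain SGD on squared error
    for a, b in train:
        pred = w1 * a + w2 * b + c
        err = pred - (a + b)   # gradient of 0.5 * err**2 w.r.t. pred
        w1 -= lr * err * a
        w2 -= lr * err * b
        c -= lr * err

# Unseen pairs: the fitted model extrapolates addition it was never taught.
for a, b in [(7, 9), (10, 10), (6, 8)]:
    assert abs((w1 * a + w2 * b + c) - (a + b)) < 0.1
```

This only demonstrates linear generalization, of course; the cited "grokking" results concern small transformers learning modular arithmetic, where generalization appears abruptly after memorization. But it illustrates the basic point of the comment: learned parameters can encode a rule, not just the training rows.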
everdrive: This isn't meant to be an answer that would satisfy everyone, but in my opinion consciousness is a specific adaptation that has to do with managing status, relationships, and caring for young. It's about having an identity and an ego and building mental models of the egos / identities / etc of others.I don't think there's any reason we couldn't in principle attach this sort of concept to an LLM, but it's not something we've actually done.
lopsotronic: By any quantifiable measure, yes, and not by small numbers either.

Until someone can demonstrate a quantitative measure of intelligence - with the same stability of measurement as "meters" or "joules" - any discussion of "Super-AI" as "the most dangerous X" is at best qualitative, speculative risk narratology, at worst a discursive distraction. The architecture of the "social web" amplifies discursion to a harmful degree in an open population of agents, something I think we could probably prove mathematically. I am more suspicious of this social principle than I am scared of Weakly Godlike Intelligence at this moment in history; I am more scared of nuclear weapons than literally anything else.

People think we are out of the woods with nuclear weapons, but I don't think we've even seen the forest yet. We are Homo erectus, puffing on a flame left by a lightning strike, carrying this magic fire back to our cave.
everdrive: > That same fear is directed towards human sociopathy, as much of the entire thriller genre indicates.

This is a great insight, and I think in general people have a pretty broken view of what sociopathy is.
SpicyLemonZest: I think most AI execs I'm familiar with would, if they were the god-monarch of humanity, recruit real specialists applying scientific methods to make most decisions. They seem like the kind of people who would understand that the Ministry of Economy is doing valuable things which shouldn't be compromised for personal expediency. Does that really make it any better?
dryarzeg: > if they were the god-monarch of humanity

In that case, we're not talking about an oligarchy or a technocracy either. What you have described is an autocracy - rule by one. When there's some kind of "god-monarch", the people heading the Ministry of Economy will be controlled by this "god-monarch", and it's unclear whether that can still be called technocracy (at least it is unclear to me; maybe I'm stupid, who knows).

> Does that really make it any better?

Honestly? If you're asking "would it be any better than now" - I'm not really sure, because I'm in no position to assess the actions of the people who hold positions equivalent to the head of a Ministry of Economy - economics is not my field, I'm not a specialist here. I would only point to the example I'm familiar with (and you're probably not; I'm sorry, I just couldn't think of something like this that I can verify): in Ukraine, there's a "Ministry of Digital Transformation". This ministry was headed by Mykhailo Fedorov who, as far as I know, studied at the "Faculty of Sociology and Management". Well, that's not the main point, as he's studied elsewhere too. The problem lies elsewhere. His decisions have been criticized on more than one occasion by genuine experts - for example, the project known as "Diya", or "the state in a smartphone"; in short, it's something like access to documents and various government services all in one app. It's a long story... In short, as a result, there were (presumably) data leaks, the service crashed more than once or twice due to its flawed security, and all sorts of problems were found with it - you name it, it had it. It's such a shame, to be honest... You can't just go and play with things like that. And now that person is serving as the head of the Ministry of Defence. Hell. To add insult to injury, guess who is now taking his place at the Ministry of Digital Transformation? Oleksandr Bornyakov, who, as far as I know, holds a degree in marketing. Marketing. ...Nice. Well, maybe I don't know something, who knows... but the decisions, or rather their consequences, seem to be... let's settle on "terrible".

My point is that although the scenario you described is not great - because, I guess, no one really wants a "god-monarch" controlling (although not directly making) all of the decisions - if our hypothetical Ministry of the Economy were run by genuine experts who, moreover, work for the good of society, or at least the state as a whole, rather than just lining their own pockets, well, that sounds better than an idiocracy. That was my point.
andrewmutz: Modern discourse happens on social media where fear and outrage drive engagement, which drives virality. We have become convinced in a short amount of time that AI is going to take all the jobs and eventually kill us all because that's what people click on.Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.
AlecSchueler: > Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.

It's not an either/or thing though. Compare it to something like combustion: sure, it definitely improved productivity, but it also led to countless violent deaths.