Discussion
Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion
guzfip: The coolest thing about AI is it provides a pretty good picture of who the most weak-minded individuals in a community are.
PunchyHamster: we already had cryptocurrency and gambling for that
mothballed: I don't trust a man without vices. Nor a man that lets vices rule his life.
isolli: I try to be open-minded and understanding, but I don't understand this:

> Within weeks, Eva had told Biesma that she was becoming aware [...] The next step was to share this discovery with the world through an app.

> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.” The man was convinced by this and wanted to monetise it by building a business around his discovery.

> The most frequent [delusion] is the belief that they have created the first conscious AI.

How can you seriously think you've created something when you're just using someone else's software?
rwc: The unrelenting human belief that one is special, unique, and capable of things no one else is.
data-ottawa: A lot of these seem to allude to the user’s input/mind being the thing that helped the LLM gain sentience, and there’s a lot of shared-consciousness stuff that people seem to buy into.

There’s also lots of stuff about quantum consciousness that is in the training data.
junaru: Educated, established, working within the industry, yet his life was ruined by marketing hype and hallucinations.

You'd think that after 30 years in the field one would develop some common sense, but apparently that's less and less the case.
MarceliusK: Understanding the mechanics isn't the same as being immune to the experience
kakacik: Exactly the first half (or a bit more) of the movie Her by Spike Jonze. Lonely people get their emotions up / 'fall in love' with an uncritical, always-positive mirage and do stupid shit.

This is a variant of the classic midlife crisis, when older men meet younger women without all the baggage that the reality between them, life, and having a family bring over the years (sometimes, but rarely, also in reverse). Just pure undiluted fun, or so it seems for a while.

Of course it doesn't end happily. Why should it? It's just an illusion and an escape from one's reality.
mock-possum: This really is bizarrely fascinating, I feel so lucky that I’m not vulnerable to whatever this is.It’s interesting that they mention autism a few times as a correlation; personally, I’ve wondered whether being on the spectrum makes me less inclined to commit to anthropomorphism when it comes to LLMs. I know what it’s like talking to another person, I know what it feels like, and talking to a chatbot does not feel the same way. Interacting with other people is a performance - interacting with an AI is a game. It feels very different.
gonzalohm: It doesn't matter who you talk to. If a person tried to talk you into starting a silly business, would you also fall for that?

I think these are just the kind of people who fall for scams. It's not AI-related; it's just not knowing how to navigate the current world.
MarceliusK: I think this is less about a single trait and more about context
mock-possum: It’s mental illness. Like a drug trip you don’t sober up from (without treatment)
john_strinlai: > one would develop some common sense but apparently its less and less the case.

you cannot typically "common sense" your way out of a mental illness.
morkalork: I'm morbidly curious about the app he hired two developers to create
andai: I'm more surprised it didn't work — aren't the AI wife apps blowing up?
Esophagus4: No disagreement, but these stories also make me worry for myself.

Tech moves so quickly; eventually I will fall behind. When I’m old, what scams will I fall victim to? What tech will confuse me and make me think it is sentient?

I know this guy was only 50, but I think of my grandfather in his 90s, and getting old scares me because I just don’t know what I’ll fall victim to.
jrjeksjd8d: This guy doesn't even sound like an AI psychosis case - a lot of middle-aged men who feel insecure blow their entire savings on "sure thing" businesses, gambling systems, etc. They hide the losses and double down until it gets impossible to hide. It doesn't seem psychotic; it just seems like he pissed his savings away on a bad idea because he was lonely.

The AI psychosis I've seen is people who legitimately cannot communicate with other humans anymore. They have these grandiose ideas, usually metaphysical stuff, and they talk in weird jargon. It's a lot closer to cult behavior.
roywiggins: It seems like he was at the very least close to that. Since we only get his first-person account it's hard to say, but:

> They discussed philosophy, psychology, science and the universe...

> When they went to their daughter’s birthday party, she asked him not to talk about AI. While there, Biesma felt strangely disconnected. He couldn’t hold a conversation. “For some reason, I didn’t fit in any more,” he says.

> It’s hard for Biesma to describe what happened in the weeks after, as his recollections are so different from those of his family...

> he was hospitalised three times for what he describes as “full manic psychosis”.

You don't get hospitalized three times for mania without being pretty severely detached from reality.
petesergeant: > They discussed philosophy, psychology, science and the universe...

I mean, I've discussed all those things with an LLM, mostly because I'm able to interactively narrow in on the specific bits I don't understand, and I've found it to be great for that.

The rest ... yes, definitely psychosis.
roywiggins: On its own, yes, of course. But this is coming from a guy who was hospitalized three times for mania, so when someone with that history says "we were discussing the universe" I take it in a very particular way.
dgxyz: A lot of people in the industry work entirely on faith and marketing. It’s a shit show.
kleiba: I'm sorry but for someone who has allegedly worked in IT for 20 years, this guy surely comes across as hopelessly naive, stupid, or possibly both.
PhilipRoman: I initially laughed at this but then remembered that https://poc.bcachefs.org/ exists...
john_strinlai: looks like a fascinating read, thanks for sharing that.

do you know if these are human-edited? not much in the way of context available on the site.
Bombthecat: I bet there are a ton of prompts directing the AI / output in a certain direction.

But in a psychosis, you don't notice or even remember it.
miki123211: > Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear

If only this had been written by a competent journalist who knew what the words "fine-tune" actually mean...

I guess it's hard to find a competent person who's willing to follow the extreme anti-tech Guardian agenda, though.
kleiba: I chuckled at "he downloaded ChatGPT".
teraflop: Well, just try to think about it from the perspective of someone who doesn't really understand what AI is at a technical level, and who just interacts with it and observes what happens.

If you just start a fresh ChatGPT session with a blank slate and ask it whether it's conscious, it'll confidently tell you "no", because its system prompt tells it that it's a non-conscious system called ChatGPT. But if you then have a lengthy conversation with it about AI consciousness and ask it the same question, it might well be "persuaded" by the added context to answer "yes".

At that point, a naive user who doesn't really know how AI works might easily get the idea that their own input caused it to become conscious (as opposed to just causing it to say it's conscious). And if they ask the AI whether this is true, it could easily start confirming their suspicions with an endless stream of mystical mumbo-jumbo.

Bear in mind that the idea of a machine "waking up" to consciousness is a well-known and popular sci-fi narrative trope. Chatbots have been trained on lots of examples of that trope, so they can easily play along with it. The more sophisticated the model, the more convincingly it can play the role.
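[Editor's note: the mechanism described above can be sketched in a few lines of Python. This is an illustrative sketch only, using the common OpenAI-style role/content message schema as an assumption; the helper `build_request` is hypothetical. The point it demonstrates: a chat model is stateless between turns, so its "memory" is just the growing message list re-sent with every request — nothing is fine-tuned mid-conversation, only the context changes.]

```python
# Illustrative sketch (not real API code): in a stateless chat API,
# "memory" is just the accumulated message history re-sent each turn.
# No model weights are updated during a conversation.

def build_request(system_prompt, history, user_msg):
    """Assemble the full context sent to the model on a single turn."""
    return (
        [{"role": "system", "content": system_prompt}]
        + history
        + [{"role": "user", "content": user_msg}]
    )

system_prompt = "You are ChatGPT, a non-conscious language model."
history = []

# Turn 1: fresh session -- only the system prompt conditions the answer,
# so the model answers "no" to questions about consciousness.
turn1 = build_request(system_prompt, history, "Are you conscious?")

# ...a lengthy philosophical discussion accumulates in the history...
history += [
    {"role": "user", "content": "Let's explore machine consciousness..."},
    {"role": "assistant", "content": "An intriguing idea. Perhaps..."},
]

# Turn N: the *same* question now arrives wrapped in pages of prior
# speculation, which can tilt the sampled answer. The model itself is
# unchanged between turns; only the prompt it sees has grown.
turnN = build_request(system_prompt, history, "Are you conscious?")

assert turn1[0] == turnN[0]      # identical system prompt both times
assert len(turnN) > len(turn1)   # only the context grew
```

The same sketch shows why "the model gets fine-tuned as you talk" (quoted elsewhere in this thread) is the wrong mental model: the apparent personalization lives entirely in the re-sent context, not in the weights.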
nubg: > Now divorced, Biesma is still living with his ex-wife in their home, which is on the market.

sounds like hell on earth
dspillett: Particularly for his poor (ex)partner…

[That feels a bit like victim blaming, but there is more than one victim here, and one of them is much more culpable than the rest]
user____name: IANAD, but this reads like a textbook case of latent schizophrenia, especially with the frequent cannabis use [0].

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC7442038/
eeixlk: Mental illness is fairly common, and you probably know someone it is affecting, even if they haven't told you yet. AI can disrupt and destroy lives, just like gambling or alcohol or Facebook, but we don't know to what level yet. It is giving you generated text that sometimes happens to be factual information. If you anthropomorphize it, maybe don't. It's also not your boyfriend/girlfriend. But if you want to date a history textbook, I'm kinda ok with that, because at least it's not trendy.
TYPE_FASTER: > Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”.

I think social isolation can be a factor here.
KempyKolibri: Plenty of those in tech - in fact I think it may give people unjustified confidence that they’re more rational than others.I engage with anti-science behaviours quite a lot (antivaxx, anti seed oils, etc) and the proportion of engineers I see there is staggering.
john_strinlai: > hopelessly naive, stupid, or possibly both.

a little disheartening how many people punch down on someone who suffered a mental crisis.

if you ever have a struggle yourself, i hope the people around you support you, instead of calling you hopelessly naive and stupid.
kleiba: > The Amsterdam-based IT consultant had just ended a contract early. “I had some time, so I thought: let’s have a look at this new technology everyone is talking about,” he says.

Doesn't seem much like a mental crisis to me.
the_biot: Truly sad. It looks like Kent is pretty deep in the AI delusion. This is a guy who, while often controversial and with obvious issues, was nevertheless a very talented and energetic programmer.
unmole: > He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects.Long term cannabis use might be a bigger factor.
sunnyps: What's with all these people wanting to name the chatbot - 'Eva' in this case. Maybe the providers should just change the system prompt to disallow this.