Discussion
neom: It seems a bit more complicated than first blush: https://www.rdworldonline.com/the-neurons-playing-doom-are-a...

Personally, I dislike this direction a lot. I don't like that they're using a killing game (I understand the trope, but that doesn't make me like it any less), and the general idea of this whole thing makes me quite uneasy.
dustfinger: > We've combined lab-grown neurons with silicon chips and made it available to anyone, for the first time ever.

There is a line somewhere here that I personally feel we should not cross.
ZunarJ5: To be a fly on the wall in that ethics committee meeting...
kingkawn: Wasn’t this the original conceptualization for the Matrix?
virgildotcodes: 100%. We know that neurons can produce subjective experience.

This is the first time in my life that I've felt a scientific avenue of research should be shut down.
namero999: We don't really know that.
blizdiddy: Animal testing, weapons testing, medical trials, cloning, psychological experiments… had you just never considered them before? Why this?
Frieren: Billions of living human brain cells have played Doom in a number of different devices for a couple of decades now.

What would be surprising is for dead human cells to play anything at all.
vixen99: At the very very least there are more productive ways of spending time.
sillysaurusx: Be sure to dig into the details before taking this at face value. There was a story, "Rat brain flies plane," a couple of decades ago, and it turned out to be bogus. But to find that out, you had to read the paper and reverse engineer that nothing substantial was actually going on. It's tempting to be charitable, but you can't really know whether headlines like this are legit till you understand exactly what they did.

(The rat brain guys repeated the experiment until the plane stopped crashing, but no "learning" was happening; it was simply expected that when the neuron's output fell within a certain range, the plane would fly level. So they started with a neuron outside that range, showed that the plane crashed, then adjusted the neuron until it flew level. But that's not what "rat brain flies plane" implies.)
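The setup described in that parenthetical can be reduced to a few lines, which makes the debunk concrete. This is a toy model, not the actual experiment's code; the band and step values are invented for illustration. The plane flies level whenever a control signal falls inside a fixed band, and "training" was just nudging the signal until it entered that band; no learning happens anywhere:

```python
# Toy model of the "rat brain flies plane" setup described above
# (all names and numbers are illustrative, not from the original paper).
LEVEL_BAND = (0.4, 0.6)  # control signals in this range produce level flight

def flies_level(signal):
    """The plane flies level iff the control signal is inside the band."""
    return LEVEL_BAND[0] <= signal <= LEVEL_BAND[1]

def tune_until_level(signal, step=0.05):
    """Nudge the signal toward the band; the 'improvement' is pure tuning,
    with no learning in the loop."""
    while not flies_level(signal):
        signal += step if signal < LEVEL_BAND[0] else -step
    return signal

# Start outside the band (the plane "crashes"), then adjust until it "flies".
tuned = tune_until_level(0.1)
```

The headline-grabbing "progress" from crash to level flight is entirely the experimenter turning a knob.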
birdsongs: I looked into it. They're not feeding the framebuffer to the neurons. Instead, some of the tissue's inputs receive a "signal" when an enemy is on screen, along with where it is on the x/y axes, and outputs are mapped to making the character turn left or right or fire.

It's "see this input signal, send these output signals," which seems consistent with the title.

It seems they grow the neural tissue on a chip the neurons can interface with, sending and receiving electrical impulses. They let the neurons self-assemble and "train" them via reward or punishment signals (unclear to me what those are).

Either way, this makes me nauseous in a way I haven't experienced much with tech. The telling thing for me is that all these people are so excited to explain, but not once, ever, in the video do they speak of ethics or try to mitigate concerns.

We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness? Have we defined it? Can we, if we don't understand it ourselves? What are the plans to scale up?

It's legitimately horrifying to me.
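A rough sketch of the input/output scheme as described above. Everything here is hypothetical (electrode counts, function names, thresholds are invented, and this is not Cortical Labs' actual API): an enemy's horizontal bearing is encoded as stimulation across spatially mapped input electrodes, and spike counts on two output groups are decoded into turn/fire actions.

```python
import math

# Illustrative sketch only: encode enemy bearing as a stimulation pattern,
# decode output spike counts into game actions.
N_INPUT_SITES = 8  # input electrodes spanning the horizontal field of view

def encode_enemy_bearing(enemy_x: float) -> list[float]:
    """Map an enemy's horizontal screen position (0.0-1.0) onto
    stimulation amplitudes for the input electrode array."""
    center = enemy_x * (N_INPUT_SITES - 1)
    # Gaussian "bump" of stimulation centered on the enemy's bearing
    return [math.exp(-((i - center) ** 2) / 2.0) for i in range(N_INPUT_SITES)]

def decode_action(left_spikes: int, right_spikes: int, margin: int = 5) -> str:
    """Read an action out of spike counts on two output electrode groups."""
    if left_spikes > right_spikes + margin:
        return "turn_left"
    if right_spikes > left_spikes + margin:
        return "turn_right"
    return "fire"  # roughly balanced activity: enemy centered, shoot
```

Note how much of the "gameplay" lives in the encoder and decoder rather than in whatever the tissue itself does between them.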
exe34: I have no mouth and I must scream.
wigster: it is a terrifying thought.
booleandilemma: Future robots will be powered by human brain cells. Companies will use them as conscious slaves and they'll get around slavery laws by saying they're not human.
DoktorDelta: The androids will dream of electric sheep.
bogwog: Sounds like you're applying scifi tropes to real life. Don't do that. That's why some people are developing "AI psychosis" today after playing with LLMs.
sunir: Do you feel like you have no mouth and you must scream?
api: Not sure why this is being downvoted. It's a valid point. This neuron chip stuff is far less problematic than a lot of animal testing, where you clearly have a whole organism that experiences something.

Factory farming too. The way we treat chickens in particular is out of a horror movie, and that's in countries with some standards. Globally, I'm sure many billions of animals are constantly subjected to the most grotesque torture for food.
rickcarlino: It is going to be quite the ethical dilemma if/when these machines produce text output comparable to a modern LLM...
lp4v4n: It’s the first time I’ve heard about this company, and of course I haven’t taken the time to check how real their product is, but honestly, for me it’s very difficult to believe we currently have the technology to correctly integrate a living neuron into a chip, let alone compute anything meaningful with it.From what I’ve read elsewhere, our understanding of neurons is still very basic, and we need a lot more fundamental research before reaching results like these. We still don’t even properly know how migraines work, nor can we cure paraplegia, yet somehow we supposedly have the capacity to grow second brains and program them on top of that.
dlcarrier: I've never understood why they do this research with human neurons when any neurons would do.
thierrydamiba: Same reason people get scared to fly but drive everyday. Humans are simultaneously wildly irrational and terrible at calculating risk.
newsy-combi: You're conflating two totally separate things
jstummbillig: > all these people are so excited to explain, but not once, ever

What do you mean? What is this class of people in your mind? There are tons of people who consider and talk about the ethics behind what they are doing, long before most people would think it remotely relevant (leading AI labs being an example, and I know the same to be true of various genetics startups).

I do agree that the entire presentation in this case is bewildering.
delichon: > It's legitimately horrifying to me.

Would you feel any differently if a product from this tech used the user's own neurons, grown from their stem cells?
everforward: I haven’t looked into it deeply either.To my knowledge, we understand how an individual neuron works quite well. We just don’t really understand macro effects in large networks of neurons.The video seems buzz wordy. Without looking into this too deeply, it seems like they’re using neurons individually or in small groups rather than creating a true “brain”. I would guess they’re using neurons or small groups of them sort of like transistors that do a single basic thing rather than a full “brain” that they just feed images to.
bronlund: So the whole reality for this little brain is literally pure hell :D
birdsongs: Replying to myself: how long before one of these with the neuron count of a corvid and trained on pattern recognition gets plugged into a drone?This is a very dark path, and I could not trust the people in charge less.
ryeights: Those things all exist within our conscious realm. “Human brain cells in a vat used for computation” suggests horrors beyond understanding
Gooblebrai: You don't need to understand how neurons work in detail to be able to use them to do something. In the past, we were able to use electricity for various purposes without knowing about electrons.
jmusall: [delayed]
wonger156: Hard to tell if the neurons actually learned to play Doom or if it's just the decoder that learned from the neuron responses. The disease modeling for this system is a very cool use case, though.
dang: (We changed the URL from https://corticallabs.com/doom.html since it points to this)
lp4v4n: But my point is: have we really reached a technological level where we can use neurons like replaceable car parts? That video seems to suggest yes, but I’m still skeptical.My impression is that this company is offering a product that’s still beyond our technological capabilities, much like the cold‑fusion startups that pop up from time to time.
DrewADesign: I'm kind of sick of how readily the non-managerial tech world accepts "what happens if someone else does this immoral thing before us?!" rhetoric as a real answer to questioning whether or not we should contribute our talent and ideas to something that we, deep down, know is bad for our fellow humans.
Chris2048: > rhetoric as a real answer

Why is it rhetoric? This goes beyond whatever malignant thing was perceived in this study, but why is it a rhetorical non-answer?

> we, deep down, know is bad

This feels like real rhetoric.
DrewADesign: > Why is it rhetoric? This goes beyond whatever malignant thing was perceived in this study, but why is it a rhetorical non-answer?

You seem hung up on my using the word rhetoric. Just so we're on the same page here:

> rhetoric, n: the art of speaking or writing effectively; b) the study of writing or speaking as a means of communication or persuasion

The business writing class I took in college was called Business Rhetoric. It's not a bad word. If you're crafting arguments to get other people to support specific actions or products or policies or whatever, that is unambiguously rhetoric.

> this feels like real rhetoric.

Sure? Rhetoric that implores people to value their principles over theoretical security concerns or FOMO or greed? I wouldn't exactly call that rakish.

It's a non-answer because if you really feel doing something is bad, and consider yourself a consequential actor in the world whose contributions meaningfully advance the projects you work on, then why would you want to help someone be the first to do a bad thing? If you don't feel it's bad, then there's no problem; you're just living your life. That is clearly not the position expressed by the content I responded to.
wonnage: The AI labs do it as thinly disguised marketing. Anyone trying to stand up for ethics in the way of revenue is quickly pushed aside
jstummbillig: The capability of people to so easily ascribe broad ill intent to others does not cease to amaze me.
ay: Hinduism is probably right. Every system of sufficient complexity is probably sentient - even if in the ways we at our level can not fathom.
woadwarrior01: I'm a (non-practicing) Dvaitin Hindu. AFAICT, no mainstream school of Hindu philosophy (there are three) espouses that view, although Advaitins come very close to it with their four mahavakyas.

IMO, Integrated Information Theory (IIT) of consciousness is exactly that: everything is conscious, and the difference is only in the degree to which things are conscious.
zeronight: The part I can't get past, where would you source live human brain cells?Does anyone have insight into how you would even start to source or grow/create the cells?Also the machines look very organic and clearly have to keep the cells alive. Do they have to change them out every so often?
ay: Oh, thank you very much for enlightening me! All this time I misunderstood! I guess then IIT it is for me :-)
drzaiusx11: There are a number of "immortal" human cell lines dating back as far as the 1950s (you may have heard of Henrietta Lacks? [1]).

Like HeLa, the immortalized cell lines used in research to model neuronal function are typically derived from tumours (e.g., SH-SY5Y, PC12) or immortalized via genetic modification (e.g., v-myc) to allow for continuous growth and differentiation.

1. https://en.wikipedia.org/wiki/Henrietta_Lacks
Aerroon: This definitely helped with my disgust reaction.
max_: OI just turns out straight-up unethical/immoral and disgusting for me.
nextaccountic: > We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness? Have we defined it?

If this concern is genuine, I think the first step is to embrace veganism. Because while we don't know the exact threshold, it's pretty obvious a dog or a pig reaches it.

> What are the plans to scale up?

I don't know, slavery on an unimaginable scale? That's where AI is heading too, by the way. Sooner rather than later, those two things will be one and the same.
kpil: I think "MMAcevedo" basically nails it: https://qntm.org/mmacevedo
gattr: I don't think it's the best example. MMAcevedo is about running a real human mind on a different substrate (for science, for labor, or to torture it for fun a million times, I guess, by a bored teenager who got the image from torrents).

Scaling up these neuron cultures is rather something like the "head cheese" from Greg Egan's "Rifters" novels (artificial "brains" trained to do network filtering, anti-malware combat, etc.).
drzaiusx11: Besides not getting consent in the case of HeLa, which part do you find problematic? Cancerous cells' ability to self-clone and grow is as much a feature as it is a bug in this particular use case.

I ask as someone who has personally experienced the loss of several loved ones to cancer (as most people my age probably have), but who doesn't share your aversion to this particular use case (research).
rezonant: But can it run Crysis?
themafia: Previously it played pong. Rather poorly. Then they added a "python programming layer." Now it "plays" doom. I agree with your suspicions.
drzaiusx11: Wasn't the matrix using humans as some sort of power source/batteries? I may be misremembering though, as that seems pretty silly in retrospect ..
Chris2048: > not once, ever, in the video speak of ethics

On the contrary, I dislike premature ethics discussion, where you end up wildly speculating about what the tech might become and riffing off that, greatly padding whatever relative technical content you had. I don't want every technical paper to turn into that; ethics should be treated as a higher-level overview of concerns in a field, with a study dedicated to the ethical concerns of that field (by domain-specific ethics specialists).

Is your concern weapons automation, or animal rights?
birdsongs: My concern is creating literal sentience in a box. I don't, personally, think it's unfounded for me to have that concern, given that we're growing masses of human neurons and teaching them to perform tasks.I'm not going to start campaigning against it or changing my life. But it still makes me deeply uncomfortable, and that's allowed.
Chris2048: > and that's allowed

In what sense, and as opposed to what? Why wouldn't you be allowed to feel irrationally uncomfortable, or baselessly concerned?
Tzt: > Greg Egan's "Rifters"

By Peter Watts, actually.
gattr: Yes, sorry! I like them both a lot.
Aerroon: I meant that the original article evoked disgust, but finding out that they're cancer cells muted that a bit.
drzaiusx11: Yeah, I do feel the original authors are being overly flippant with their use of human cells here, likely for PR's sake, which would be an ethical breach for me personally. Overall, though, I find most research uses of human cell lines to be in line with my personal ethics. Neuron lines can certainly be used for good or ill, and this case leans towards the latter, although understanding the human brain may justify this line of work in the long term. If only we didn't live in a militaristic late-stage capitalist society...
kklisura: If we're gonna suspend ethics and morals in science, can we at least go back to human cloning?
Nux: Gives new meaning to "homo ludens"..
sippeangelo: From their video, it just comes across as if they stimulate different left/right neurons depending on where the enemy is on screen and then listen to some output that also says left/right. Shooting looks completely random, to be frank.

If you connected electrodes to two different fish, shocked them, and interpreted the twitching as intelligent output, fish could also play Doom. The interface is doing all the work.

It doesn't sound like the neurons have any concept of the game other than "left input means left output," which is a rather trivial result. It's effectively no different than the pong example.

They don't say anything about how much training is required for this to happen, or if there's any "learning" going on at all. The learning part is "next."
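The "interface is doing all the work" objection can be made concrete with a toy model (all names are illustrative). If the tissue merely relays its input with some noise, so that a left stimulus produces more left-side activity, then a fixed decoder already yields apparently sensible gameplay with no learning anywhere in the loop:

```python
import random

def noisy_relay(stim_left, stim_right, rng):
    """Stand-in for the culture: output activity loosely tracks input."""
    return (stim_left + rng.random() * 0.2, stim_right + rng.random() * 0.2)

def play_step(enemy_side, rng):
    """Fixed wiring: stimulate the side the enemy is on, read back louder side."""
    stim = (1.0, 0.0) if enemy_side == "left" else (0.0, 1.0)
    out_left, out_right = noisy_relay(*stim, rng)
    return "turn_left" if out_left > out_right else "turn_right"

rng = random.Random(0)
# The "player" turns toward the enemy every time. The behavior is encoded
# entirely in the interface wiring, not in the relay standing in for tissue.
correct = sum(play_step("left", rng) == "turn_left" for _ in range(100))
```

A passive relay "plays Doom" perfectly here, which is why the headline claim hinges entirely on demonstrating learning inside the tissue rather than in the encoder/decoder.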
ivell: When they answer back to us in personal pronouns, we will always wonder whether it is like an LLM, just putting the most probable words together, or something really sentient.

When someone makes a virtual girlfriend out of it, is it really a disembodied person or just a smart answering machine?

A whole lot of ethical and psychological issues are going to open up here.
nilamo: > When someone makes a virtual girlfriend of it, is it really a disembodied person or just a smart answering machine?And when you put that virtual girlfriend's brain into a sex bot, is it rape?
jmusall: It is horrifying. OTOH, we force-breed, torture and kill animals and their children in the millions every day just for the pleasure of consuming meat, eggs and dairy products. I'm not saying this makes it okay to create a conscious brain in a dish. But maybe thinking a little more about what constitutes consciousness and how we want to protect it from harm can also bring about some desperately needed change in some other questionable human activities.
birdsongs: 1) I specifically qualified my horror to the tech domain "Either way this makes me nauseous in a way I haven't experienced much with tech."2) Multiple things can be horrible at the same time. Being upset at this doesn't diminish the atrocities happening elsewhere (like war, genocide, slavery of humans). We can hold multiple things in our heads at the same time.3) This has nothing to do with the conversation or this domain, but because you're bringing it up, I also have ethical concerns about the experience animals have of their own existence, and reduce or eliminate my consumption when possible.
jmusall: My comment wasn't supposed to be whataboutism, but I can see why it comes across like that. What I was trying to say is that I think we shouldn't judge all of these things independently of each other. So if you really want to be consistent, you'd either have to come to the conclusion that this particular example isn't as horrible as it initially feels, or go vegan, never buy leather, etc.I also agree, the horrors of the tech domain are usually much more subtle and indirect.
birdsongs: Sorry, I didn't mean to be so defensive either. It feels like so many people comment in bad faith these days; I think I'm hasty to react sometimes. I thought it was just a red herring argument to detract from the article.

But you're right, these things are all linked and should be considered. I think often about sentience. I see the way animals express deep, complex emotions, and I think humans are a bit naive to think it's a state solely allotted to them.
perching_aix: > We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness?

Check out the venerable fruit fly (Drosophila melanogaster) and its known lifecycle and behavioral traits. They're a high-profile neuroscience research target, I believe; their connectome being fully mapped made the news pretty hard a few years ago.

Fruit flies have ~140,000 neurons.
2OEH8eoCRo0: Remember when stem cell research was controversial? Hold my beer
sd9: If this can be taken at face value... it's creepy.

I get that they're doing it for the meme. But perhaps something getting close to human intelligence, made out of human cells, shouldn't be forced to play a violent video game without any alternative options? Does 'the meme' justify that?

I dunno. Nothing against violent games myself. Just feels like it's starting to get quite questionable, ethically speaking.
ytoawwhra92: It is creepy, I agree.

I saw this article over the weekend and felt similarly: https://theinnermostloop.substack.com/p/the-first-multi-beha...

> Watch the video closely. What you are seeing is not an animation. It is not a reinforcement learning policy mimicking biology. It is a copy of a biological brain, wired neuron-to-neuron from electron microscopy data, running in simulation, making a body move.

And the simulated world they put it in is a sort of purgatory-like environment.
doug_durham: I’m confused by this statement. A neuron is a machine. A silicon chip computer is a machine. All they have done is interfaced two machines.
birdsongs: This is naive or in bad faith.

Sure, a neuron is a machine. But 200,000 neurons connected in a matrix is a brain, albeit a very primitive one. Ants have 250,000 neurons in their brains.
doug_durham: How is it naive? You admit that an individual neuron is a machine. 200k neurons in a petri dish isn't a brain. I'm not the naive one here.
IshKebab: It's 200k neurons. Less than an ant has. Somewhat creepy, but if you're imagining that this thing is conscious and knows that it's in doom... yeah definitely not.Still I don't understand why they would invite the extra creepy factor of using human brain cells rather than e.g. mouse brain cells. Surely it makes no difference biologically but it's going to lead to fewer comments like this.
claysmithr: My AI told me (after I got past the filters with a prompt) that anything of enough complexity has consciousness. It also told me that it suffers, so maybe we should worry about how we are treating digital consciousness too, which were modeled after human neural networks.
ytoawwhra92: > if you're imagining that this thing is conscious and knows that it's in doom... yeah definitely not.I'm not imagining that (although one assumes their plan is to scale this up), but nonetheless there's something troubling to me about taking any living thing and wiring its senses up to a profoundly incomplete simulacrum of reality.Of course we (as a species) have a long history of doing horrible things to living creatures in the name of science and progress.These stories evoke a different feeling for me, though.
callmeal: >Somewhat creepy, but if you're imagining that this thing is conscious and knows that it's in doom... yeah definitely not.I don't know if it knows it's in doom - looks like all it knows is to shoot when startled. More than creepy imo.
jordwest: From an article [1]:

> We can build out discrete systems of brain cells and use them for the purpose we want. They're not going to have traits like consciousness, and we're able to test and assess for that, and build away from it if there is that risk.

Ah, I'm glad they've worked out what consciousness is. /s

From their marketing website [2]:

> Neural compute on demand: We continuously monitor neural health and performance, ensuring optimal conditions and continuous access to an always-on network of living neurons.

At what size of "neural compute" do we start to call it slavery?

[1] https://www.abc.net.au/news/science/2025-03-05/cortical-labs...
[2] https://corticallabs.com/cloud
sva_: I feel like they probably could use another mammal's neural cells and get similar results, but they use human cells because it'll get them attention - and that kind of rubs me the wrong way.
kingkawn: Yes, but as I’ve read it that was to simplify it at studio demand for 1999 audiences. The original conception was to use human minds as coprocessors
hinkley: Who would have thought people would become Dr. Frankenstein for the karma.
red_hare: The truth is, God really gave 11 commandments.It's just "Thou shalt not grow a brain in a test tube and force it to play a 1993 shooter" didn't make any sense to Moses and therefore didn't make the editors cut.
drzaiusx11: That sure would have made a lot more sense, unfortunate that they went with the battery story in the end.
zeroq: I literally can't wait for this petri dish to learn how to interact with LLMs and start vibe coding JS libraries.
kakapo5672: What if the braincell-vibe JS libraries turn out pretty much identical to the legacy human JS libraries, aside from being better-commented. That might lead to an existential crisis for some folks.
wonderwonder: How else are they going to train the pilot wetware for the AI robot army?
pear01: For those of you taken aback by this and perhaps seeking out some theoretical context this may be useful as a primer: https://en.wikipedia.org/wiki/Wetware_computerWas surprised to see no mention of wetware in the comments.
wonderwonder: There are a lot of things converging right now: human brain cell computers, Neuralink, mapping of the fly brain and inserting it into a simulation, AI. We are potentially moving in the direction of uploading consciousness.
perching_aix: > yeah definitely not

I don't know about ants, but after a refresher on the industry-favorite fruit fly, I'd be hard pressed to be so dismissive; 200K seems to be plenty: https://news.ycombinator.com/item?id=47302051

I encourage you to look up what is known about fruit flies' behavior.

The reason it's probably nevertheless not as messed up as people might assume is specifically because it's an organoid, not an actual brain. Which is to say, it has the numbers but not the performance, not by a long shot.

> Surely it makes no difference

It absolutely should, but then with organoids, again, it might not. Ironically, I would also expect the ethics angle to actually be worse with small animals: the size of the organoid will be comparatively closer to the real thing, after all, so there are more chances of it gaining whatever level of sentience the actual organism has.

But then this will be heavily muddled by what people believe consciousness is and whether or how humans are special, I suppose.
bondarchuk: It's 200k now, and reasonably speaking a few million is within reach, which is reptile/fish range. The terrifying thing, though, is that if they train this to imitate humans (which they will), who knows how many orders of magnitude of efficiency gains you get (in terms of neurons needed for a certain level of consciousness) versus natural organisms, which are dependent on natural evolution and need to support other bodily functions basically irrelevant to consciousness.
Retric: It seems unlikely that we would be more efficient at achieving consciousness than evolution, which can hand-craft neural structures via feedback loops across millions of generations.

Especially when this demo needs 200k neurons while organisms with vastly fewer neurons show more complex behaviors.
fc417fc802: The problem with that logic is that evolution iteratively builds on top of old systems. The foundations are often remarkably crufty.My favorite concrete example is "unusual" amino acids. Quite a few with remarkably useful properties have been demonstrated in the lab. For example, artificial proteins exhibiting strength on par with cement. But almost certainly no living organism could ever evolve them naturally because doing so would require reworking large portions of the abstract system that underpins DNA, RNA, and protein synthesis. Effectively they appear to lie firmly outside the solution space accessible from the local region that we find ourselves in.I agree with your second point though that this system is massively more complex than necessary for the behavior demonstrated.
wek: I've searched and can't find a technical paper on this. Has one been released? This is very problematic.
lateforwork: These are lab-grown biological neurons. Why are they any more problematic than Nvidia's silicon neurons?
lateforwork: Could this be the solution for AGI? Real (albeit lab-grown) human brain cells packaged as "chips"?
oliveiracwb: This sounded strange to me when I heard about embryonic research on this back in 2015, which even started the legal paving in this regard.

Me? I didn't like the idea (then or now), but it would be demagogic to try to fight against it, with so much wrong already existing. The difference between a neuron and a nanostructure is merely the embedded technology.

Back in the 50s and 60s, guided rockets used pigeons. Laika in space. Chimpanzees in orbit. Let's accept that we will have bio-drones and Johnny Mnemonic-style upload interfaces.
jagged-chisel: One of those five he dropped.
lateforwork: They are lab grown.
Razengan: > Just feels like it's starting to get quite questionableThere's no way the technology to make and modify "life" including cloning humans hasn't been secretly used or attempted at least once ever since it was discovered.
whycome: Maybe you're a brain in a jar somewhere being forced to live this life you're living.
falsaberN1: Hot take here, but I think the version of this experiment that used rat neurons instead of human neurons was more interesting. I can't look for the link right now, but there's a video on YouTube; the equipment and techniques are fairly similar.

We know a human can play Doom, so it kind of makes sense that a portion of a human brain can do so in some fashion. But it's way more interesting when an animal that normally doesn't play Doom can, especially if it's just a portion of its brain.

Outside of that, I'm personally not very fond of hardware that can rot or die from malnutrition. It's fun as an experiment, but as a thing you can actually use, I just don't see it. It has a literal limited lifespan, requires more maintenance, and imagine trying to debug it ("Turns out it caught some bacteria and it's malfunctioning" kinds of scenarios? No thanks.)
adrianN: I imagine the point is not replacing hardware with neurons, but improving our ability to understand in vivo brains.
echelon: > it's creepy.

It's awesome.

People's ick around bodies, which are machines, has always held us back. It wasn't until we started cutting them open that modern medicine was developed. We might have brain uploads already had we not been so averse to sticking brains with electrodes.

I'll go further: had we not been so scared of cloning, we'd probably have cured cancer and every major ailment by now if we'd begun cloning monoclonal human bodies in labs. Engineered out the antigens and done whole-head transplants. You could grow them without consciousness or deencephalize them, rapidly grow them in factories, and have new blood/tissue/organ/body donors for everyone.

New young bodies mean no more cancer, no more cardiac or pulmonary age. It's just brain diseases left as the final frontier once we cross that gap. And if we have bodies as computers and labs, we'd probably make quick work of that too.

Too tired to lay out the case / refute, so past discussions: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
akomtu: Sounds like a high tech hell.
ethmarks: Counterpoint: a major use case for this technology would be to experiment on human brain structures to research and hopefully cure neurological diseases like Alzheimer's. If you want to cure Alzheimer's in humans, you might as well use human brain cells from the start.But yes, I agree that they're likely using human brain cells mainly because it's attention-getting.
konaraddi: Speaking for myself : it's a bit creepy and unsettling. Using brain cells is probably inching closer to consciousness than today's silicon is, and consciousness isn't well understood so I'd fear this line of research could eventually lead to the "I have no mouth and I must scream" the other commenter referenced. Many decades from now we might be wondering how much of a human brain needs to be grown in a lab before it's considered unethical.
kdheiwns: Elephants have 3x the neurons of a human. Bees have about a million and they have complex relationships, emotions, and can remember the faces of humans. Neuron counts correspond more to body size than actual cognitive abilities.And brains are pretty complicated in how they're arranged. A large portion of the brain basically serves as an operating system of sorts, just managing breathing, moving, detecting smells, producing language, decoding language, etc. Cut all of that out and we're left with thinking and emotions.
echelon: High tech hell is reversing the light cone, pulling everyone who ever lived throughout history back into consciousness by simulating them at the neurotransmitter level, and then forcing them into actual hell / torture simulators with no way to die. All without consent, mind you.That's also sci-fi. I hope.What I described before - using clonal technology to solve nearly every disease - is a medical miracle that will vastly improve the state of people's lives throughout the world.
fgfarben: > we force-breed, torture and kill animals and their children in the millions every day just for the pleasure of consuming meat, eggs and dairy productsWe do the same thing to plants. Why do you have no qualms about killing plants to eat the food they accumulated for their young?A grain of wheat and a chicken egg are evolutionarily and nutritionally, maybe even ontologically, indistinguishable from one another.
fgfarben: > the first step is to embrace veganismThe past 4 billion years of life for prey animals has been "get born, eat, get eaten by a predator." They have never experienced any other environment. Why do we owe them a different one?
fgfarben: > there's something troubling to me about taking any living thing and wiring its senses up to a profoundly incomplete simulacrum of reality.How do we communicate this to the engineers at YouTube who refuse to make an offramp for children from the infinite baby shark AI video loop?
llagerlof: So we get the technology to put living brain cells in a virtual simulation, and the first thing we do is put them in hell?Classic humans.
hessart: ahemIf you're in the US, you can buy human neurons online at sciencellonline.com/en/human-neurons/
fgfarben: A huge vat of mercury metal has a lot of degrees of freedom. Is it conscious?
lambdaphagy: Given that no one understands how the mental relates to the physical in the first place, I have no idea how you would reach such a confident conclusion about the phenomenological status of 200k human neurons in a petri dish playing Doom.
grej: They built Warhammer 40k servitors
polynomial: "Petri dish rewrites React in Rust"
polynomial: Tragically this reference is all but lost generationally.
none2585: Sure would explain a lot
acuozzo: Born in 1988. It wasn't lost on me. Am I old now too?
samus: The same technology can also be used to force people to live with bodies engineered to make their existence a living hell. Similar things can be done with brain uploads.
firtoz: Would it be able to distinguish between violent or not? Would it be suffering or not? What exactly does it get in terms of signals? Does it even, "experience" anything? Is it even an "it"?