
Reading this I’m reminded of a short story - https://qntm.org/mmacevedo. The premise is that humans figured out how to simulate and run a brain in a computer. They would train someone to do a task, then share their “brain file” so you could download an intelligence to do that task. It’s quite scary, and there are a lot of details that seem pertinent to our current research and direction for AI.

1. You didn't have the rights to the model of your brain - "A series of landmark U.S. court decisions found that Acevedo did not have the right to control how his brain image was used".

2. The virtual people didn't like being a simulation - "most ... boot into a state of disorientation which is quickly replaced by terror and extreme panic"

3. People lie to the simulations to get them to cooperate more - "the ideal way to secure ... cooperation in workload tasks is to provide it with a "current date" in the second quarter of 2033."

4. The “virtual people” had to be constantly reset once they realized they were just there to perform a menial task. - "Although it initially performs to a very high standard, work quality drops within 200-300 subjective hours... This is much earlier than other industry-grade images created specifically for these tasks" ... "develops early-onset dementia at the age of 59 with ideal care, but is prone to a slew of more serious mental illnesses within a matter of 1–2 subjective years under heavier workloads"

It’s wild how some of these conversations with AI seem sentient or self-aware - even if just for moments at a time.

edit: Thanks to everyone who found the article!



It's interesting but also points out a flaw in a lot of people's thinking about this. Large language models have proven that AI doesn't need most aspects of personhood in order to be relatively general purpose.

Humans and animals have: a stream of consciousness, deeply tied to the body and integration of numerous senses, a survival imperative, episodic memories, emotions for regulation, full autonomy, rapid learning, high adaptability. Large language models have none of those things.

There is no reason to create these types of virtual hells for virtual people. Instead, build Star Trek-like computers (the ship's computer, not Data!) to order around.

If you make virtual/artificial people, give them the same respect and rights as everyone.


I think many people who argue that LLMs could already be sentient are slow to grasp how fundamental a difference it makes that current models lack a consistent stream of perceptual inputs producing real-time state changes.

To me, it seems more like we've frozen the language processing portion of a brain, put it on a lab table, and now everyone gets to take turns poking it with a cattle prod.


I talked about this some time ago with another person. But at what point do we stop associating things with consciousness? Most people consider the brain to be the seat of all that you are. But we also know how much the environment affects our "selves": sunlight, food, temperature, other people, education, external knowledge - they all contribute quite significantly to your consciousness. Going the opposite way, religious people may disagree and say the soul is what you actually are and nothing else matters.

We can't even decide how much, and of what, would constitute a person. If, like you said, the best AI right now is just a portion of the language processing part of our brain, it could still be sentient.

Not that I think LLMs are anything close to people or AGI. But the fact is that we can't concretely and absolutely refute AI sentience based on our current knowledge. The technology deserves respect and deep thought instead of dismissal as "glorified autocomplete". Nature needed billions of years to go from inert chemicals to sentience. We went from vacuum tubes to something resembling it in less than a century. Where can it go in the next century?


A dead brain isn't conscious; most agree with that. But all the neural connections are still there, so you could inspect those and probably calculate how the human would respond to things. I think the human is still dead, though, even if you can now "talk" to him.


Interesting to think about how we do use our mental models of people to predict how they would respond to things even after they're gone.


I believe consciousness exists on a sliding scale, so maybe sentience should too. This raises the question: at what point is something sentient/conscious enough that rights and ethics come into play? A "sliding scale of rights" sounds a little dubious and hard to pin down.


It raises other, even more troubling questions IMO:

"What is the distribution of human consciousness?"

"How do the most conscious virtual models compare to the least conscious humans?"

"If the most conscious virtual models are more conscious than the least conscious humans... should the virtual models have more rights? Should the humans have fewer? A mix of both?"


Replace AI with chickens or cows in those questions and they become questions that have disturbed many of us for a long time already.


Not to get too political, but since you mention rights it’s already political…

This is practically the same conversation many places are having about abortion. The difference is that we know a fertilized human egg eventually becomes a human; we just can’t agree when.


>This raises the question: at what point is something sentient/conscious enough that rights and ethics come into play?

At no objective point. Rights and ethics are a social construct, and as such can be granted to (and taken away from) some elite, a few people, most people, or even rocks and lizards.


Can we even refute that a rock is conscious? That philosophical zombies are possible? Does consciousness have any experimental basis beyond that we all say we feel it?


>Can we even refute that a rock is conscious?

Yes, unless we stretch the definition of conscious most people use beyond recognition.

At that point, though, it will be so remote from what we use the term for, that it could just be any random term, like doskoulard!

"Can we even refute that a rock is doskoulard?"


What definition would that be? What falsifiable definition of consciousness is even possible?


Let's go with the dictionary one for starters: "the state of being aware of and responsive to one's surroundings".

The rock is neither "aware" nor "responsive". It just sits there: a non-reactive set of minerals, lacking not just any capacity to react, but life itself.

Though that's overthinking it. Sometimes you don't need dedicated testing equipment to know something, just common sense.


Consciousness and responsiveness are orthogonal. Your dictionary would define the locked-in victims of apparently vegetative states as nonconscious. They are not.

Common sense is valuable, but it has a mixed scientific track record.


>Your dictionary would define the locked-in victims of apparently vegetative states as nonconscious

You can always find small exceptions to everything. But you know what I mean.

Except if your point is that, like the vegetative victims, the rock's brain is still alive.


Any definition for anything is tautologically true if you ignore exceptions.


Ackchyually, this is bigoted against all the electrons inside that rock. Subatomic particles deserve rights too! /s


Right. It's a great story but to me it's more of a commentary on modern Internet-era ethics than it is a prediction of the future.

It's highly unlikely that we'll be scanning, uploading, and booting up brains in the cloud any time soon. This isn't the direction technology is going. If we could, though, the author's spot on that there would be millions of people who would do some horrific things to those brains, and there would be trillions of dollars involved.

The whole attention economy is built around manipulating people's brains for profit and not really paying attention to how it harms them. The story is an allegory for that.


Out of curiosity, what would you say is the hardest constraint on this (brain replication) happening? Do you think it would be a limitation of imaging/scanning technology?


It's hard to say what the hardest constraint will be, at this point. Imaging and scanning are definitely hard obstacles; right now even computational power is a hard obstacle. There are 100 trillion synapses in the brain, none of which are simple. It's reasonable to assume you could need a KB (likely more, tbh) to represent each one faithfully (for things like neurotransmitter binding rates on both ends, neurotransmitter concentrations, general morphology, secondary factors like reuptake), none of which is constant. That means 100 petabytes just to represent the brain. Then you have to simulate it, probably at submillisecond resolution. So you'd have 100 petabytes of actively changing values every millisecond or less. That's 100k petaflops (100 exaflops) at a bare, bare, baaaare minimum - realistically more like a zettaflop.
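
(If you want to sanity-check that arithmetic, here's a quick back-of-envelope sketch in Python - every input is one of the order-of-magnitude guesses above, not a measurement:)

    # All values are order-of-magnitude assumptions from the paragraph above.
    synapses = 100e12          # ~100 trillion synapses
    bytes_per_synapse = 1e3    # ~1 KB of state per synapse (a guess)
    steps_per_second = 1e3     # submillisecond resolution -> >=1000 updates/s

    storage = synapses * bytes_per_synapse   # 1e17 bytes = 100 petabytes
    flops = storage * steps_per_second       # assuming >=1 op per byte per step

    print(f"{storage:.0e} bytes, {flops:.0e} FLOPS")
    # -> 1e+17 bytes, 1e+20 FLOPS (i.e. 100k petaflops = 100 exaflops)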

This ignores neurons, since there are only like 86 billion of them, but they could be so much more complex than synapses that they'd actually be the dominant factor. Who knows.

This also ignores glia, since most people don't know anything about glia and most people assume that they don't do much computationally. Of course, when we have all the neurons represented perfectly, I'm sure we'll discover the glia need to be in there, too. There are about as many glia as neurons (3x more in the cortex, the part that makes you you, colloquially), and I've never seen any estimate of how many connections they have [1].

Bottom line: we almost certainly need exaflops to simulate a replicated brain, maybe zettaflops to be safe. Even at current exponential growth rates [2] (and assuming brain simulation can be simply parallelized - it can't), that's like 45 years away. That sounds sorta soon, but I'm way more likely to be underestimating the scale of the problem than overestimating it, and that's just how long until we can even begin trying. How long until we can meaningfully use those zettaflops is much, much longer.

[1] I finished my PhD two months ago and my knowledge of glia is already outdated. We were taught glia outnumbered neurons 10-to-1: apparently this is no longer thought to be the case. https://en.wikipedia.org/wiki/Glia#Total_number

[2] https://en.wikipedia.org/wiki/FLOPS#/media/File:Supercompute...
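
(And the 45-year figure works out like this - note the doubling period here is just an assumption I've picked so the numbers land near that estimate, not a measured trend:)

    import math

    current_flops = 1e18    # ~1 exaflop, roughly today's top supercomputer
    target_flops = 1e21     # a zettaflop, the "to be safe" figure above
    doubling_years = 4.5    # assumed doubling period (chosen to match ~45y)

    doublings = math.log2(target_flops / current_flops)  # ~10 doublings
    print(f"~{doublings * doubling_years:.0f} years")    # -> ~45 years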


I remember reading a popular science article a while back: we managed to construct the complete neural connectome of C. elegans (a nematode) some years ago, and scientists were optimistic that we would be able to simulate it. The article was about how this had failed to materialize because we don't know how to properly model the neurons and, in particular, how they (and the synapses) evolve over time in response to stimuli.


What would you say is the biggest impediment towards building flying, talking unicorns with magic powers? Is it teaching the horses to talk?


This doesn't seem fair but it made me laugh a lot.


Yes, it has shown that we might progress towards AGI without ever having anything that is sentient. The difference could be nearly imperceptible from the outside.

Nonetheless, it raises a couple of other issues. We might never know whether we have achieved sentience or just the resemblance of sentience. Furthermore, many of the concerns about AGI might still become an issue even if the machine does not technically "think".


Lena by qntm? Very scary story.

https://qntm.org/mmacevedo


Reading it now .. dropped back in to say 'thanks!' ..

p.s. great story and the comments too! "the Rapture of the Nerds". priceless.


That would probably be Lena (https://qntm.org/mmacevedo).


Well luckily it looks like the current date is first quarter 2023, so no need for an existential crisis here!


This is also very similar to the plot of the game SOMA. There's actually a puzzle around instantiating a consciousness under the right circumstances so he'll give you a password.


Yeah I was going to post this as well, it's so similar I'd wager the story idea was stolen from SOMA.


There is a great novel on a related topic: Permutation City by Greg Egan.

The concept is similar: the protagonist uploads his consciousness to the digital world. A lot of interesting directions are explored there - time asynchronicity, the conflict between real-world and digital identities, and the basis of fundamental reality. Highly recommend!


Holden Karnofsky, the CEO of Open Philanthropy, has a blog called 'Cold Takes' where he explores a lot of these ideas. Specifically there's one post called 'Digital People Would Be An Even Bigger Deal' that talks about how this could be either very good or very bad: https://www.cold-takes.com/how-digital-people-could-change-t...

The short story obviously takes the very bad angle. But there's a lot of reason to believe it could be very good instead, as long as we protected basic human rights for digital people from the very outset -- and doing that is critical.


A good chunk of Black Mirror episodes deal with the ethics of simulating living human minds like this.


'deals with the ethics' is a creative way to describe a horror show


It's not always a horror show. The one where the two women in love live happily simulated ever after was really sweet.

But yeah, it's too gratuitously bleak for me. I feel like that's a crutch, a failure of creativity.


> The one where the two women in love live happily simulated ever after was really sweet.

I love it too, one of my favorite episodes of TV ever made. That being said, the ending wasn't all rosy. The bank of stored minds was pretty unsettling. The closest to a "good" ending I can recall was "Hang the DJ", the dating episode.


Shoot, there's no spoiler tags on HN...

There's a lot of reason to recommend Cory Doctorow's "Walkaway". Its handling of exactly this - brain scan + sim - is very much one of them.



