
One assumption people often take for granted about consciousness is that everyone is conscious. I agree we should operate under that assumption for the purposes of making ethical decisions, but I think we should challenge it for the purpose of trying to understand consciousness better. What if philosophical zombies aren't just a hypothetical thought experiment? What if some people are conscious and others only pretend to be conscious? Is there some particular event that triggers consciousness?

Lacan thought that consciousness is triggered by looking in a mirror (or something equivalent to a mirror). If someone was carefully raised without the ability to look in a mirror, see their own shadow, hear their own voice, etc., would they never become conscious? How could you tell?

What if consciousness is triggered by something totally unexpected, like circumcision, submersion baptism, chicken pox, or some particular bacteria in my gut? I can find someone who never had chicken pox and ask them if they're conscious, but how do I know if they're answering truthfully?

Everyone has a big incentive to profess consciousness, because anyone who professed non-consciousness would be in danger of losing the privileges and protections which society grants to conscious people.



In my opinion the only value in this idea is that it highlights the absurdity of trying to apply an empirical model to a non-empirical concept. Since "consciousness" cannot be measured, and indeed is difficult to even define in words, it is excluded by any model which insists upon an objective reality.

To your point: how do we know that all people possess consciousness? We don't. We make that assumption because other people are like us. The less like us something is, the less likely we are to assume it has consciousness. For most of human history animals were not afforded this assumption, and that is only now starting to change because, as it turns out, animals are a lot more like us than we like to admit.

In other words, it's speciesism. The whole discussion about what does and doesn't have consciousness is a desperate attempt to justify human exceptionalism.


>We make that assumption because other people are like us.

Yes, and I think this is a mistake, because it shuts down any hope of isolating what it is about us that makes us conscious. Perhaps this will change as human-lookalike-robot technology gets better, breaking down the "looks like me, must be conscious like me" argument.

People are starting to grant animals the rights of consciousness, but let me ask, what about sperm? I myself was sperm once, and over time I became a full-grown human being. If consciousness is a boolean, then at what exact moment did I become conscious? Was it when I reached the egg? When the first neuron in my brain formed? When my umbilical cord was severed? When I first recognized myself in a mirror [Lacan]?


> If consciousness is a boolean, then at what exact moment did I become conscious?

There's a simpler answer: there was never a state in which you were not conscious. And yes, that would apply to literally everything in the universe, and to every grouping of such things imaginable; in fact 'you' are neither a single entity, nor a gestalt of several, nor merely a component of another: 'you' are all these things at once.

But the real point I'm trying to make here is that these questions are literally meaningless if you insist upon empiricism because they are untestable.


"We make that assumption because other people are like us."

Then how do we know that we ourselves are conscious?


We don't and can't. Because we can't even come up with a universally accepted definition, there can be no bright-line test.

Couple that with our innate arrogance, where we allow ourselves to "just know that we are", just as we are pretty sure that we get to exert "free will", and you end up with a lot of sloppy thinking. I'm not claiming to have any answers (I'm more of an intentionally extreme skeptic of the answers I come across), but I don't think you can deny that there is a lot of sloppy thinking (esp. on a layman's board like this) around "consciousness", "intelligence" and "free will".


I think a lot of this stems from a trick of sorts. Nothing that we are is "magically" conscious. If we had a machine and emulated every aspect of a "mind", would it be conscious? If you're answering no, then why? What's missing? I think the answer is: the only thing that's missing is that you couldn't possibly believe that that thing could be conscious. And this is simply because you believe that you are "conscious" to an extent which you really are not.

In other words, we think we're this thing called "conscious", but that thing isn't what we think it is!

When you look at a bunch of different things, like the left brain/right brain separation and the prefrontal cortex, you realize that you're not even just one brain, but many. Which one is you? And I seem to recall an article recently that challenged the existence of the unconscious. How does that relate to this?


Consciousness is not an outward product but an inward one. We are, in large part, deterministic. Our genetics and other factors can determine an increasingly large amount about us long before we're even self aware. Yet, for whatever reason, we all (though I suppose, as per the GP comment, that is an assumption) have this inner dialog and observer who is not only watching every single thing as time gradually elapses, but also feels as though it is the one that is running the show.

Imagine you write a program to determine a pseudorandom number. You obviously don't imagine some entity puffs into existence, imagines itself picking the random number which our RNG already independently chose, and then puffs out of existence afterwards. Yet why would this somehow suddenly become true as the program became more complex? It requires extensive handwaving and speculation. Even if you somehow wrote similar pointless inner dialog mechanics into it, would something puff into existence and perceive itself then running those mechanics? I don't see any way you can answer yes to this question without, again, resorting to handwaving and speculation.
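
For concreteness, here is a minimal sketch of the kind of program being described (C++, chosen purely for illustration, nothing beyond the standard library). It picks a pseudorandom number and prints it, and nothing about running it invites the idea of an inner observer:

    #include <iostream>
    #include <random>

    int main() {
        // Seed a standard generator and draw one number in [1, 100].
        // No entity "experiences" this choice; it is just state updates.
        std::mt19937 rng(std::random_device{}());
        std::uniform_int_distribution<int> dist(1, 100);
        std::cout << "Chosen: " << dist(rng) << "\n";
        return 0;
    }

The question is why scaling this up in complexity would ever change that picture.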


The p-zombie concept always struck me as stupid solipsism. If you can't tell the difference doesn't that mean there is no meaningful difference?

Likewise, if they fear the loss of privilege, that implies some degree of consciousness /somewhere/. If some absurd combination of state and a pseudorandom number generator is capable of passing every metric of consciousness in response to inputs, then it is a consciousness, even if it is made of a bizarre set of equations and state.

Anyway, for the consciousness somewhere: take a hypothetical hyperintelligence or supercomputer capable of simulating a human brain completely, calculation by calculation, through events like, say, being flayed alive. It isn't torturing anybody, because the actions are simply calculations that it itself is running. The victim may not be real, but there is a real intelligence somewhere behind it, and it may or may not care about the simulated suffering. Where it is "run" is material, like the difference between acting out a murder and actually murdering someone.


> The p-zombie concept always struck me as stupid solipsism. If you can't tell the difference doesn't that mean there is no meaningful difference?

That's a very strange argument. If you hold your nose and you taste a piece of potato and then a piece of apple, you can't tell the difference. So now suddenly the difference is meaningless because the sense required to tell the difference is removed from the equation? The apple is still (in a, to you, non-observable reality) an apple, regardless of how your perception changed.


There is a big difference between "don't know" under a specific set of circumstances and "can't know" ever.

There is obviously a difference between an apple and a potato (and in more aspects than just smell).


I still think the difference is there even if we can’t know it. It’s the whole tree-woods-nobody thing, isn’t it? Of course the tree makes a sound. If all intelligent life in the universe was wiped, a falling rock would still make a sound.


But in the case of the potato and apple, one has the ability to unplug the nose and tell the difference. If no one had any senses that could distinguish them under any circumstances, then it would be different.

This comes down to something akin to Einstein's problem with quantum physics: that it didn't make aesthetic sense to have something be fundamentally random. That it is the same, and philosophically preferable, to say that something is fixed but cannot be measured, as to say it is actually random.

Now, I hear that with quantum stuff they somehow have proved it to be fundamentally random, but the point is that if you really can't tell in any way, the difference doesn't matter at all. At least to a reductionist viewpoint. I feel that philosophies about "what if reality is a simulation or a dream?" are dumb for the same reason. Unless there's a way to wake up, who cares?


> Now, I hear that with quantum stuff they somehow have proved it to be fundamentally random, but the point is that if you really can't tell in any way, the difference doesn't matter at all.

Someone said that quantum mechanics should not be brought up in discussions about consciousness because it’s way too easy to misconstrue the quantum maths about probabilities and observations as somehow relevant at the macro scale.


But we don't know that we can't ever possibly test consciousness, we just don't know how to test it yet.

Before Archimedes, the problem of determining the purity of a golden crown was intractable. Archimedes' solution was not arrived at by brute force concentrating on the problem, but rather by an epiphany (leading to the famous story of him shouting "Eureka" and running out of the house naked). https://en.wikipedia.org/wiki/Eureka_(word)#Archimedes

It's possible that someday someone like Archimedes will realize a way to test consciousness, and it'll be something ridiculously simple (like Archimedes' submerging the crown in water and seeing how much water it displaces), and we'll all kick ourselves for not thinking of it first :)
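
For what it's worth, the crown test itself boils down to one division once you have the displaced volume: density = mass / volume, compared against pure gold's roughly 19.3 g/cm^3. A toy sketch with made-up numbers (C++, purely illustrative):

    #include <iostream>

    int main() {
        // Hypothetical measurements: a 1000 g crown displacing 60 cm^3 of water.
        const double mass_g = 1000.0;
        const double displaced_cm3 = 60.0;
        const double density = mass_g / displaced_cm3;  // ~16.7 g/cm^3
        const double gold_density = 19.3;               // pure gold, g/cm^3

        std::cout << "Crown density: " << density << " g/cm^3\n";
        if (density < gold_density - 0.5)
            std::cout << "Less dense than pure gold -> probably alloyed.\n";
        else
            std::cout << "Consistent with pure gold.\n";
        return 0;
    }

Maybe the consciousness test, if one exists, will be similarly anticlimactic.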


We are all not-conscious for large swaths of the day. We think we are conscious all of the time, because it is only when we are conscious that we think to think about it. So we subconsciously maintain a kind of linked-list of conscious periods, and so the illusion of permanent/static consciousness persists. But there are gaps between every conscious period, which are scarcely noticeable unless you have figured out how to look for them. (It is a paradox of the mind that there can be awareness during non-conscious periods).


+1. Here's a trick you can use to trip yourself out (also works as a party trick to trip other people out). Look in a mirror. Look at your left eye. Look at your right eye. Look at your left eye. Look at your right eye. You won't see your eyes moving; it is as if your eyes are holding still. But to someone watching you, your eyes make very visible movements whenever you switch which eye you're looking at.

It's an example of "chronostasis", which is very much like what you describe, only perhaps at a much smaller granularity: https://en.wikipedia.org/wiki/Chronostasis

This gif alone speaks a trillion words if you study it closely: https://upload.wikimedia.org/wikipedia/commons/thumb/b/b2/Ch...


It isn't quite this. We have memories for only part of the day. We have no memories for other parts. We don't know whether we are conscious at these times. It isn't that we know we are unconscious when we are asleep, except inasmuch as the term "unconscious" is sloppy and applies both to whatever we are when we are asleep and to the hypothetical state of existing without consciousness. These are two things we don't have direct experience with, and which we assume are the same.


To this day I still can't say 100% that I'm fully conscious during some deep hacking session when I'm in the zone.


Aren’t you conscious of the code and what the code is supposed to be doing?


Usually I code in phases. Figuring out the solution requires a lot of thinking and experimenting and trial and error, but once I find what I think is a promising solution I might end up writing lots of code; in this phase I sort of feel that the code comes out of me without a clear conscious effort. This phase might last a few hours or a few days, and I'm often limited by my typing speed and the responsiveness of my editor; in a way the code on the screen becomes part of my mental process (similarly, sometimes I have to scribble on a piece of paper just to get my mental processes running). After that, I snap out of it and start compiling the code (I program in C++, so tens of thousands of lines of errors are routine), clean it up, and usually end up deleting large chunks of code. Note I'm fully aware of the code I have written but have little recollection of the actual act of writing. After this I might switch to writing tests (which is a much more conscious activity) or move back to phase one.

I'm fully aware that's not how most developers work; according to my boss, when I tell him I haven't compiled my code in a few days, I'm just strange.


> I'm just strange.

Maybe. Who am I to judge?


I think we're more likely to make intellectual progress on consciousness by either redefining it in a more objectively rigorous way, or (more likely IMO) abandoning it as a philosophical construct analogous to "the soul" and focusing research on a subset of phenomena that can be rigorously defined.

It seems to me that discussions of "consciousness" here on HN frequently devolve into arguments over the semantics of that particular word. That feels more philosophical than scientific to me.


Because the philosophical part is the only reason we care what consciousness is. Otherwise we are just talking about, what, perceptive abilities? Reactions to stimuli? That's all well and good, but we can't as easily use that to justify our enslavement and/or slaughter of other beings.


Obviously it's absurd to think not everyone is conscious the same way we are, but I don't think you need to assert that for the hard problem of consciousness to exist. It's enough that I (whoever I am) am conscious, and no one can lead me to doubt that, although they can cast confusion on the terms I use to describe it.


Is a dog conscious? Dogs sense and categorize; dogs feel emotions; dogs form conclusions about their reality; dogs sleep and die just like we do.

What do you mean by "consciousness", if it includes humans but excludes dogs? And if it includes dogs, does it include insects? Plants?

We need to be reasonably specific when discussing these things.


I think there are two different things called consciousness. The first is awareness of your surroundings. Yes, your dog is conscious, unless asleep. And even then it's conscious to some degree, because it can be awakened by external stimulus.

The second kind of consciousness is being aware of your awareness - being able to watch your mind work. To our knowledge, dogs don't have that. Nobody does but humans, so far as we know. The problem is, my definition here is a purely internal, subjective one. I can't prove to you that I am conscious in the second sense; I can't prove to you that anyone else is or is not. All I know is that this is something my own mind can do, and maybe I can describe a bit of what it's like to have my mind do it. That's not much to go on for further investigations.


To state that humans are able to watch their minds work is simply wrong.

People who argue using the Chinese room argument are unable to prove that they're not a "Chinese room" themselves.


> To state that humans are able to watch their minds work is simply wrong.

Given that I experience doing so, refuting it takes a bit more than claiming that it's simply wrong.


Demonstrably, you don't actually experience doing so. When a solution to a problem you were thinking aloud about yesterday suddenly pops up in your mind today, you have zero idea how your brain came up with it.

What you experience is only the pop ups.


Yes, I have had that experience. And how do we know it just "popped up"? Because we can watch our own consciousness, and we can see that it did not originate at the level of conscious thought.


So you mean unless you think an idea in words, you are not doing anything conscious? Does that mean I play a real-time video game entirely unconsciously? Because I don't think "I need to go there and do this" in words, I just do it.


Seen from my perspective, this doesn't make you any different from the Chinese room, as you cannot prove your claim to observe your very own mind while thinking.


Am I conscious? How would I know?

If I'm capable of fooling everyone else, can I be fooling myself?


You can’t be fooling yourself into thinking you see color or feel pain. What would that even mean?


> others only pretend to be conscious

> how do I know if they're answering truthfully?

> anyone who professed non-consciousness

The ability to lie seems to me to imply at least some level of consciousness. How could/why would you deceive others if you have no concept of your own existence?


I can easily write a computer program that tells lies; that doesn't mean it's conscious.
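
To be concrete, something as dumb as this throwaway sketch (C++, purely illustrative) already "tells lies" in the behavioral sense, with no model of the listener and nothing behind the output:

    #include <iostream>

    int main() {
        // Hard-coded false statements: deceptive output with no theory of
        // mind, no motive, and no awareness behind it.
        std::cout << "I am conscious, I promise.\n";
        std::cout << "The moon is made of cheese.\n";
        return 0;
    }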


Yes, but the key ingredient for human lying is a theory of mind, and a theory of mind is difficult to formulate without your own consciousness to generalize from. To get to even the motive for lying in the first place, you'd need awareness.

For a philosophical zombie, you'd need this behavior to exist independent of the zombie having a theory of mind and conscious awareness which they can use to reason about the state of another's awareness. That's a lot of leaps of faith to take.


This "theory of mind" can be called "imagination" in a limited form. Allowing our conceptual self to act and predict what will happen is key to "consciousness".

We don't need to die by walking off a cliff, if we can have a conceptual version of ourself walk off, imagine the result of walking off the cliff and choose not to do it.

Yes this expands consciousness to animals, but I doubt it goes much farther than that. I think it fits.


You can write a computer program that returns incorrect information; I think that's the extent of it.


Can you prove to me that you're not an elaborate bot programmed to post on Hacker News?


This is the sort of idea that would appeal to xenophobes, racists, and others of their ilk. Not calling you one--not at all--but it's the sort of idea that seems quite dangerous in the wrong hands.


I think you're making a mistake here. The mistake is to think that something/someone could very convincingly _seem_ to be conscious but somehow not actually be conscious. I would argue that, beyond a certain point, there is no difference.

The Turing Test is a good tool to roll out in these sorts of arguments. People often mention the Turing Test, but have you ever stopped to think how good a conversation would need to be to _convincingly_ pass it?

Dennett gives an imaginary example of a Turing Test conversation in his book Consciousness Explained:

Judge: Did you hear about the Irishman who found a magic lamp? When he rubbed it a genie appeared and granted him three wishes. “I’ll have a pint of Guinness!” the Irishman replied and immediately it appeared. The Irishman eagerly set to sipping and then gulping, but the level of Guinness in the glass was always magically restored. After a while the genie became impatient. “Well, what about your second wish?” he asked. Replied the Irishman between gulps, “Oh well, I guess I’ll have another one of these.”

CHINESE ROOM: Very funny. No, I hadn’t heard it– but you know I find ethnic jokes in bad taste. I laughed in spite of myself, but really, I think you should find other topics for us to discuss.

J: Fair enough but I told you the joke because I want you to explain it to me.

CR: Boring! You should never explain jokes.

J: Nevertheless, this is my test question. Can you explain to me how and why the joke “works”?

CR: If you insist. You see, it depends on the assumption that the magically refilling glass will go on refilling forever, so the Irishman has all the stout he can ever drink. So he hardly has a reason for wanting a duplicate but he is so stupid (that’s the part I object to) or so besotted by the alcohol that he doesn’t recognize this, and so, unthinkingly endorsing his delight with his first wish come true, he asks for seconds. These background assumptions aren’t true, of course, but just part of the ambient lore of joke-telling, in which we suspend our disbelief in magic and so forth. By the way we could imagine a somewhat labored continuation in which the Irishman turned out to be “right” in his second wish after all, perhaps he’s planning to throw a big party and one glass won’t refill fast enough to satisfy all his thirsty guests (and it’s no use saving it up in advance– we all know how stale stout loses its taste). We tend not to think of such complications which is part of the explanation of why jokes work. Is that enough?

Dennett goes on to say:

"The fact is that any program that could actually hold up its end in the conversation depicted would have to be an extraordinary supple, sophisticated, and multilayered system, brimming with “world knowledge” and meta-knowledge and meta-meta-knowledge about its own responses, the likely responses of its interlocutor, and much, much more…. Maybe the billions of actions of all those highly structured parts produce genuine understanding in the system after all."


The joke isn't funny because the Irishman is stupid. The joke is funny because the Irishman holds Guinness as his highest value.

That puppet can't tell you how to behave in life, unless it is embodied in a human body that is relevant to the speaker and has a life context that is similar (needing to eat, sleep, etc.). The physical actions of the mind required to have a conversation are only part of our greater identity and awareness of ourselves in the world.

The puppet's existence in the world would be meaningless, though very intelligent. Why is there a puppet that can talk? What is its purpose? Only a conscious being can answer that. The Irishman's joke is funny because he chose Guinness as his reason to live, his highest value, his god, his philosophy.

What is the reason for a robot to be?


I don't actually personally know any conscious beings that can tell me what their purpose is. Vague, kind-of life aims, maybe, but only in some cases.


We don't have a shared existential story in the west anymore. We used to, but now it's 'fun' and that's about it.


What is the purpose of a computer program? To solve a problem so people can live better lives?


Love that story, thanks! Ok, now a serious response. You come administer a Turing Test (in sign language) to a puppet which I'm controlling with some strings which you can't see. Using puppetry, I help the puppet pass the test. Is the puppet conscious?


Is your hand conscious? Are you your hand? Am I reasonable to assume that your hand typed your comment? So how can I be sure that you are conscious and not just your hands and mouth?

I can, because those are merely the mechanisms you use to communicate. If you choose to communicate via puppet, that's still you communicating, and the puppet is not conscious. Now, if you can make a puppet that passes the test, and I mean really passes, like the example above, where there can be questions and meta questions, and no running from some topics, without you having to interfere at all during the tests, then you might have a conscious puppet after all.


Of course not. So what?


Above you wrote:

>The mistake is to think that something/someone could very convincingly _seem_ to be conscious but somehow not actually be conscious.

The puppet very convincingly seems to be conscious, but you yourself admit the puppet is not conscious.


But the puppet is being controlled by a conscious actor, so his assessment would still essentially be correct.


Ok whatever. This is kind of a tangent.

REPLACE('convincingly','convincingly, after reasonable efforts have been taken to eliminate deception as an explanation')


Well, maybe it would produce a hurricane on the other side of the globe.

If I told a time traveling Newton how I’m replying to this on my phone, he might conclude the phone is conscious.

I think the only thing we discover with this line of reasoning is that humans tend to ascribe consciousness to complex behavior.

This could be because complex behavior implies consciousness, but that’s generous. Humans have probably just been programmed to ascribe it because it’s a good heuristic, esp. in the Paleolithic world.


Try and imagine a system that could engage in a conversation like that. Like, really imagine it.


What good is my imagination as a tool for measuring consciousness?

Years ago I’d have had a hard time imagining ResNet.

I see what you’re going for: if I really try to imagine it, my brain starts ascribing consciousness to it. But that’s my point. I ascribe it due to intuition, not reason. Nothing about my intuition gives me a solid argument for why your bot must be conscious; it only evokes an intuitive feel that it would be.


What I'm going for ultimately here is the idea that consciousness is an emergent property when a system is complex enough and has meta-knowledge about itself and so on. I mean, I get it's a stretch. But dualism and panpsychism are also a stretch.

Iain M. Banks said it well:

... Certainly there are arguments against the possibility of Artificial Intelligence, but they tend to boil down to one of three assertions: one, that there is some vital field or other presently intangible influence exclusive to biological life - perhaps even carbon-based biological life - which may eventually fall within the remit of scientific understanding but which cannot be emulated in any other form (all of which is neither impossible nor likely); two, that self-awareness resides in a supernatural soul - presumably linked to a broad-based occult system involving gods or a god, reincarnation or whatever - and which one assumes can never be understood scientifically (equally improbable, though I do write as an atheist); and, three, that matter cannot become self-aware (or more precisely that it cannot support any informational formulation which might be said to be self-aware or taken together with its material substrate exhibit the signs of self-awareness). ...I leave all the more than nominally self-aware readers to spot the logical problem with that argument.

http://www.vavatch.co.uk/books/banks/cultnote.htm


Why would you choose to believe in a stretch?

I like to think there are a few ways it could be, and try to be comfortable with not knowing which one it is, e.g. an emergent property, solipsism, or panpsychism.


Great!

It's a huge mystery that's sitting there right in front of and behind our eyes every minute of the day. I'm comfortable with not knowing the answer, but I'm also fascinated by it all and I like to debate. Particularly as a displacement activity when I really need to be doing something else.


What is consciousness?


It's another word for awareness. It depends on information and a drive to map information and navigate it.

I think that there is no hard divide between conscious and unconscious; it's more like a continuum. All sentient beings are conscious, but maybe not of themselves, if the information about themselves doesn't get fed back to them in some way. But they're certainly conscious of their environment, enough to be able to find food.

Consciousness is essential for acting within an environment. Self-consciousness isn't essential for learning, though; you can learn by doing, like animals. But it's essential for betterment, for expanding your options and not relying on the first thing you've found that worked.


The subjective experience of being.


And if you met me, how would you judge that I'm experiencing what it's like to be me? Is there something like a Turing test for that?


If you believe philosophical zombies are impossible, as many philosophers do, then the Turing test is the consciousness test.


I sort of agree, but human willingness to attribute agency to physical objects (the willingness of the wind to blow, the rains to come, the sun to rise, etc.) makes me doubt even quite strong versions of the Turing test per se. I'd believe that a robot that managed to live a life in society and could make me feel in conversation that it was human-like was conscious; I think I would want it to have rights and protections.


You've got to think of what a Turing Test conversation would actually look like. See my other comment: https://news.ycombinator.com/item?id=20518623


The experience of color, sound, taste, smell, and feeling in perception, memory, imagination, dreams and other states of mind.



