
The "as far as we know" in premise one is hand-waving the critical point. We don't know how, or even whether, consciousness arises from purely physical systems. It may be physical but inadequately replicated by computer systems (it may be non-computable). It may not be physical in the way we use that concept. We just don't know, and the fact that we don't know puts the conclusion in doubt.

Even if consciousness arises from a physical system, it doesn't follow that computation on a physical substrate can produce consciousness, even if it mimics some of the qualities we associate with consciousness.




Rather, the fact we don't know means...

> you can't trivially rule out LLMs being conscious.

Which is the actual conclusion. That's all.

The one trying to read more into it by claiming a stronger conclusion than warranted (claiming this is all "doubtful") is you.

> Even if consciousness arises from a physical system, it doesn't follow that computation on a physical substrate can produce consciousness

This is incoherent.


The actual conclusion: "We've established that computers can be conscious, so LLMs might be"

The foregoing argument does not establish that computers can be conscious. It assumes consciousness arises from purely physical interactions AND that any physical process can be computed AND that computation can produce consciousness. It provides no reason to believe any of those things, and therefore no reason to believe computers or LLMs can be conscious.

Once you remove the unjustified assumptions, it boils down to this: LLMs can do something that looks a bit like what some conscious beings do, so they might be conscious.

The final paragraph is not incoherent.


> It assumes consciousness arises from purely physical interactions AND that any physical process can be computed AND that computation can produce consciousness

Yes, and with these assumptions, then "We've established that computers can be conscious, so LLMs might be".

Again, that is all that is being claimed. The one insisting the argument is anything more persuasive than that is you. You seem hung up on this claim, but need I remind you it's only a claim of bare possibility; whether it's actually probable or likely is an entirely different claim, and is not what is being made.

> Even if consciousness arises from a physical system, it doesn't follow that computation on a physical substrate can produce consciousness

> If consciousness arises from a physical system, a physical system cannot produce consciousness

> If A, then not A

This argument is a contradiction; it's incoherent.

If you're trying to draw a distinction between physical systems and computation on physical systems, well, that's a moot point, as every tested theory in physics uses mathematics that is computable.

Perhaps reality is not computable. If so then yes, that would mean computation alone will not produce the same kind of consciousness we have. That leaves open other kinds of consciousness, but that's another discussion.


If all that’s been shown is that LLM consciousness is not impossible, then the point barely seems worth making. One tends to assume that interlocutors are trying to do more than point out the bleeding obvious.


Yet many argue LLM consciousness is impossible.

I'm not sure you're following what's being discussed in this thread.


I think perhaps the confusion is in the other direction, but I’ve learned not to hold unmerited high-handedness against autists.


No, he's right. That was the point I was trying to make. And I agree it's barely worth saying, but so many people assume the opposite (that LLMs can't possibly be conscious) that we have to state the obvious.


What we have is:

For all consciousnesses that exist, they arose from a (complex) physical system.

What you are positing is, effectively:

Any physical system (of sufficient complexity) could potentially become conscious.

This is a classic logical error: converting "all A are B" into "all B are A". It's basically the same error as "All men are mortal. Therefore, all mortals are men."
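The alleged invalid step from the first statement to the second can be sketched in predicate logic (the predicate names here are illustrative, not from the thread):

```latex
% What we have: every known consciousness arose from a physical system.
\forall x\,\bigl(\mathrm{Conscious}(x) \rightarrow \mathrm{Physical}(x)\bigr)

% The converted claim reverses the arrow: any (sufficiently complex)
% physical system could be conscious.
\forall x\,\bigl(\mathrm{Physical}(x) \rightarrow \mathrm{Conscious}(x)\bigr)

% The second does not follow from the first: "all A are B"
% does not yield "all B are A" (illicit conversion).
```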


It's actually not. The "sufficient complexity" part is doing all the heavy lifting. If a proposed physical system is not conscious, well, clearly it's not sufficiently complex.

Am I saying LLMs are sufficiently complex? No.

But can they be in principle? I mean, maybe? We'll have to see right?

Saying something is possible is definitely an easier epistemic position to hold than saying something is impossible, I'll tell you that.


No I'm not saying that. I'm saying that we can't trivially rule out the possibility that any sufficiently complex physical system is conscious.

You might think this is obvious, and I agree. Yet there are many people who argue that computers can never be conscious.


I agree, we don't know if the processes necessary to produce consciousness are computable. They could be, but we don't know.

> even if it mimics

Crucially even if it appears indistinguishable. It's obvious where that could lead, isn't it?



