The "as far as we know" in premise one is hand-waving the critical point. We don't know how, or even whether, consciousness arises from purely physical systems. It may be physical yet inadequately replicated by computer systems (it may be non-computable). It may not be physical in the way we use that concept. We just don't know, and the fact that we don't know puts the conclusion in doubt.
Even if consciousness arises from a physical system, it doesn't follow that computation on a physical substrate can produce consciousness, even if it mimics some of the qualities we associate with consciousness.
The actual conclusion: "We've established that computers can be conscious, so LLMs might be"
The foregoing argument does not establish that computers can be conscious. It assumes consciousness arises from purely physical interactions AND that any physical process can be computed AND that computation can produce consciousness. It provides no reason to believe any of those things, and therefore no reason to believe computers or LLMs can be conscious.
Once you remove the unjustified assumptions, it boils down to this: LLMs can do something that looks a bit like what some conscious beings do, so they might be conscious.
> It assumes consciousness arises from purely physical interactions AND that any physical process can be computed AND that computation can produce consciousness
Yes, and with these assumptions, then "We've established that computers can be conscious, so LLMs might be".
Again, that is all that is being claimed. The one insisting the argument is anything more persuasive than that is you. You seem hung up on this claim, but need I remind you, it's only an existence proof; whether it's actually possible/probable/likely is an entirely different claim, and is not what is being claimed.
> Even if consciousness arises from a physical system, it doesn't follow that computation on a physical substrate can produce consciousness
> If consciousness arises from a physical system, a physical system cannot produce consciousness
> If A, then not A
This argument is a contradiction; it's incoherent.
If you're trying to draw a distinction between physical systems and computation on physical systems, well, that's a moot point, as every tested theory in physics uses mathematics that is computable.
Perhaps reality is not computable. If so then yes, that would mean computation alone will not produce the same kind of consciousness we have. That leaves open other kinds of consciousness, but that's another discussion.
If all that’s been shown is that LLM consciousness is not impossible then the point barely seems worth making. One tends to assume that interlocutors are trying to do more than point out the bleeding obvious.
No, he's right. That's the point I'm trying to make. And I agree it's barely worth saying, but so many people assume the opposite (that LLMs can't possibly be conscious) that we have to state the obvious.
It's actually not. The "sufficient complexity" part is doing all the heavy lifting. If a proposed physical system is not conscious, well, clearly it wasn't sufficiently complex.
Am I saying LLMs are sufficiently complex? No.
But can they be in principle? I mean, maybe? We'll have to see, right?
Saying something is possible is definitely an easier epistemic position to hold than saying something is impossible, I'll tell you that.