
In my understanding of the Chinese Room example, the resolution to the argument is that the *human* may not understand Chinese, but the *system as a whole* can be said to understand it.

With this in mind, I think asking whether ChatGPT *in and of itself* is "conscious" or has "agency" is sort of like asking if the speech center of a particular human's brain is "conscious" or has "agency": it's not really a question that makes sense, because the speech center of a brain is just one part of a densely interconnected system that we only interpret as a "mind" when considered in its totality.



Good point, that very much vibes with my thoughts on this matter. Lately, I've been contemplating the analogy between the role LLMs might take within society and the role the brain's language center* plays in human behavior. There's definitely a way in which we resemble these models, more than some might like to admit. The cleverness, but also the hallucinating, gaslighting and other such behaviors.

And on the other hand, any way you slice it, it seems to me LLMs - and software systems in general - necessarily lack intrinsic motivation. By definition, any goal such a system has can only be the goal of whoever designed it. Even if its maker decides "let it pick goals randomly", those randomly picked goals are just intermediate steps toward enacting the programmer's original goal. Robert Miles' YouTube videos on alignment shed light on these issues as well. For example: https://www.youtube.com/watch?v=hEUO6pjwFOo
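To make that concrete, here's a toy sketch (Python; the goal list and names are just invented for illustration). Even when the system "chooses" its own goal, the choosing procedure is something its programmer wrote:

    import random

    # Toy example: an "agent" that picks its own goals at random.
    # The space of possible goals and the act of sampling from it were
    # both decided by the programmer up front, so the random choice is
    # still just a step in enacting the programmer's design.
    GOALS = ["stack blocks", "sort numbers", "write a poem"]

    def pick_goal() -> str:
        return random.choice(GOALS)

    if __name__ == "__main__":
        print("Agent's goal for this run:", pick_goal())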

Another relevant source on these issues is the book "The Master and His Emissary", which discusses how the language center can, in some way (I'm simplifying a lot), fall prey to the illusion that "it" is the entirety of human consciousness.

* or at least some subsystems of that language center, it's important to remember how little we still understand of human cognition


What goals do we have that don't essentially boil down to whatever evolution, genetics, and our environment have sort of molded into us?


If you subscribe to a purely mechanistic world-view, i.e. computationalism, then yes. But that's a leap of faith I cannot justify taking. It's a matter of faith because, though we cannot logically exclude the possibility, it also doesn't follow necessarily from our experience of life, at least as far as I can see. Yes, many times throughout the ages, scientists have discovered mechanisms to explain things we had been convinced would always remain outside the purview of science.

But that doesn't mean everything will one day be explained. And one thing that remains unexplained is our consciousness. The problem of qualia. Free will. The problem of suffering. We just don't understand those. Maybe they are simply epiphenomena, maybe they are false problems. But when it comes to software systems, we know with certainty that they don't have free will, don't experience qualia, pain, hope, or I-ness.

Sure, it's a difference that disappears if one takes that leap of faith into computationalism. Then, to maintain integrity, one would have to show the same deference to these models as one shows to one's fellow humans. One would have to think hard about not over-working these already enslaved fellow beings. One would have to consider fighting for the rights of these models.


> Then, to maintain integrity, one would have to show the same deference to these models as one shows to one's fellow humans.

Except they’re not even remotely close to anything like human intelligence. As I wrote in another comment, they are very capable systems, to the point where in some ways they show some level of elementary understanding, but in many forms of reasoning they are utterly and completely incapable. Assigning them human-equivalent cognitive status is patently absurd. And yes, I am a physicalist, and I see no reason why a computer system could not achieve human-equivalent cognitive ability. These just aren’t that. They may be an important step towards it, though.


They might well be in a couple of years, once they become deeply integrated with symbolic techniques. It's already happening with plugins, chain-of-thought reasoning, self-reflection, etc. Soon the illusion will be very convincing and hard to shake off. Yet to me, nothing essential will have changed, and the idea of treating these systems as our equals will remain just as patently absurd as before. I expect this will make the physicalist position a much more fraught one, because it will impose hard limits on how those who subscribe to it can interact with and use these technologies.


I don't think that's the correct take for the room. Say the human speaks English. If you asked them what the conversation was about, and they had the full resources of the room at their disposal, could they tell you? No, because the room doesn't actually allow them to understand Chinese; it's just a symbol lookup table. The lookup table doesn't mean the system understands Chinese, only that it encodes relationships between symbols that can lead to a coherent output.
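In other words, on this reading the room is nothing more than a mapping from input symbols to output symbols. A minimal sketch of that idea (Python; the rulebook entries are invented and absurdly small, just to make the point):

    # The "rulebook" maps Chinese inputs directly to Chinese outputs.
    # Producing a coherent reply this way requires no grasp of what the
    # symbols mean - the operator just looks them up.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",
        "今天天气怎么样？": "今天天气很好。",
    }

    def room_reply(message: str) -> str:
        # Fall back to "please say that again" for unknown inputs.
        return RULEBOOK.get(message, "请再说一遍。")

    print(room_reply("你好吗？"))  # -> 我很好，谢谢。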


What if the human learns all the rules? Then the system as a whole is the human.



