Empathy does not lie in its perception on receipt but in its inception as a feeling. It is fundamentally a manifestation of the modalities enabled in shared experience. As such, it is impossible to the extent that our experiences are not compatible with those of an intelligence that does not put emphasis on lived context, trying to substitute for it with offline batch learning. Understanding is possible in this relationship, but it should not be confused with empathy or compassion.
I happen to agree with what you said. (Paraphrasing: A machine cannot have "real empathy" because a machine cannot "feel" in general.) But I think you're arguing a different point from the grandparent's. rurp said:
> Someone who sees a person stranded on the side of a road might feel for them and stop to lend a hand. ChatGPT will never do that [...]
Now, on the one hand that's because ChatGPT cannot "see a person" nor "stop [the car]"; it communicates only by text-in, text-out. (Although it's easy to input text describing that situation and see what text ChatGPT outputs!) GP says it's also because "the purpose of ChatGPT is to make immense amounts of money and power for its owners [, not to help others]." I took that to mean that GP was saying that even if a LLM was controlling a car and was able to see a person in trouble (or a tortoise on its back baking in the sun, or whatever), then it still would not stop to help. (Why? Because it wouldn't empathize. Why? Because it wasn't created to empathize.)
I take GP to be arguing that the LLM would not help; whereas I take you to be arguing that even if the LLM helped, it would by definition not be doing so out of empathy. Rather, it would be "helping"[1] because the numbers forced it to. I happen to agree with that position, but I think it's significantly different from GP's.
Btw, I highly recommend Geoffrey Jefferson's essay "The Mind of Mechanical Man" (1949) as a very clear exposition of the conservative position here.
[1] — One could certainly argue that the notions of "help" and "harm" likewise don't apply to non-intentional mechanistic forces. But here I'm just using the word "helping" as a kind of shorthand for "executing actions that caused better-than-previously-predicted outcomes for the stranded person," regardless of intentionality. That shorthand requires only that the reader is willing to believe in cause-and-effect for the purposes of this thread. :)
Yes, I am not in fact expanding on GP's argument but attacking the premise etymologically. Pathos is not learnt. When I clutch my legs at the sight of someone getting kicked in the balls, that’s empathy. When, as now, I write about it, it’s not, even in my case where I have lived experience of it. More sophisticated kinds of empathy build on the foundations of these gut-driven ones. Thank you for the reading recommendation; I will look for it.
> As such, it is impossible to the extent that our experiences are not compatible with those of an intelligence that does not put emphasis on lived context, trying to substitute for it with offline batch learning.
Conversely, that means empathy is possible to the extent that our experiences are compatible with those of an AI. That is precisely what's under consideration here, and you have not shown that it is zero.
> an intelligence that does not put emphasis on lived context trying to substitute for it with offline batch learning.
Will you change your tune when online learning comes along?
Lived context is, to me, more than online learning. I admit I am not versed enough in the space to anticipate the nature of context in the case of online learning, so yes, I may indeed change my tune if it somehow makes learning more of an experience rather than an education. My understanding is that it won’t. I have not proven, but argued, that experience compatibility is zero, to the extent that an LLM does not experience anything. I am happy to accept alternative viewpoints, and accordingly that someone may perceive something as a sign of empathy whether it is or not.