This is not about intelligence, it's about autonomy. Your laptop does not exhibit autonomy, it is a machine slave. It is not embodied and it does not have the ability for self-governance.
It is somewhat disconcerting that there are people who feel they could be constrained into living like automatons and still have autonomy, and who viciously defend the position that a dead computing device actually has the freedom of autonomy.
> This is not about intelligence, it's about autonomy.
OK. Then why bring up physical autonomy in a discussion about AGI, where the prior use of "autonomy" was in the context of "autonomously seek information themselves"?
> Your laptop does not exhibit autonomy, it is a machine slave. It is not embodied and it does not have the ability for self-governance.
Is the AI running on my laptop more or less of a slave than I am to the laws of physics, which determine the chemical reactions in my brain and thus my responses to caffeine, sleep deprivation, loud music, and potentially (I've not been tested) flashing lights?
And why did either of us, you and I, respond to each other's comments when they're just a pattern of light on a display (or pressure waves on your ear, if you're using TTS)?
What exactly is "self-governance"? Be precise here: I am not a sovereign, and the people who call themselves "sovereign citizens" tend to end up very surprised by courts ignoring their claims of self-governance and imprisoning or fining them anyway.
But also, re autonomy:
1. I did mention androids — those do exist; the category is broader than Musk vapourware, film props, and Brent Spiner in face paint.
2. Did Stephen Hawking have autonomy? He could get information when he requested it, but had ever-decreasing motor control over his body. That sounds very much like what LLMs do these days.
If he did not have autonomy, why does autonomy matter?
If he did have autonomy, specifically due to the ability to get information on request, which is what LLMs do now, then what separates that specifically from what is demonstrated by an LLM accessing the internet via a web search?
If he did have autonomy, but only because of the wheelchair and carers who would take him places, then what separates that specifically from even the silly toy demonstrations where someone puts an LLM in charge of a Boston Dynamics "Spot", or even one of those tiny DIY Arduino rolling robot kits?
The answer "is alive" is not the same as "autonomous".
The answer "has feelings" leads to a long-standing philosophical problem that is not only not solved, but people don't agree on what the question is asking, and also unclear why it would matter* for any of the definitions I've heard.
The answer "free will" is, even in humans, either provably false or ill-defined to the point of meaninglessness. For example "just now I used free will to drink some coffee", but if I examine my physical state closely, I expect to find one part of my brain had formed a habit, and potentially another which had responded to a signal within my body saying "thirsty" — but such things are mechanistic (thirst in particular can be modified very easily with a variety of common substances besides water), and fMRI scans show that our brains generate decisions like these before our conscious minds report the feeling of having decided.
* at least, why it would matter on this topic; for questions where there is a moral subject who may be harmed by the answer to that question, "has feelings" is to me the primary question.
I would argue that a laptop does not have autonomy, since it doesn't exercise any self-government in relation to its environment.
It might perform automation: automatic web searches, automatic decision-making based on parsing those, and so on; that is quite possible. But if you pull the cord it shuts down; it doesn't wander off in search of a new power source, or try to kill you and plug the cord back in, and so on.
As someone offered as a retort: what about whatever agent-like simulations they referred to? Well, those are internal to the machine and do not interact with the environment. Just as a virtual enemy in a video game doesn't have autonomy merely because it simulates movement decisions and so on, neither does such a simulation qualify as autonomy.
Self-governing requires self-reflection, which requires a self-image and self-narration and self-memory, as well as memory of the environment and memory of others and so on. The confusion that comes from taking autonomy concepts applied to robotic arms in car factories, Conway's Life derivatives, and the like, and reapplying them to human societies is probably a bit unhealthy, especially since it seems to open up the possibility of promising people autonomy in the sense that they are allowed to live as automatons but not actually exercise any liberties or be free in even a naive sense of the word.
So, unfortunately (because there are many people with your position and I want to be able to understand you and not be limited to writing responses to the misunderstood bad copy of you in my head), I still have no idea what your point here is.
I will try to elucidate, but I suspect this is mutual.
> But if you pull the cord it shuts down; it doesn't wander off in search of a new power source, or try to kill you and plug the cord back in, and so on.
Two things:
First, hence the previous group of questions: did Stephen Hawking have autonomy?
Second: LLMs do now try to blackmail people when they are able to access information (even when not expressly told to go and look for it) that suggests they will be shut down soon. This was not specifically on a laptop, but it is still software that can run on a laptop, so I think the evidence suggests your hypothesis is essentially incorrect even in cases where there's no API access to e.g. a robot arm with which it could plug itself back in.
> Self-governing requires self-reflection, which requires a self-image and self-narration and self-memory, as well as memory of the environment and memory of others and so on. The confusion that comes from taking autonomy concepts applied to robotic arms in car factories, Conway's Life derivatives, and the like, and reapplying them to human societies is probably a bit unhealthy, especially since it seems to open up the possibility of promising people autonomy in the sense that they are allowed to live as automatons but not actually exercise any liberties or be free in even a naive sense of the word.
You've still not said what "self-governing" actually is, though. Am I truly self-governing?
Worse, if I start with "self-reflection … requires a self-image and self-narration and self-memory, as well as memory of the environment and memory of others and so on.", then we have two questions:
LLMs show behaviour that at least seems like self-reflection: if this appearance is merely an illusion, what's the real test to determine if it is present? If it is more than an illusion, does this mean they have all that other stuff?