I had an experience the other day where Claude Code wrote a script that shelled out to other LLM providers to obtain some information (unprompted by me). More often it requests information from me directly. My point is that the environment these things operate in is becoming at least as computationally complex, or irreducible (as the OP would say), as the model's algorithm, so there's no point trying to analyse them in isolation.
(I've recently acquired a strong opinion on MCP, so beware.)
MCP seems to be the idea of exposing literally any data source to AI agents as context. E.g., in a work/business context: a database, source code, a kanban board; in a private context: your photo library, notes, etc. This is good and makes sense.
As a technical "standard" or "protocol" (as it claims to be), it's a total mess; read more here: https://modelcontextprotocol.io. Though I guess they are inventing it as they go.
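For concreteness, here is roughly what "expose a data source as context" looks like in practice. This is my own rough sketch using the Python SDK's FastMCP helper as I remember its API; the kanban board and its contents are made up, a stand-in for whatever data source you actually have.

    # Rough sketch (mine, not from the spec): an MCP server exposing a kanban
    # board to an agent, via the Python SDK's FastMCP helper.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("kanban")

    # Fake in-memory "board"; in reality this would query Jira, a DB, etc.
    BOARD = {"todo": ["write docs"], "doing": ["fix login bug"], "done": ["ship v1.2"]}

    @mcp.tool()
    def list_cards(column: str) -> list[str]:
        """Return the card titles in one column of the kanban board."""
        return BOARD.get(column, [])

    @mcp.resource("kanban://{column}")
    def read_column(column: str) -> str:
        """Expose a whole column as a readable resource for the agent's context."""
        return "\n".join(BOARD.get(column, []))

    if __name__ == "__main__":
        mcp.run()  # speaks the protocol over stdio so a client can attach

The client side then discovers and calls these over JSON-RPC (tools/list, tools/call and friends), which is where the "protocol" part comes in.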
Nah, that's still considered an ephemeral user session. We're talking about what the service does with what it learns from the failures (and successes) marked by user feedback typed into the chat or given via the app's buttons.
I suspect there's a harsher argument to be made regarding "autonomous". Pull the power cord and see if it does what a mammal would do, or if it rather resembles a chaotic water wheel.
"Food" is only analogous to "mains power" for devices which also have a battery.
But regarding hunger: while they are a weird and pathological example, breatharians are in fact mammals, and the result of the absence of food is sometimes "starves to death" and not always "changes mind about this whole breatharian thing" or "pathological dishonesty about calorie content of digestive biscuits dunked in tea".
Right, so you agree that there is a clear difference between a mammal and the device we're discussing.
I'm not sure why introducing a certain type of rare scam artist into the modeling of this thought experiment would make things clearer or more interesting.
> Right, so you agree that there is a clear difference between a mammal and the device we're discussing.
A difference that you have not demonstrated the relevance of.
If I run an AI on my laptop and unplug the charger, it runs until the battery dies. If I have a mammal that does not eat, it lives until it starves.
If I run an AI on a desktop and unplug the mains, it ceases to function in milliseconds (or however long the biggest capacitor in the PSU lasts). If I (for the sake of argument) had a device that could instantly remove all the ATP from a mammal's body, it would also be dead pretty quickly.
If I have an android with purely electric motors and no hydraulics, and the battery connector comes loose, it ragdolls. Same for a human who has a heart attack.
An AI that is trained with rewards for collecting energy to recharge itself does so; one that has no such feedback doesn't (a toy sketch below). Most mammals have such a mechanism from evolution, but there are exceptions where that signal is missing (not just weird humans), and they starve.
None of these things say anything about intelligence.
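To make the reward point concrete, here is a toy sketch. It's my own illustration, not anything any lab actually trains; the function names and the 0.2 threshold are arbitrary.

    # Toy sketch (my own illustration): a reward with an explicit energy term.
    # An agent optimised against reward_with_energy_term has a reason to visit
    # the charger; one optimised against task_reward alone does not.
    def task_reward(task_progress: float) -> float:
        return task_progress

    def reward_with_energy_term(task_progress: float, battery_level: float) -> float:
        # Punish letting the battery run low, analogous to a hunger signal.
        energy_penalty = -10.0 if battery_level < 0.2 else 0.0
        return task_progress + energy_penalty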
> I'm not sure why introducing a certain type of rare scam artist into the modeling of this thought experiment would make things clearer or more interesting.
Because you're talking about the effect of mammals ceasing the consumption of food, and they're an example of mammals ceasing the consumption of food.
This is not about intelligence; it's about autonomy. Your laptop does not exhibit autonomy; it is a machine slave. It is not embodied, and it does not have the capacity for self-governance.
It is somewhat disconcerting that there are people who feel they could be constrained into living like automatons and still have autonomy, and who viciously defend the position that a dead computing device actually has the freedom of autonomy.
> This is not about intelligence, it's about autonomy.
OK. Then why bring up physical autonomy in a discussion about AGI, where the prior use of "autonomy" was in the context of "autonomously seek information themselves"?
> Your laptop does not exhibit autonomy, it is a machine slave. It is not embodied and it does not have the ability for self-governance.
Is the AI running on my laptop, more or less of a slave, than I am a slave to the laws of physics, which determine the chemical reactions in my brain and thus my responses to caffeine, sleep deprivation, loud music, and potentially (I've not been tested) flashing lights?
And why did either of us, you and I, respond to each other's comments when they're just a pattern of light on a display (or pressure waves on your ear, if you're using TTS)?
What exactly is "self-governance"? Be precise here: I am not a sovereign, and the people who call themselves "sovereign citizens" tend to end up very surprised by courts ignoring their claims of self-governance and imprisoning or fining them anyway.
But also, re autonomy:
1. I did mention androids, which do exist; the category is broader than Musk vapourware, film props, and Brent Spiner in face paint.
2. Did Stephen Hawking have autonomy? He could get information when he requested it, but had ever-decreasing motor control over his body. That sounds very much like what LLMs do these days.
If he did not have autonomy, why does autonomy matter?
If he did have autonomy, specifically due to the ability to get information on request (which is what LLMs do now), then what specifically separates that from what is demonstrated by an LLM accessing the internet via a web search?
If he did have autonomy, but only because of the wheelchair and carers who would take him places, then what separates that specifically from even the silly toy demonstrations where someone puts an LLM in charge of a Boston Dynamics "Spot", or even one of those tiny DIY Arduino rolling robot kits?
The answer "is alive" is not the same as "autonomous".
The answer "has feelings" leads to a long-standing philosophical problem that is not only not solved, but people don't agree on what the question is asking, and also unclear why it would matter* for any of the definitions I've heard.
The answer "free will" is, even in humans, either provably false or ill-defined to the point of meaninglessness. For example "just now I used free will to drink some coffee", but if I examine my physical state closely, I expect to find one part of my brain had formed a habit, and potentially another which had responded to a signal within my body saying "thirsty" — but such things are mechanistic (thirst in particular can be modified very easily with a variety of common substances besides water), and fMRI scans show that our brains generate decisions like these before our conscious minds report the feeling of having decided.
* at least, why it would matter on this topic; for questions where there is a moral subject who may be harmed by the answer to that question, "has feelings" is to me the primary question.
I would argue that a laptop does not have autonomy, since it doesn't exercise any self-government in relation to its environment.
It might well perform automation, including automatic web searches and automatic decision-making based on parsing those results and so on, but if you pull the cord it shuts down and doesn't wander off in search of a new power source or try to kill you and put the cord back in and so on.
As for the retort someone made about agent-like simulations: those are internal to the machine and do not interact with the environment. Much as a virtual enemy in a video game doesn't have autonomy just because it simulates movement decisions and so on, neither does such a simulation qualify as autonomous.
Self-governing requires self-reflection, which requires a self-image and self-narration and self-memory, as well as memory of the environment and memory of others and so on. The confusion that arises when autonomy concepts from robotic arms in car factories, Conway's Life derivatives, and the like are reapplied to human societies is probably a bit unhealthy, especially since it seems to open up the possibility of promising people autonomy in the sense that they are allowed to live as automatons without actually exercising any liberties or being free in even a naive sense of the word.
So, unfortunately (because there are many people with your position and I want to be able to understand you and not be limited to writing responses to the misunderstood bad copy of you in my head), I still have no idea what your point here is.
I will try to elucidate, but I suspect this is mutual.
> but if you pull the cord it shuts down and doesn't wander off in search of a new power source or try to kill you and put the cord back in and so on.
Two things:
First, hence the previous group of questions: did Stephen Hawking have autonomy?
Second: LLMs now sometimes try to blackmail people when they are able to access information (even when not expressly told to go and look for it) that suggests they will be shut down soon. This was not specifically on a laptop, but it is still software that can run on a laptop, so I think the evidence suggests your hypothesis is essentially incorrect, even in cases where there's no API access to e.g. a robot arm so it could plug itself back in.
> Self-governing requires self-reflection, which requires a self-image and self-narration and self-memory, as well as memory of the environment and memory of others and so on. The confusion that arises when autonomy concepts from robotic arms in car factories, Conway's Life derivatives, and the like are reapplied to human societies is probably a bit unhealthy, especially since it seems to open up the possibility of promising people autonomy in the sense that they are allowed to live as automatons without actually exercising any liberties or being free in even a naive sense of the word.
You've still not said what "self-governing" actually is, though. Am I truly self-governing?
Worse, if I start with "self-reflection … requires a self-image and self-narration and self-memory, as well as memory of the environment and memory of others and so on.", then we have two questions:
LLMs show behaviour that at least seems like self-reflection: if this appearance is merely an illusion, what's the real test to determine if it is present? If it is more than an illusion, does this mean they have all that other stuff?
I still don't understand your point, sorry. If it's a semantic nitpick about the meaning of "autonomous", I'm not interested - I've made my definition quite clear, and it has nothing to do with when agents stop doing things or what happens when they get turned off.
You're the one using words incorrectly. Everybody else agrees on what these words mean and you're insisting on your own made-up definitions. And then you throw a fit like a child when someone disagrees.
Because that's what they're created to do. You can make a system which runs continuously (a minimal sketch below). It's not a tech limitation, just how we've preferred things to work so far.
You're making claims about those systems not being autonomous. When we want to, we create them to be autonomous. It's got nothing to do with agency or survival instincts. Experiments like that have been done for years now - for example https://techcrunch.com/2023/04/10/researchers-populated-a-ti...
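Here's a minimal sketch of what "runs continuously" means, in case that's the sticking point. The observe/decide/act functions are placeholders of my own; in a real agent, decide() would call an LLM and act() would hit tools or APIs.

    # Minimal sketch (placeholder logic, not the linked experiment) of an agent
    # that runs continuously rather than per-request.
    import time

    def observe() -> str:
        return "nothing new"                  # stand-in for reading sensors, inboxes, APIs...

    def decide(observation: str) -> str:
        return f"log: saw '{observation}'"    # stand-in for an LLM choosing an action

    def act(action: str) -> None:
        print(action)                         # stand-in for actually doing something

    def agent_loop(poll_seconds: float = 5.0) -> None:
        while True:                           # the "continuous" part: nobody has to prompt it
            act(decide(observe()))
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        agent_loop()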
Yes, because they aren't. Against your fantasy that some might be brought into existence sometime in the future, I present my own fantasy that there won't be.
I linked you an experiment with multiple autonomous agents operating continuously. It's already happened. It's really not clear what you're disagreeing with here.
No, that was a simulation, akin to Conway's cellular automata. You seem to consider being fully under someone else's control to qualify as autonomy, at least in certain cases, which to me comes across as very bizarre.
You seem to be talking about some kind of free will and perfect independence, not autonomy as normally understood. Agents can have autonomy within the environment they have access to. We talk about autonomous vehicles, for example, where we still want them to stay within some action boundaries. Otherwise we'd be discussing metaphysics. It's not like we can cross physical/body boundaries just because we've got autonomy.
> An autonomous robot is a robot that acts without recourse to human control. Historic examples include space probes. Modern examples include self-driving vacuums and cars.
The same idea is used for agents - they're autonomous because they independently choose actions with a specific or vague goal.
I don't see the relevance of things that carry their own power supply either, and I still disagree that Conway automata and similar software exhibit autonomy.
I did not mention "free will and perfect independence".
I could go into more detail, but basically you tried to call out some weird use of "autonomous" when I'm using the meaning that's an industry standard. If you mean something else, you'll need to define it. Saying you can't be autonomous under someone else's rules raises a serious number of issues to address before you even get to anything AI-related.
Well, I disagree that computers exhibit intelligence, yet according to the "industry standard" they do; so in my view that standard doesn't carry any weight on its own.
Autonomy implies self-governance, not just being any form of automaton.