But now you have appealed to anthropomorphism ("intelligence") to pose a problem, yet forbidden anthropomorphism in an attempted counterargument. That doesn't seem quite fair.
I don't intend to forbid anything - I just think the language of motivation and desire makes it harder to see the risks, because it introduces irrelevant questions into the conversation like "how can machines want something?"
Conversely, at least in this discussion, the term "intelligence" seems pretty neutral.
> I just think the language of motivation and desire makes it harder to see the risks, because it introduces irrelevant questions into the conversation like "how can machines want something?"
Yet discourse on existential AI risks is predicated on something like a "goal" (e.g. to maximise paperclips). Notions like "goal" also make it harder to see clearly what we are actually discussing.
> the term "intelligence" seems pretty neutral
Hmm, I'm not convinced. It seems like an extremely loaded term to me.
AIs absolutely do have goals, determined by their reward functions.
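To make that concrete in the narrow, non-anthropomorphic sense: in reinforcement learning, the "goal" is simply whatever the reward function scores highly - no "wanting" required. A minimal, hypothetical sketch (the tiny one-dimensional gridworld and all names are made up for illustration):

    import random

    # Toy illustration: the agent's "goal" is entirely determined by the
    # reward function below. Change the reward and the learned behaviour
    # changes with it. (Hypothetical 1-D gridworld, positions 0..4.)
    N_STATES = 5
    ACTIONS = [-1, +1]  # step left or step right

    def reward(state):
        # This function *is* the goal: reach position 4.
        return 1.0 if state == N_STATES - 1 else 0.0

    def step(state, action):
        # Clamp movement to the grid.
        return min(max(state + action, 0), N_STATES - 1)

    # Plain tabular Q-learning.
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    for _ in range(2000):
        s = random.randrange(N_STATES)
        for _ in range(20):
            a = random.choice(ACTIONS) if random.random() < epsilon \
                else max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = step(s, a)
            r = reward(s2)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in ACTIONS) - q[(s, a)])
            s = s2

    # After training, the greedy policy heads right, toward the rewarded state.
    print([max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(N_STATES)])

Nothing about this requires attributing desires to the system; "goal" here is just shorthand for what the reward function selects for.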
Yes, "intelligence" is a deeply loaded term. It just doesn't matter in the context of the discussion here: so far as I've seen, its ambiguities haven't been relevant.
> AIs absolutely do have goals, determined by their reward functions.
You're conflating "AIs" (existing ML models) with "AGIs" (theoretical things that can do anything and are apparently going to take over the world). Not only is there no proof that AGIs can exist, there's no proof they can be built with fixed reward functions. Being built around a fixed reward function would seem to make them less than "general".