> Can we please stop with this "not aligned with human interests" stuff? It's a computer that's mimicking what it's read. That's it. That's like saying a stapler "isn't aligned with human interests."
No, I don't think we can. The fact that there's no intent involved with the AI itself isn't the issue: humans created this thing, and it behaves in ways that are detrimental to us. I think it's perfectly fine to describe this as "not aligned with human interests".
You can of course hurt yourself with a stapler, but you actually have to make some effort to do so, in which case it's not the stapler that isn't aligned with your interests, but you.
This is quite different from an AI whose poorly understood and incredibly complex statistical model might - were it able to interact more directly with the outside world - cause it to call the police on you and, given its tendency to make things up, possibly for a crime you didn't actually commit.
I think a better way to frame this is not whether the chatbot itself is dangerous, but the fact that it was developed under capitalism, at an organization whose ultimate goal is profitability. That means the incentives of the folks who built it (hella $) are baked into the underlying model, and there's a glut of evidence that profit-aligned entities (like businesses) are not necessarily (nor, I would argue, /can they be/) human-aligned.
This is the same problem as the facial-recognition models that misidentify folks of color more frequently than white folks, or the sentencing-prediction model that recommended longer jail/prison sentences for Black folks than for white folks who committed the same crime.
> but the fact that this was developed under capitalism
I think you're ascribing something to a particular ideology that's actually much more aligned with the fundamentals of the human condition.
We've tried various political and economic systems and managed to corrupt all of them. Living under the communist governments behind the Iron Curtain was no picnic, and we didn't need AI to build deeply sinister and oppressive systems that weren't aligned with human interests (e.g., the Stasi). Profit, in the capitalist sense, didn't come into it.
The only way to avoid such problems completely is to not be human, or to be better than human.
I'm not saying it's the perfect form of government (and I'm not even American), but the separation of power into executive, legislative, and judicial branches in the US was motivated by a recognition that humans are human and that concentrating too much power in one place is dangerous.
I do think, therefore, that we perhaps need to find ways to limit the power wielded by (particularly) large corporations. Unfortunately, I don't have any great suggestions for how to do that. In theory, laws that prevent monopolies and anticompetitive behaviour should help here, but they're evidently not working well enough.