ChatGPT is general AI. It performs actions in a class requiring abstract thought that previously only humans were capable of. Sure, the applications we see it is capable of are limited now, but that's a consequence only of its operating environment. Using traditional AI techniques like tree search and recursive use of subproblems, which ChatGPT itself could design, it is not obvious to me that any problem is outside its capability to solve.
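The "recursive use of subproblems" idea can be sketched as a tiny controller loop: split a task, solve the pieces, combine the results. This is only an illustration; `fake_llm` is a hypothetical stand-in for a real model call.

```python
def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real controller would call a language model here.
    return prompt.upper()

def solve(task: str, depth: int = 0) -> str:
    # Split on ';' to simulate decomposing a task into subproblems.
    parts = [p.strip() for p in task.split(";") if p.strip()]
    if len(parts) <= 1 or depth >= 3:
        return fake_llm(task)            # base case: answer directly
    sub_answers = [solve(p, depth + 1) for p in parts]
    return " | ".join(sub_answers)       # combine subproblem answers

print(solve("plan trip; book flights; pack bags"))
```

The point is that the outer loop is plain old programming; the model only has to handle the leaves.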
As to why that is dangerous, there are many reasons.
1. It devalues human life for those in power. Technology has strictly increased wealth inequality over the last century and this takes it to the nth degree.
2. Even in its current form, it is having society-destabilizing effects: go on reddit and see posts from high school students asking what they should even study, when it's clear ChatGPT will be able to do research, programming, and math better than any degree will prepare them to.
3. Google the paperclip problem.
4. The amount of computing resources it takes to run ChatGPT is shockingly, absurdly low. We are far, far from the hardware scaling limits of AI, so it is obvious that it will continue to improve, even without further algorithmic breakthroughs.
> ChatGPT is general AI. It performs actions in a class requiring abstract thought that previously only humans were capable of.
Fundamentally disagree with you there. This is a natural language model, it is certainly not an AGI. That's why it gets things wrong so often. When humans converse with each other, there is a pattern to it, and this AI is simply very good at mimicking that pattern.
To our ape brains who have only ever known how to judge sentience by how well something communicates, it presents as very life-like. And there are phenomena happening in that network that might even be considered "thought". But it's not an AGI, just a building block toward one.
Yes, it is obviously not an AGI in the sense of an intelligent, persistent agent, but it is also obviously a huge step toward one. It's like a single pass of thought on a topic; combined with self-iteration and recursion in answer generation, it would not surprise me if answers became an order of magnitude better. And we haven't even hit hardware limits.
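The "self iteration" point amounts to a generate/critique/refine loop around a single-pass model. A toy sketch, with all three model calls stubbed out as hypothetical functions:

```python
def draft(question: str) -> str:
    # Hypothetical stand-in for a first-pass model answer.
    return "draft: " + question

def critique(answer: str) -> bool:
    # Hypothetical reviewer: here we pretend an answer passes
    # once it has been refined twice.
    return answer.count("refined") >= 2

def refine(answer: str) -> str:
    # Hypothetical refinement step.
    return "refined " + answer

def answer_with_iteration(question: str, max_rounds: int = 5) -> str:
    answer = draft(question)
    for _ in range(max_rounds):
        if critique(answer):
            break                # stop once the critique passes
        answer = refine(answer)
    return answer

print(answer_with_iteration("why is the sky blue?"))
```

Each step is just another call to the same single-pass model, which is why no new algorithmic breakthrough is needed to try it.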
I believe an order-of-magnitude stronger ChatGPT is an unacceptable risk to us all: it will let those who own and control it wield power that our government shows no sign of being able to regulate. We don't allow private research and ownership of nukes...