GPT-4 is scary; it portends a future that many of us thought, a year or two ago, would never happen. We all think about the potential risks, about the kind of power this kind of mechanical intelligence can grant evil individuals, and about the everyday people who will be "left behind".
But even so, I think we can't afford NOT to have this kind of development if we want to survive as a species, if we want our lives to improve and our loved ones to live longer, healthier lives. The amount of good that can come from this technology is immeasurable. The only thing we as humanity need to do is rein in our ego, just as chess and Go players had to when their crafts were supposedly rendered irrelevant in the face of an insurmountable intelligence.
And I think the risks from AI are overblown. Is AI really more dangerous than the invention of gunpowder, electricity, and, of course, the nuclear bomb? I don't think so.
What's scary is that we live in the transition period, and it could be messy. What comes after the "transition period" from a world without AI to a world with AI? I'm not going to try to predict that, but I think there are as many positive outcomes as negative ones, if not many more.
> And I think the risks from AI are overblown. Is AI really more dangerous than the invention of gunpowder, electricity, and, of course, the nuclear bomb? I don't think so.
It might just be. Those inventions did destabilize society quite significantly. As for the nuclear bomb, I'm not sure we've seen how that one plays out yet; mutually assured destruction is kinda stalling it. Maybe AI can be more selective as a weapon and easier to employ? In the next war we could be living out the next AI sci-fi movie.
Unintended consequences are hard to predict in advance. Ten years back, who would have thought that the first professions at risk from software automation would be the software professionals themselves? People were predicting the end of menial blue-collar jobs and their replacement with robots and automation; the "jobs of the future" were white collar, at least that was the narrative a decade or so ago. Now more than one commenter on this forum is thinking they need to switch into blue-collar jobs or become teachers, etc. (and, given human nature and the nature of power, I believe they are correct).
Sadly, my personal opinion as I've gotten older is that technologists (I was one) are often the most idealistic and naive of them all. The tradespeople I know laugh at ChatGPT when I talk to them; "serves them right" is the general reaction. It's that idealism that often leads technologists to deny what an average human with power and wealth will do with AI.
The comparisons with chess are frivolous. Chess players don't depend on fighting Stockfish/AlphaZero for their livelihood. Chess players had to do nothing when Stockfish/AlphaZero got better than them (except maybe incorporating them into practice). People still only want to watch humans playing chess; there is no market for AI vs. AI matches.
Chess players cannot be replaced by AI, regardless of how well the AI plays, so the comparison is meaningless.
It's not meaningless when the primary argument against AI is that people derive "purpose" from their work. If AI gets us to a zero-scarcity society, the entire idea of working for a "livelihood" is "frivolous," as you like to put it.
The comparison to chess is still meaningless, even if the only argument were that people derive "purpose" from their work. Chess players' work was never to compete against an AI, so from their point of view not much changed: no tournament allows Stockfish/AlphaZero to compete, and the AI is not allowed to do their work in any capacity.
And this is all without even mentioning that the number of chess professionals is so small, compared to the fields these tools can touch, that whatever happens in the chess world is irrelevant. Saying that chess was fine even though AI can beat its players is true, yet it tells us nothing about tools that might impact all of humanity.