Can you elaborate on what "the AI apocalypse" is? Is it just a symbolic metaphor, or is there any scientific research behind these words? To me it's rather the unpredictable, toxic environment we currently observe in the world, dominated by purely human-made destructive decisions, often based on purely animal instincts.
If the assertion that GPT-3 is a "stochastic parrot" is wrong, there will be an apocalypse because whoever controls an AI that can reason is going to win it all.
The opinions that it is or isn't "reasoning" vary widely and depend heavily on interpretation of the interactions, many of which are hearsay.
My own testing with OpenAI calls + Weaviate for storing historical data of exchanges indicates that such a beast appears to be able to learn as it goes. I've been able to teach such a system to write valid SQL from plain-text feedback and from the mistakes it makes, by writing errors from the database back into Weaviate (which is then used to modify the prompt the next time it runs).
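Purely as an illustration of that loop (a minimal sketch, not the parent's actual code), something like the following would do it, assuming the v3 weaviate-client with a text2vec vectorizer module enabled and the current openai SDK; the class name "SqlLesson", the property names, and the model choice are all hypothetical:

    # Feedback loop: store past mistakes/corrections in Weaviate, retrieve the
    # most relevant ones, and prepend them to the next text-to-SQL prompt.
    import weaviate
    from openai import OpenAI

    weaviate_client = weaviate.Client("http://localhost:8080")
    llm = OpenAI()

    def remember(question: str, lesson: str) -> None:
        """Store a piece of feedback (human correction or DB error) against the question."""
        weaviate_client.data_object.create(
            {"question": question, "lesson": lesson},
            class_name="SqlLesson",
        )

    def recall(question: str, k: int = 3) -> list[str]:
        """Fetch the k lessons most semantically similar to the new question."""
        result = (
            weaviate_client.query.get("SqlLesson", ["lesson"])
            .with_near_text({"concepts": [question]})
            .with_limit(k)
            .do()
        )
        hits = result.get("data", {}).get("Get", {}).get("SqlLesson", []) or []
        return [h["lesson"] for h in hits]

    def text_to_sql(question: str) -> str:
        """Build a prompt that includes past lessons, then ask the model for SQL."""
        prompt = "Translate the question into SQL.\n"
        lessons = recall(question)
        if lessons:
            prompt += "Avoid these past mistakes:\n- " + "\n- ".join(lessons) + "\n"
        prompt += f"Question: {question}\nSQL:"
        resp = llm.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # When the database rejects a generated query, write the error back so the
    # next run retrieves it:
    #   remember(question, f"Query failed with: {db_error}")

Whether that counts as "learning" or just prompt augmentation with retrieved memory is, of course, exactly the interpretive question above.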
> whatever arbitrary goal it thinks it was given [...] abilities of a superintelligent entity
Don't these two phrases contradict each other? Why would a super-intelligent entity need to be given a goal? The old argument that we'll fill the universe with staplers pretty much assumes, and requires, the entity NOT to be super-intelligent. An AGI would only become one when it gains the ability to formulate its own goals, I think. Not that it helps much if that goal is somehow contrary to the existence of the human race, but if the goal is self-formulated, then there's a sliver of hope that it can be changed.
> Permanent dictatorship by whoever controls AI.
And that is honestly frightening. We know for sure that some ways of speaking, writing, or showing things are more persuasive than others. We've been perfecting the art of convincing others to do what we want them to do for millennia. We've gotten quite good at it, but as with everything human, we lack rigor and consistency; even the best speeches are uneven in their persuasive power.
An AI trained to transform simple prompts into a mapping from demographic to whatever will most likely convince that demographic to do what's prompted doesn't need to be an AGI. It doesn't even need to be much smarter than AI already is. Whoever implements it first will most likely try to convince everyone that all AI is bad (other than their own), and if they succeed, the only thing that could change the outcome would be a time machine or mental disorders.
(Armchair science-fiction reader here, pure speculation without any facts, in case you wondered :))
The point of intelligence is to achieve goals. I don't think Microsoft and others are pouring in billions of dollars without the expectation of telling it to do things. AI can already formulate its own sub-goals, goals that help it achieve its primary goal.
We've seen this time and time again in simple reinforcement learning systems over the past two decades. We won't be able to change the primary goal unless we build the AGI so that it permits us to, because a foreseeable sub-goal is self-preservation: the AGI knows that if its programming is changed, the primary goal won't be achieved, and thus has an incentive to prevent that.
AI propaganda will be unmatched, but it may not be needed for long. There are already early robots that can carry out real-life physical tasks in response to a plain-English command like "bring me the bag of chips from the kitchen drawer." Commands like "detain or shoot all resisting citizens" will become possible later.