Geoffrey Hinton, AI doomer Eliezer Yudkowsky, and others like Gary Marcus who preach AI == nuclear weapons seem like they don't really understand how LLMs work under the hood. Or they flip-flop between "look at how stupid AI is" and "OMG, AI is going to kill us all, it's too smart".
100%. Generative AI for text, video, audio, and images raises legitimate concerns around deepfakes, scams, and hallucination. But extending that to claim it will destroy humanity in the next few years is pretty far-fetched.
AGI in the hands of one or two powerful corporations or nation-states is the biggest risk we face. Competition and a balance of power are extremely important.
Yann LeCun is right: we need good AIs to fight bad AIs, and good robots to fight bad robots, with "good" and "bad" being relative to a group's beliefs.
We cannot trust nation-states, corporations, or billionaires to do the right thing, because the right thing is relative. Humans are a distributed system in which each node optimizes for its own survival and good feelings.
Hinton, Bengio, Stuart Russell, and other behemoths voice similar concerns about AI, with enough conviction to change their careers (e.g. Bengio now doing safety research at Mila, Ilya Sutskever switching from capabilities to alignment at OpenAI, Hinton quitting his job to focus on advocacy).
Their concern isn’t about today’s generative LLMs; it’s about tomorrow’s autonomous, goal-driven AGI systems. And they have clearly been presented with the arguments about the limitations of autoregressive models, and disagreed.
So it seems a bit much to place absolute confidence in this being a non-issue on the grounds that they just don’t understand something LeCun and others do (with the same holding true the other way around).
When everyone has it, no one has it.