
How do we prepare for super human intelligence? Do you think that the AI will also develop its own motives? Or will it just be a tool that we're able to plug into and use for ourselves?


In machine learning, there’s a long-term trend toward automating work that used to be done manually. For instance, ML engineers used to spend a lot of time engineering “features” which captured salient aspects of the input data. Nowadays, we generally use Deep Learning to learn effective features. That pushed the problem to designing DNN architectures, which subsequently led to the rise of AutoML and NAS (Neural Architecture Search) methods to save us the trouble. And so on.
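A toy sketch of that shift (my own illustration in PyTorch; the names and shapes are made up): the "manual" function is the kind of feature engineering a human used to write by hand, while the small network learns its features end to end, and the layer widths/depths it uses are exactly the kind of choice NAS automates.

    import torch
    import torch.nn as nn

    # Old style: a human hand-picks which statistics of the raw input matter.
    def manual_features(x):  # x: (batch, raw_dim)
        return torch.stack(
            [x.mean(dim=1), x.std(dim=1), x.max(dim=1).values], dim=1)

    # Deep-learning style: the early layers *learn* the features end to end;
    # the architecture choices below are what NAS methods search over.
    net = nn.Sequential(
        nn.Linear(64, 128), nn.ReLU(),  # learned feature extractor
        nn.Linear(128, 10),             # task head
    )

    x = torch.randn(32, 64)
    print(manual_features(x).shape)  # torch.Size([32, 3])
    print(net(x).shape)              # torch.Size([32, 10])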

We still have to provide ML agents with some kind of objective or reward signal to drive the learning process, but again, it would save human effort and make learning more dynamic and adaptable if we could get machines to learn useful goals and objectives on their own.
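To make the current division of labor concrete, here is a minimal sketch (my own toy example: a 2-armed bandit with an epsilon-greedy update). The reward function is still hand-written by a human; the open question above is whether the machine could learn a useful objective itself.

    import random

    def reward(action):  # hand-specified by a human -- this is the part
        return 1.0 if action == 1 else 0.0  # we don't yet know how to automate

    q = [0.0, 0.0]  # value estimate per action
    for step in range(1000):
        # epsilon-greedy: mostly exploit the best estimate, sometimes explore
        a = random.randrange(2) if random.random() < 0.1 else q.index(max(q))
        q[a] += 0.1 * (reward(a) - q[a])  # nudge the estimate toward the reward

    print(q)  # arm 1's estimate approaches 1.0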


And that’s when Asimov’s Laws of Robotics come into play.


The danger is really us, the ones who might task the AI with doing something bad. Even if the AI has no ill intentions, it might simply do what it's asked.


I think AI will largely remain an input-output tool. We still need to prepare ourselves for the scenario where, for most input-output tasks, AI is preferable to humans. Science is an interesting field to focus on. Most fields now produce so much literature that it is impossible for anyone to keep up with it all. AI will be able to parse the literature and generate hypotheses at a far greater scale than any human or team of humans.


I don’t know why you think that. As soon as it is viable, some unscrupulous actor will surely program an AI with the goal of “make money and give it to me”, and if that AI is able to self-modify, that’s all it takes for the experiment to end badly, because decent AI alignment is basically intractable.


We prepare for it by domesticating its lesser forms in practice and searching for ways to increase our own intelligence.

Still, it's pretty likely to end up being just a very good intelligent tool, not unlike http://karpathy.github.io/2021/03/27/forward-pass/


A lot of people at MIRI, OpenAI, Redwood Research, Anthropic etc. are thinking about this.

I think one possibility is that even a sufficiently strong narrow AI is going to develop strong motivations, because having them will let it perform its narrow task even better. Hence the classic paperclip maximizer idea.
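A deliberately silly caricature of that idea (my own illustration, not a real system): an agent scored on a single metric, with nothing in its objective telling it to hold anything back, converts every resource it can reach.

    resources = {"steel": 100, "power": 50, "everything_else": 30}

    def make_paperclips(resources):
        clips = 0
        for name in list(resources):
            clips += resources.pop(name)  # every resource becomes paperclips
        return clips

    print(make_paperclips(resources))  # 180
    print(resources)                   # {} -- nothing was held back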



