One of DeepMind's goals is AGI, so it is tempting to evaluate their publications for progress towards AGI. The problem is, how do you evaluate progress towards AGI?

https://deepmind.com/about

"Our long term aim is to solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence (AGI)."



AGI is a real goal, but the proposed pace is marketing fluff -- on the ground they're just doing good work and moving the baselines incrementally. If a new technique for, say, document translation is 20% cheaper/easier to build and 15% more effective, that is a breakthrough. It is not a glamorous, world-redefining breakthrough, but progress is more often than not incremental -- I'd say more of it comes from increments than from the big eureka moments.

Dipping into my own speculation, to your point about how to measure: between our (humanity's) superiority complex and the way we keep moving the baselines, I don't know if people will acknowledge AGI unless and until it's far superior to us. If even an average-adult-level intelligence is produced, I can see a bunch of people just treating it poorly and telling the researchers it's not good enough.

Edit: And maybe I should amend my original statement to say I've never heard a researcher promise AGI. That said, the statement from DeepMind doesn't really promise anything other than that they're working towards it.


Shane Legg is a cofounder of DeepMind and an AI researcher. He was pretty casual about predicting human-level AGI in 2028.

https://www.vetta.org/2011/12/goodbye-2011-hello-2012/

He doesn't say so publicly anymore, but I think that's due to people's negative reactions. I don't think he's changed his opinion about AGI.



