Reasons to Worry about AGI (florentcrivello.com)
5 points by adayeo on Oct 10, 2022 | 4 comments


> We're ahead of even the most optimistic timelines — what's happening with transformers is taking everyone by surprise.
> Humans are famously bad at dealing with exponentials (we saw how this played out with covid), and there is no fire alarm for AGI.
> The folks closest to the problem are the least able to worry.
The loop between research work and pop culture imagining its own version of what that research implies is so tight now. One consequence seems to be that there is more stuff people have a lay understanding of, and that understanding is more detached from reality than ever before.

Maybe 20 years ago, someone would rarely say "hey, I know about this," but when they did they'd have some actual insight and understanding of the topic. Now everybody "knows" about popular tech topics like transformers or, lol, "exponential" stuff, but that knowledge is just silly LinkedIn regurgitation that has nothing original to contribute and is completely disconnected from the reality of the topic. I see this almost daily with AI on HN, and I assume it's basically universal.


> The folks closest to the problem are the least able to worry. I was chatting with a researcher from OpenAI the other day who told me he intellectually understood AGI risks, but couldn't feel the fear emotionally, because he was so close to the problem and saw how hard it was to get these models to do anything.

The first sentence is a disingenuous reading of the second. That researcher is perfectly capable of being concerned. However, they're the expert, they're observing the "state of the art" first hand, and they're not concerned about it. You're so caught up in your own imagination, assuming that AGI must be taking off, that you think you get to criticize the experts' calm rationale.


> We're ahead of even the most optimistic timelines — what's happening with transformers is taking everyone by surprise.

Could you elaborate? I'm aware that transformers are useful, but DALL-E took millions of investment dollars to train and is nonetheless still capable of giving laughably bad results.

Anyone who has read Tegmark or Bostrom is already expecting a fast take-off. But I don't think we've demonstrated an AI that is capable of designing other AIs.


> Humans are famously bad at dealing with exponentials (we saw how this played out with covid), and there is no fire alarm for AGI.

You’re conflating the general public with individual STEM researchers. Of course the general public is bad at math.

This seems like woefully lazy reasoning.



