
> lack of acknowledgment that maybe we don't have a full grasp of the implications of AI

And why single out AI anyway? Because it's sexy, maybe? If I had to place bets on the collapse of humanity, it would look more like the British series "Survivors" (1975–1977) than "Terminator".

Maybe the effort to make machines that are as smart and as capable as possible (the professed goal of the leaders of most of the AI labs) should be singled out because it actually is very dangerous.

For what it's worth (not much), they were obsessed with AI safety way before it was "sexy".

Yes, people worried about AI started the Berkeley rationality movement (or whatever you want to call it) in the hope that it would help people become rational enough to understand the argument that AI is a very potent danger.

Some people who fit the description above are Eliezer Yudkowsky and Anna Salamon.

In Nov 2006 (a few months before Hacker News launched), Eliezer started writing the sequence of blog posts that formed the nucleus of the movement.

Anna started working full time on AI safety in Mar 2008 and a few years later became the executive director of a non-profit whose mission was to help people become more rational. (Its main way of doing so has been in-person workshops, IIUC.)


Thanks. I didn't know their history. (They only came onto my radar after a string of murders in the news earlier this year.)


