Twenty years or so ago, Eliezer Yudkowsky, a former proto-accelerationist, realized that superintelligence was probably coming, was deeply unsafe, and that we should do something about that. Because he had a very hard time convincing people of this (to him) obvious fact, he first wrote a very good blog about human reason, philosophy and AI, in order to fix whatever was going wrong in people's heads that caused them to not understand that superintelligence was coming and so on. The group of people who read, commented on and contributed to this blog are called the rationalists.

(You're hearing about them now because these days it looks a lot more plausible than in 2007 that Eliezer was right about superintelligence, so the group of people who've beaten the drum about this for over a decade now form the natural nexus around which the current iteration of project "we should do something about unsafe superintelligence" is congealing.)






> that superintelligence was probably coming, was deeply unsafe

Well, he was right about that. Pretty much all the details were wrong, but you can't expect that much, so it's fine.

The problem is that it's philosophically confused. Many things are "deeply unsafe", the main example being driving or being anywhere near someone driving a car. And yet it turns out to matter a lot less, and matter in different ways, than you'd expect if you just thought about it.

Also see those signs everywhere in California telling you that everything gives you cancer. It's true, but they should be reminding you to wear sunscreen.


I don't know - the level of seriousness they discuss w.r.t. alignment issues just seems so out of touch with the realities of large language models, and the notion of a superintelligence being "closer than ever" gives way too much credit to the capabilities (or lack thereof) of LLMs.

A lot of it seems rooted more in Asimov-inspired, stimulant-fueled philosophizing than in any kind of empirical or grounded observation.


The ability of humans to get used to progress is amazing to me. We have a computer system to which you can describe, in conversational English, the requirements for a program, including style, conventions and programming language, and it will synthesize that program from thin air, even if it's a novel idea with novel requirements. It will autonomously test it, make improvements and take your feedback into account. It will do this using a vast (if imperfect) knowledge of programming frameworks, far outstripping in breadth any individual human programmer.

And knowing this, you think that the only reason we could have to expect to create intelligence in a machine, even surpassing a human... is "Asimov-inspired, stimulant-fueled philosophizing"? That seems deeply unserious to me.


I find LLMs just as awe-inspiring as you do. But you and I will both find them underwhelming eventually. This is the history of the field going all the way back to Turing.

At any rate, my point remains. The flaws inherent to the current deep learning regime _absolutely_ disqualify them from being capable of any sort of rapid takeoff/escalation (a la paperclip optimization) that the rationalist community is likely referring to when they say superintelligence or ASI.

Sorry about the "Asimovian" comment - you'd be correct to call it an exaggeration and somewhat toxic.


I mean, I find them underwhelming today. I just also haven't been given any convincing evidence that the technology is anywhere close to tapped out yet, and lots of evidence that it isn't.


