
They've disbanded their superalignment group, and the staff best positioned to work on the problem deeply (the STEM people) have since quit, see

https://scottaaronson.blog/?p=8047

(N.b. the ex-employee author's PDF article mentioned above includes a page dedicating it to Ilya, who is also out of the company, so in context that dedication reads as a pointed statement IMO)



This appears to project the last few years' growth continuously into the future, whereas a great many experts seem to be suggesting that we've hit a plateau.

I tend to believe the plateau theory, given how AI development has gone over the last several decades: huge leaps forward followed by winters.


It also talks about "straight lines on a graph"... and then promptly illustrates the concept using a graph with a logarithmic scale.

Combined with the author's evident belief that facilely restating the concept of a hard-takeoff singularity (as "AI can replace a machine learning engineer by...") suffices to change the nature of the basic claim he's making, I didn't see any pressing need to read further. Singularitarian sophistry is hardly novel in 2024, and at this late date it retains no meaningful capacity even to entertain.


Your collection of experts seems to be different from mine, then (so I'm curious who you read instead, please let me know):

Scott Aaronson (the blogger linked above) is a professor and is definitely more sanguine about the article.

Dave Patterson (the Turing Award-winning professor of computer architecture) was interviewed last week and said something like "We don't know what will happen!": https://www.youtube.com/watch?v=YxVQsLA2ats&t=2045s

One of my own professors (a CS theoretician) said, at an AI seminar last year, that there seem to be no known barriers left to AGI (paraphrasing).

Actually, I'm personally on the fence, so while the PDF article under discussion is not rigorous enough, it makes some interesting high-level arguments. One of them is that recent growth only needs to reach some threshold; that's not the continuous-extrapolation argument you mentioned.


> disbanded their superalignment group

They were, for all practical purposes, acquired. The alignment group served its role in underlining that OpenAI was building Really Serious Stuff. There's no marginal benefit to having them around at this point, given that their PR purpose has been fulfilled.



