
Emergent, as in, "we cannot explain how this works," yes. That is nothing new in the field of ML or to anyone who has been paying attention.



https://youtu.be/StLtMcsbQes

I think you have your head in the sand and haven’t been paying attention.

The scaling laws were not expected. The capabilities of GPT-3.5 are beyond what even those deeply involved had anticipated.

I also think the progress is likely exponential at this point. Multi-agent systems and recursive prompting are coming soon.

This is really not traditional ML at all. I have extensive traditional ML knowledge and background, and I know in detail the usual model suspects on a Kaggle leaderboard.

LLMs are totally new and surprising relative to my many decades working with ML and traditional NLP.


That's a good talk.

I'm paying attention. I think "scale is all you need" is wrong even when it's right. We have a responsibility not to let capabilities outstrip our ability to understand and control them. If we don't do our job, that will be the real "bitter lesson."

However, ultimately it's a text predictor driven by a PRNG, and I stand by my statement: I think the systems are obviously impressive, but the unrealistic expectations people have and the anthropomorphization and projection I'm seeing are even more impressive. Let me know when it starts synthesizing new science or math. By then we're in trouble.
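For what it's worth, "text predictor driven by a PRNG" is a fair description of sampling-based decoding: the model emits a probability distribution over next tokens, and a seeded random generator picks one. A toy sketch, with made-up bigram probabilities standing in for a real model:

```python
import random

# Toy "language model": next-token probabilities keyed by the previous token.
# These numbers are invented for illustration, not from any real model.
MODEL = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.3, "ran": 0.7},
    "sat": {"</s>": 1.0},
    "ran": {"</s>": 1.0},
}

def generate(seed, max_tokens=10):
    """Autoregressively sample tokens; the PRNG seed fully determines the output."""
    rng = random.Random(seed)
    tokens = ["<s>"]
    for _ in range(max_tokens):
        dist = MODEL[tokens[-1]]
        choices, weights = zip(*dist.items())
        # One weighted draw from the next-token distribution.
        nxt = rng.choices(choices, weights=weights)[0]
        if nxt == "</s>":
            break
        tokens.append(nxt)
    return tokens[1:]  # drop the start symbol
```

Same seed, same text; different seed, different text. That's the whole mechanism, whatever one reads into the output.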



