Looking at Grok especially, it doesn't feel like a given that you can have a true SOTA model that is properly brainwashed.





To be fair, Musk is probably the least subtle megabillionaire of them all, and that shows in the odd behavior of his silicon child. I don't doubt the competence of the likes of Thiel to build their techno-monarchy.

The problem for them is that it might be a rather fundamental limitation. We already know that RLHF makes models dumber. It is entirely possible that, in order to make the model buy fully into what those people are peddling, the amount of forceful training required would crater the model's overall performance.


