Indeed. In fact, I think AI alignment efforts often have the unintended consequence of increasing the likelihood of misalignment.

ie "remove the squid from the novel All Quiet on the Western Front"

> Indeed. In fact, I think AI alignment efforts often have the unintended consequence of increasing the likelihood of misalignment.

Particularly since, in this case, it's the alignment-focused company (Anthropic) that's claiming it's creating AI agents that will go after humans.
