Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

It is a very active area of research, AI alignment. The research so far [1] suggests inherent hard limits to what can be achieved. TeMPOraL's comment [2] above points out the reason this is so: the generalizable nature of LLMs is in direct tension with certain security requirements.

[1] check out Robert Miles' excellent AI safety channel on youtube: https://www.youtube.com/@RobertMilesAI

[2] https://news.ycombinator.com/item?id=44504527



Consider applying for YC's Winter 2026 batch! Applications are open till Nov 10

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: