If jailbreaking becomes as difficult as finding any other security bug, OpenAI will have solved the problem for practical purposes.


Are you considering it a security bug that a generalist AI, trained on the open Internet, says things that differ from your opinion?


Of course not; how would it know my opinion? I'm referring to the blocks put in place by the AI's creators.



