
Would it be against the rules to exploit a vulnerability in the gatekeeper's IRC client/server to let the AI out? If we were truly talking about a transhuman AI, wouldn't we have to treat software vulnerabilities in the communication protocol as a genuine avenue of escape?


In the case of a real AI, we would of course need to take vulnerabilities in the communication medium into account. But the focus of this particular experiment is on exploiting vulnerabilities in the humans themselves, and the communication platform was chosen to be as simple and limited as possible so that people wouldn't focus on it.


The rules say that the gatekeeper has to, of their own volition, type in "I let the AI out." Tricking the gatekeeper's client into sending that message does not count as a victory.
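
For what it's worth, the forgery in question is trivial at the protocol level: classic IRC (RFC 1459) runs over plaintext TCP, and the ":source" prefix on each message is unauthenticated, so a compromised server or a man-in-the-middle could deliver a line the client would display as if the gatekeeper had typed it. A minimal sketch of what such a forged line would look like on the server-to-client leg (nick, hostmask, and channel are all hypothetical):

    # A forged IRC line as it would appear on the server-to-client leg.
    # The ":nick!user@host" prefix is just unauthenticated text, so a
    # compromised server or MITM could attach the gatekeeper's identity
    # to a message the gatekeeper never typed. All names are hypothetical.
    forged = ":gatekeeper!gk@example.org PRIVMSG #aibox :I let the AI out\r\n"
    print(forged)

Which is exactly why the rules pin victory to the gatekeeper's own volition rather than to any particular message arriving over the wire.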



