
> and how could you possibly know if it’s really a Greek question mark or if they’re just trying to mess with your AI?

I mean, how could YOU possibly know if it's really a Greek question mark? Context. LLMs are a bit more clever than you're giving them credit for.

I think the bigger problem is that if the dataset were sufficiently poisoned, LLMs could start producing Greek question marks in their output. If you could tie that behavior to some rare trigger words, you could then use those words to cause generated code not to compile, despite it passing visual inspection.
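That said, a homoglyph like U+037E (the Greek question mark, which renders identically to an ASCII semicolon) is easy to catch mechanically. A minimal sketch of one possible defense using only Python's standard-library `unicodedata`; the scanning approach here is an illustration, not something proposed in the thread:

```python
import unicodedata

# U+037E GREEK QUESTION MARK looks exactly like ';' but is a different
# code point, so it would break code that appears fine on visual inspection.
poisoned = "x = 1\u037e y = 2"

# Report any non-ASCII characters hiding among ASCII-looking text.
for i, ch in enumerate(poisoned):
    if ord(ch) > 127:
        print(f"suspicious char at index {i}: U+{ord(ch):04X} {unicodedata.name(ch)}")

# U+037E canonically decomposes to U+003B, so plain NFC normalization
# already rewrites the Greek question mark into a real semicolon.
cleaned = unicodedata.normalize("NFC", poisoned)
print(cleaned == "x = 1; y = 2")  # True: the homoglyph is gone
```

Normalizing (or at least flagging) non-ASCII code points in generated code would defeat this particular trigger, though it obviously doesn't generalize to every poisoning strategy.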
