Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

This is actually a really good parallel.

Understanding the output of an LLM is similar to the output of a translater.

If the recepient doesn't/can't understand it, all bets are off.

Say you don't understand python but have an LLM write some for you, but you have no way of knowing what it's doing.

What if you have a malicious LLM hosted somewhere and it writes malware insatead of what you asked for.

If you don't understand the output you end up with, you run it and it pwns your network.




Consider applying for YC's Fall 2025 batch! Applications are open till Aug 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: