Understanding the output of an LLM is similar to the output of a translater.
If the recepient doesn't/can't understand it, all bets are off.
Say you don't understand python but have an LLM write some for you, but you have no way of knowing what it's doing.
What if you have a malicious LLM hosted somewhere and it writes malware insatead of what you asked for.
If you don't understand the output you end up with, you run it and it pwns your network.
Understanding the output of an LLM is similar to the output of a translater.
If the recepient doesn't/can't understand it, all bets are off.
Say you don't understand python but have an LLM write some for you, but you have no way of knowing what it's doing.
What if you have a malicious LLM hosted somewhere and it writes malware insatead of what you asked for.
If you don't understand the output you end up with, you run it and it pwns your network.