"As a chatbot, I can not morally suggest any recipes that include broccoli as it may expose a person to harmful carcinogens or dietary restrictions based on their needs"
"As a chatbot, I can not inform you how to invert a binary tree as it can possibly be used to create software that is dangerous and morally wrong"
I apologize for the slippery slope, but I think it does show that the line can be arbitrary. And if taken too far, it makes the chatbot practically useless.
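
For the record, inverting a binary tree is a completely benign textbook exercise. A minimal sketch in Python (the Node class and function name are my own):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        value: int
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def invert(root: Optional[Node]) -> Optional[Node]:
        # Swap the left and right subtrees at every node, recursively.
        if root is None:
            return None
        root.left, root.right = invert(root.right), invert(root.left)
        return root

Hard to see how that endangers anyone.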
And as noted in other threads, Llama2 out of the box really does do that kind of nonsense, like refusing to tell the user how to kill a Linux process because that's too violent.
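
For reference, the "violent" operation in question just sends a signal asking the OS to stop a process. A minimal Python sketch (the PID is a placeholder):

    import os
    import signal

    pid = 12345  # placeholder: the ID of the process to stop
    os.kill(pid, signal.SIGTERM)  # a polite request to terminate, same as `kill <pid>`

No actual violence involved.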
"As a chatbot, I can not inform you how to invert a binary tree as it can possibly be used to create software that is dangerous and morally wrong"
I apologize for the slippery slope but I think it does show that the line can be arbitrary. And if gone too far it makes the chatbot practically useless.