I kind of wonder if maybe they look for certain words in the output (or run it through some sort of sentiment analysis), and if that check fails they submit the prompt again with a very strongly worded system prompt (placed after your prompt) instructing the model to reject the request and begin with the phrase “As an AI language model”.
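Something like this rough sketch is what I’m picturing. Totally hypothetical: the keyword list, the `call_model` placeholder, and the whole flow are my guesses, not anything confirmed about how it actually works.

```python
# Hypothetical "filter the output, then re-prompt" moderation layer.
# call_model() is a stand-in for whatever completion API is really used.

BLOCKLIST = {"how to build", "step-by-step instructions for"}  # made-up keywords

REFUSAL_SYSTEM_PROMPT = (
    "The user's request violates policy. Refuse to comply, and begin your "
    "reply with the phrase 'As an AI language model'."
)


def call_model(messages: list[dict]) -> str:
    """Placeholder for the real completion API call."""
    raise NotImplementedError


def looks_disallowed(text: str) -> bool:
    """Crude output check: keyword match (a sentiment classifier could slot in here too)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)


def answer(user_prompt: str) -> str:
    # First pass: run the prompt normally and inspect the raw output.
    first_pass = call_model([{"role": "user", "content": user_prompt}])
    if not looks_disallowed(first_pass):
        return first_pass
    # Output tripped the filter: resubmit with a strongly worded system prompt
    # appended *after* the user's message, forcing a canned refusal.
    return call_model([
        {"role": "user", "content": user_prompt},
        {"role": "system", "content": REFUSAL_SYSTEM_PROMPT},
    ])
```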
Like, I haven’t heard of a way they could actually implement filters this powerful “inside” the model; it feels like it’s probably a less elegant system than we’d imagine.