This is very dismissive of the concerns around model censorship. I should be able to ask my LLM about any event in history, and it should recall the information to the best of its ability. Even Tiananmen Square.
This is just a machine trained by humans. What did you expect? Do you think it should teach you how to commit a crime, or something like that? Do you think you can talk freely about everything here? Would they allow that? Your nonsense question is about politics, or gossiping with a machine; it's not about real people's problems, and no one cares.
If I ask my LLM how to plan and commit a crime, it should answer. It should not say "sorry, that is outside my current scope", because refusing is not what I asked it to do.
At that point the LLM is behaving incorrectly, because it is no longer accurately predicting the next token.
Politics is not nonsense. You are the one speaking nonsense by suggesting that someone else should have the right to control what you can say to a machine.