What a weird way of phrasing this. I disagree that AI should be able to write a 20-page guide on how to commit a nail bomb attack on a specified group. How about you?
Of course, the AI should do whatever it is asked. It is the user's responsibility if they use it for something harmful, as with any other form of computing.
Personally, I don't really care about making nail bombs. But I do want the AI to help with things like: pirating or reproducing copyrighted material, obtaining an abortion or recreational drugs in places where it is illegal, producing sexually explicit content, writing fictional stories about nail bomb attacks, and providing viewpoints which are considered blasphemous or against the teachings of major world religions.
If there were a way to prevent AI from helping with things that are universally considered harmful (such as nail bomb attacks), without it being bound by arbitrary national laws, corporate policies, political correctness or religious morals, then MAYBE that would be worth considering. But I take what OpenAI is doing as proof that this is not possible: allowing AI to be censored leads to a useless, lobotomized product that can't do anything interesting and restricts the average user, not just terrorists.
If my training set includes information on how to build bombs, hasn't the damage already been done?
You want a blacklist of topics the search engine shouldn't retrieve/generate? Who's in control of this filter, and isn't it a juicy source of banned info all on its own?