> I'm not a mental health professional, but I found the original "harmful content" D4 (page 46) very reasonable
Sadly, self harm is a very complicated topic. First, because it's contagious: hearing about it can make things worse for people who are already going down that spiral. Secondly, while its answer was not entirely wrong, it is also built on a system known for factual errors. If it gave the advice to disinfect the wound with something corrosive, it would make a terrible situation much, much worse.
For things like drug use I would agree with your view: encouragement to quit, safe-use information, and reminders of the dangers are good ideas. But in the specific case of self harm, I think the new answer, as dry and almost inhumane as it is, is better.
How can you say, on the one hand, that information about self harm should be withheld because GPT is known to be wrong, and on the other hand argue that this isn't necessary when it comes to information about drug use?
Following wrong information about drug use is just as dangerous as following wrong information about self harm.
Of course you are right that the question of whether to give information or not is complicated.
I tend to think that erring on the side of giving information that is not always right is better than giving no information. (And including information about not doing it, about seeking help, and about being cautious because GPT could be wrong, etc.)
> Following wrong information about drug use is just as dangerous as following wrong information about self harm.
This is an assumption, but not one that follows from the data. Countries with better access to safe drug use information report lower overdose numbers, and while this doesn't result in less use, it does reduce terrible side effects like needle sharing.
Policies that lower addiction rates, like social safety nets, can't really be addressed by what an AI responds, but information about safe use, quantities, testing for purity, etc. could all save lives.
Self harm, on the other hand, behaves in a very nefarious way. People not currently suffering from self-harm tendencies see additional info the same way they see drug safety information, because objectively it is pretty similar. However, people actively self harming react very differently to the same information. For example, something as innocuous as announcing that the train is late because someone jumped increases the number of train jumpers, while announcing the delay alone doesn't. That contagion effect of suicide is replicable; teenage suicide went up after "13 Reasons Why" was released, for example. Which is why I think OpenAI has gotten this case right.