"As a chatbot, I cannot morally suggest any recipes that include broccoli, as it may expose a person to harmful carcinogens or conflict with dietary restrictions based on their needs"
"As a chatbot, I cannot inform you how to invert a binary tree, as it could be used to create software that is dangerous and morally wrong"
I apologize for the slippery slope, but I think it does show that the line can be arbitrary. And if taken too far, it makes the chatbot practically useless.
And as noted in other threads, Llama2 out of the box really does do that kind of nonsense, like refusing to tell the user how to kill a Linux process because that's too violent.
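For reference, the "violent" request in question amounts to nothing more than ordinary process management, something like this (a minimal sketch using standard POSIX tools):

```shell
# Start a long-running background process, then terminate it by PID.
# kill sends SIGTERM by default, politely asking the process to exit.
sleep 60 &
pid=$!
kill "$pid"
wait "$pid" 2>/dev/null
```

That's the entirety of the supposedly objectionable content.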
Would you ban people from saying "just eat healthy to beat cancer"? People have already died from that sort of thing, notably Steve Jobs. It's a free country, and you're allowed to be a dumbass about your personal medical decisions.
Also, ChatGPT has allowed people to get their rare conditions diagnosed, quite possibly saving lives. Is it an unmitigated good because it did that?
I'm willing to concede that perhaps I only know the smartest, most informed people on this planet, but I don't know a single person who is likely to do this. In fact, I've noticed a negative correlation between "uneducated Luddite" and "trusts what the computer says".
"Dr. Google" has been around for quite a while now, with much of the same criticism. Notably, the whole ivermectin debate took place without the help of AI. On the other hand, patient education is a big part of improving outcomes.
Anecdotally, "improve access to information" and "improve literacy" seem to appear far more frequently than calls to ban Google from displaying results about healthcare or restricting access to only licensed professionals - at least in content from healthcare professionals and organizations.
An important thing you can do to help is to identify these people in your life and tell them not to blindly trust what the computer tells them, because sometimes the computer is wrong. You'll be doing them an invaluable service, even if they think you're being a condescending jerk.
If you get a chatbot instead of a doctor to treat your illness and you die as a result, I don't think I would consider your death completely unjustified.
https://www.amazon.com/s?k=herbal+medicine Unlike homeopathy, some of these are probably actually effective to some degree, but many are bunk, if not outright dangerous. Recall that Steve Jobs delayed cancer surgery in favor of alternative treatments, including herbal remedies.
So yeah, I'm going to have to say this is a straw man.