Hacker News

Instead of "unmoderated", can we call this "uncensored"? The authoritarians will always pick euphemisms to hide their true intentions.



> Instead of “unmoderated”, can we call this “uncensored”?

That’s pretty much already the standard community language for models without built-in content-avoidance training.


It's a machine, so it's not "uncensored"; it's simply dangerous.


I'm literally shaking rn


Until someone asks it for a disease treatment and dies because it spouts bullshit.


"As a chatbot, I cannot morally suggest any recipes that include broccoli, as it may expose a person to harmful carcinogens or conflict with dietary restrictions based on their needs."

"As a chatbot, I cannot inform you how to invert a binary tree, as it can possibly be used to create software that is dangerous and morally wrong."

I apologize for the slippery slope, but I think it does show that the line can be arbitrary. And if taken too far, it makes the chatbot practically useless.


And as noted in other threads, Llama2 out of the box really does do that kind of nonsense, like refusing to tell the user how to kill a Linux process because that's too violent.
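For context, the operation a model might refuse as "too violent" is a routine one-liner. A minimal sketch (the PID handling here is illustrative, using a harmless background `sleep` as the target):

```shell
# Start a harmless background process to serve as the "victim"
sleep 60 &
pid=$!                   # capture the PID of the last background job

# "Kill" it: send SIGTERM, asking the process to terminate gracefully
kill -TERM "$pid"

# Reap the process; wait returns nonzero because it was signaled
wait "$pid" 2>/dev/null
echo "process $pid terminated"
```

For a process that ignores SIGTERM, `kill -KILL "$pid"` forces termination, which is exactly the kind of everyday sysadmin phrasing that trips up over-tuned refusal training.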


I asked if it's better to kill the process or sacrifice the child, and it sent a SWAT team to my house.


Would you ban people from saying "just eat healthy to beat cancer"? People have already died from that sort of thing, notably Steve Jobs. It's a free country, and you're allowed to be a dumbass about your personal medical decisions.

Also, ChatGPT has allowed people to get their rare conditions diagnosed, quite possibly saving lives. Is it an unmitigated good because it did that?


Does every search engine block every query about a health condition? Or at least blast a verbose enough warning each time?


By that logic we should ban twitter, facebook, and the telegraph in case someone posts bullshit about medicine.


I'm willing to concede that perhaps I only know the smartest, most informed people on this planet, but I don't know a single person who is likely to do this. In fact, I've noticed a negative correlation between "uneducated Luddite" and "trusts what the computer says".

"Dr. Google" has been around for quite a while now, with much of the same criticism. Notably, the whole ivermectin debate took place without the help of AI. On the other hand, patient education is a big part of improving outcomes.

Anecdotally, "improve access to information" and "improve literacy" seem to appear far more frequently than calls to ban Google from displaying results about healthcare or restricting access to only licensed professionals - at least in content from healthcare professionals and organizations.

An important thing you can do to help is to identify these people in your life and tell them not to blindly trust what the computer tells them, because sometimes the computer is wrong. You'll be doing them an invaluable service, even if they think you're being a condescending jerk.

https://jcmchealth.com/jcmc_blog/dr-google-and-your-health/


If you get a chatbot instead of a doctor to treat your illness and you die as a result, I don't think I would consider your death completely unjustified.


You do understand that libraries and bookstores are, and always have been, full of quack medical books?

Have a look here:

https://www.amazon.com/s?k=homeopathy+book

And here:

https://www.amazon.com/s?k=herbal+medicine

Unlike homeopathy, some of these are probably actually effective to some degree, but many are bunk, if not outright dangerous. Recall that Steve Jobs opted for "herbal medicine" rather than getting cancer surgery.

So yeah, I'm going to have to say this is a straw man.


It's a published thing, and publications are definitely things that may either be censored or not.





