
> Supposing that the advice it provides does more good than harm

That unsubstantiated supposition is doing a lot of heavy lifting, and it’s a dangerous and unproductive way to frame the argument.

I’ll make a purposefully exaggerated example. Say a school wants to add cyanide to every meal and defends the decision with “supposing it helps students concentrate and be quieter in the classroom, why not?”. See the problem? The supposition is wrong and the suggestion is dangerous, but by framing it as “supposing” with a made-up positive outcome, we make it sound non-threatening and reasonable.

Or for a more realistic example, “suppose drinking bleach could cure COVID-19”.

First understand whether the idea has the potential to do the thing; only then (with considerably more context) consider whether it’s worth implementing.





In my previous post up the thread, I said that we should measure whether it in fact does more good than harm. That’s the context of my comment; I’m not saying we should just take it for granted without looking.

> we should measure whether in fact it does more good than harm or not

The demonstrable harms include assisting suicide. There is no way to ethically continue the measurement, because continuing it in its current form will, with certainty, result in further deaths.


Thank you! On top of that, it’s hard to measure “potential suicides averted,” and comparing that with “actual suicides caused or assisted” would mean weighing incommensurable quantities.

And working to set a threshold for what we would consider acceptable? No thanks.


Real-life trolley problem!

If you pull the lever, some people on this track will die (by suicide). If you don't pull the lever, some people will still die by suicide. By not pulling the lever, and simply banning discussion of suicide entirely, your company gets to avoid a huge PR disaster, and you get more money because line go up. If you pull the lever and let people talk about suicide on your platform, you may prevent some suicides, but you can never discuss that with the press, your company gets bad PR, and everyone will believe you're a murderer. Plus, line go down and you make less money while other companies profit from selling AI therapy apps.

What do you choose to do?


...but if you pull the lever and let people talk about suicide on your platform, your platform will actively contribute to some unknowable number of suicides.

There is, at this time, no way to determine how the number of suicides it would contribute to compares with the number it would prevent.


You mean lab-test it in a clinical environment where the actual participants are not in danger of self-harm due to an LLM session? That is fine, but that is not what we are discussing, nor where we are at the moment.

Individuals and companies with mind-boggling levels of investment want to push this tech into every corner of our lives, and the public are the lab rats.

Unreasonable. Unacceptable.


The key difference between your example and the comment you are replying to is that the commenter is not "defending the decision" via a logical implication. Obviously, the implication can be voided by showing the assumption to be false.

I think you missed the thread here.


