Hacker News

The problem is that people could be impressed and use it for things where a 0.01% error rate could lead to people getting hurt or even killed.



I call that problem "Doctor, doctor! It hurts when I do this!"

If the risk exists when an AI processes this kind of data, it exists when a human processes it. The fail-safe processes in place for human output obviously need to be applied to the AI output too. Even so, using the AI speeds up the initial process enormously.


The problem is liability.

Who is liable if the AI makes errors?


I feel like the answer, at least for the current generation of models, is that the user is liable.

Who is held liable if I use broken code from Stack Overflow, or don't investigate the accuracy of some solution I find, or otherwise misuse information? Pretty sure it's me.


It depends on the richness of the AI


Yeah, but that's probably not your own calendar reminders...




