
> 99.9% of it will be correct, but sometimes 1 or 2 records are off. This kind of error is especially hard to notice because you're so impressed that Claude managed to do the extraction task at all -- plus the results look wholly plausible upon eyeballing -- that you wouldn't expect anything to be wrong.

That's far better than I would do on my own.

I doubt I'd even be 99% accurate.

If it's really 99.9% accurate for something like this, I'd gladly take it.



The problem is that people could be impressed and use it for things where a 0.1% error rate could lead to people getting hurt or even killed.


I call that problem "Doctor, doctor! It hurts when I do this!"

If the risk exists with AI processing this kind of data, it exists with a human processing the data too. The fail-safe processes already in place for human output need to be applied to the AI output as well, obviously; using the AI still speeds up the initial pass enormously.
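
A minimal sketch of what such a fail-safe could look like, assuming the AI's output landed in a CSV and you have some trusted way to re-derive individual values. The file layout, column names, and the lookup_source callback here are hypothetical, purely for illustration:

    import csv
    import random

    def spot_check(extracted_path, lookup_source, sample_rate=0.05, seed=0):
        """Re-verify a random fraction of AI-extracted records.

        lookup_source is a callable returning the trusted value for a
        record id -- e.g. a human re-reading the original document, or
        a second independent extraction pass.
        """
        with open(extracted_path, newline="") as f:
            rows = list(csv.DictReader(f))
        if not rows:
            return []

        # Sample at least one row, never more than the file contains.
        rng = random.Random(seed)
        k = min(len(rows), max(1, int(len(rows) * sample_rate)))
        sample = rng.sample(rows, k)

        # Collect (id, extracted, trusted) triples for every mismatch.
        mismatches = []
        for row in sample:
            trusted = lookup_source(row["id"])
            if row["value"] != trusted:
                mismatches.append((row["id"], row["value"], trusted))
        return mismatches

One caveat on sample size: at the 0.1% error rate quoted upthread, a 5% sample of a 10,000-row extract (~500 rows) contains only about 0.5 bad records in expectation, so a sample that small will miss the errors more often than not. The check has to be sized against the error rate you actually care about.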


The problem is liability.

Who is liable if the AI makes errors?


I feel like the answer, at least for the current generation of models, is that the user is liable.

Who is held liable if I use broken code from Stack Overflow, or don't investigate the accuracy of some solution I find, or otherwise misuse information? Pretty sure it's me.


It depends on the richness of the AI


Yeah, but that's probably not your own calendar reminders...



