> 99.9% of it will be correct, but sometimes 1 or 2 records are off. This kind of error is especially hard to notice because you're so impressed that Claude managed to do the extraction task at all -- plus the results look wholly plausible on eyeballing -- that you don't expect anything to be wrong.
That's far better than I would do on my own.
I doubt I'd even be 99% accurate.
If it's really 99.9% accurate for something like this - I'd gladly take it.
I call that problem "Doctor, doctor! It hurts when I do this!"
If the risk exists with AI processing this kind of data, it exists with a human processing the data too. The fail-safe processes in place for human output obviously need to be applied to the AI output as well; the AI still speeds up the initial pass enormously.
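A minimal sketch of what such a fail-safe could look like, assuming the extracted data is tabular and the source document states a row count and a grand total you can check against (the record layout and the `amount` field name here are illustrative assumptions, not anything from the thread):

```python
import random

def sanity_check(records, expected_count, expected_total,
                 sample_size=10, seed=0):
    """Cheap fail-safes for LLM-extracted tabular data.

    'records' is a list of dicts with a numeric 'amount' field;
    the field name and the known expected figures are assumptions
    for illustration.
    """
    problems = []

    # 1. Row-count check: a dropped or duplicated record shows up immediately.
    if len(records) != expected_count:
        problems.append(
            f"row count: got {len(records)}, expected {expected_count}")

    # 2. Aggregate check: a single transposed digit breaks the total
    #    even when every individual row looks wholly plausible.
    total = sum(r["amount"] for r in records)
    if total != expected_total:
        problems.append(f"total: got {total}, expected {expected_total}")

    # 3. Random sample for human review, instead of eyeballing everything
    #    (which is exactly where the "1 or 2 records off" error hides).
    rng = random.Random(seed)
    indices = rng.sample(range(len(records)),
                         min(sample_size, len(records)))
    sample = [records[i] for i in indices]

    return problems, sample
```

If either invariant fails you know to look closer; if both pass, you've at least ruled out the silent one-or-two-records-off failure described in the quote, at a tiny fraction of the cost of re-checking everything by hand.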
I feel like the answer, at least for the current generation of models, is that the user is liable.
Who is held liable if I use broken code from Stack Overflow, or don't investigate the accuracy of some solution I find, or otherwise misuse information? Pretty sure it's me.