I call that problem "Doctor, doctor! It hurts when I do this!"
If the risk exists with AI processing this kind of data, it exists with a human processing the same data. The fail-safe processes already in place for human output obviously need to be applied to the AI output as well; using the AI still speeds up the initial process enormously.
I feel like the answer, at least for the current generation of models, is that the user is liable.
Who is held liable if I use broken code from Stack Overflow, or don't investigate the accuracy of some solution I find, or otherwise misuse information? Pretty sure it's me.