> Why do you think an AI couldn't do better than a human, when we have ample evidence of computers/AI exceeding humans in many areas?
I was specifically referring to the ability to discern accurate content from nonsense. SOTA LLMs today produce nonsensical output themselves, partly because their training data comes from poor-quality sources. Cleaning up and validating training data for accuracy is an unsolved, perhaps unsolvable, problem. We can't expect AI to do this for us, since it requires judgment from expert humans. And for applications such as healthcare, accuracy is not something you can wave away with a disclaimer.
Many human 'experts' produce nonsensical data too. Verification of data by humans is also mostly based on 'prior' data. Over the years we've had many popular medical practices, developed and supported by medical experts, that turned out to be completely wrong.
The main thing missing right now, imo, is the ability for LLMs to verify data via experimentation, but this is completely solvable.