> In a recent study, published in Lancet Digital Health, NIH-funded researchers found that AI models could accurately predict self-reported race in several different types of radiographic images—a task not possible for human experts.
Ethnicity is what you mean, unless you are claiming the AI model didn't have the concept of "race" in its training data but was able to come up with a novel classification scheme that aligns with society's concept of race.
AI confirming human bias because it was trained on that bias doesn't mean much.
Actually, from the quoted study, you do mean race. "In a recent study, published in Lancet Digital Health, NIH-funded researchers found that AI models could accurately predict self-reported race in several different types of radiographic images—a task not possible for human experts. These findings suggest that race information could be unknowingly incorporated into image analysis models, which could potentially exacerbate racial disparities in the medical setting."
That's why they're trying to understand how the model is flawed: race isn't biologically real, so there isn't an obvious biological correlate for the system to pick up on. They are therefore looking for explanations like Google's AI that hid hints to itself in its own output using steganography (https://hackaday.com/2019/01/03/cheating-ai-caught-hiding-da...).
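For anyone unfamiliar with the steganography angle: the point is that a signal can survive in an image at levels no human reader would ever notice. Here's a toy sketch of the general idea in Python, using plain least-significant-bit steganography; this is not the mechanism from the linked story (which involved subtle high-frequency patterns), just an illustration of how little an image has to change to carry machine-recoverable information.

```python
# Minimal LSB steganography sketch: hide one bit per pixel in the
# least significant bit of a uint8 image. Entirely illustrative.
import numpy as np

def embed_bits(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the LSB of the first bits.size pixels with the payload."""
    flat = image.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_bits(image: np.ndarray, n: int) -> np.ndarray:
    """Recover the first n hidden bits."""
    return image.flatten()[:n] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
secret = rng.integers(0, 2, size=256, dtype=np.uint8)        # hidden payload

stego = embed_bits(cover, secret)
assert np.array_equal(extract_bits(stego, 256), secret)

# Every pixel changes by at most 1 out of 255 -- invisible to a human,
# trivially recoverable by a machine.
print("max pixel delta:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))
```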
X-rays don't measure purely innate, genetic factors; they reflect things that are influenced by nurture as well as nature (and might, in principle, even show detectable differences based on differences in how technicians treat and react to the patient).
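To make the confound argument concrete, here's a toy demonstration (synthetic data, scikit-learn; the brightness shift is a hypothetical stand-in for any equipment or handling factor that happens to correlate with the label): a classifier can score well above chance on "images" that contain no anatomy at all.

```python
# Toy illustration: if a non-biological factor (scanner settings,
# positioning, exposure) correlates with the label, a classifier can
# do well without learning anything innate. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, size=n)

# Fake "x-rays": pure noise plus a tiny brightness shift tied to the
# label (imagine two clinics with different equipment serving
# different populations).
images = rng.normal(0.0, 1.0, size=(n, 64)) + 0.3 * labels[:, None]

clf = LogisticRegression(max_iter=1000).fit(images[:800], labels[:800])
print("held-out accuracy:", clf.score(images[800:], labels[800:]))
# Well above chance, despite the images containing no anatomy at all.
```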
That's not true. AI can determine race even from x-rays: https://www.nibib.nih.gov/news-events/newsroom/study-finds-a...
> In a recent study, published in Lancet Digital Health, NIH-funded researchers found that AI models could accurately predict self-reported race in several different types of radiographic images—a task not possible for human experts.
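For the curious, the setup behind a finding like this is conceptually just supervised classification: a standard image model is trained on radiographs with self-reported race as the label. A rough sketch of that kind of pipeline is below, assuming PyTorch/torchvision and a hypothetical dataset layout; this is not the study authors' actual code, just the ordinary recipe. The surprising part isn't the recipe, it's that the ordinary recipe works at all on a task human experts can't do.

```python
# Conceptual sketch: fine-tune an off-the-shelf CNN to predict
# self-reported labels from radiographs. "xrays/train" and its
# per-label subfolders are placeholders, not a real dataset.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # x-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: xrays/train/<label>/<image>.png
train_set = datasets.ImageFolder("xrays/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```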