Actually, from the quoted study, you do mean race. "In a recent study, published in Lancet Digital Health, NIH-funded researchers found that AI models could accurately predict self-reported race in several different types of radiographic images—a task not possible for human experts. These findings suggest that race information could be unknowingly incorporated into image analysis models, which could potentially exacerbate racial disparities in the medical setting."
That's why they're trying to understand where the model's signal comes from: race isn't biologically real, so there's no direct biological correlate for the system to pick up on. They're therefore looking for explanations along the lines of Google's AI that was caught hiding hints to itself steganographically in the images it generated (https://hackaday.com/2019/01/03/cheating-ai-caught-hiding-da...).