This is the rather more interesting part of the article: not just claiming some magic task the AI can do, but explaining how it does it and what it bases its decisions on. That means someone can, in principle, validate whether the approach is actually sound, or whether (as others have pointed out) the AI is just reading something else off those images that we're not aware of.
> Until recently, a model like the one Menon’s team employed would help researchers sort brains into different groups but wouldn’t provide information about how the sorting happened. Today, however, researchers have access to a tool called “explainable AI,” which can sift through vast amounts of data to explain how a model’s decisions are made.
> Using explainable AI, Menon and his team identified the brain networks that were most important to the model’s judgment of whether a brain scan came from a man or a woman. They found the model was most often looking to the default mode network, striatum, and the limbic network to make the call.
This (feature or attention maps) is BS. A real explanation would show what factors were being used to make the determination. Claiming the model is "looking at" something has no explanatory power; we already knew it was looking at the picture. These heat maps are great for showing people what they want to see and making wishy-washy claims like this, but they really don't explain anything in the commonly understood sense of telling us what is different.
I disagree. It clearly tells us _what_ is different; it just doesn't tell us _how_ it differs from case to case. So it's not a full explanation (which would, as you note, require a model of how it differs and, optimally, _why_), but it is a step towards an explanation, and not to be sneered at.
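To make the distinction concrete, here's a minimal sketch of the kind of saliency/heat map being discussed. This is not the method from the paper; the model, input shape, and class count are invented for illustration. The gradient of the predicted-class score with respect to the input highlights _where_ the score is sensitive ("where the model is looking"), but says nothing about _how_ those regions differ between classes.

```python
# Minimal gradient-saliency sketch (illustrative only; stand-in model and data).
import torch
import torch.nn as nn

# Hypothetical stand-in classifier: flattens a 64x64 "scan" into two class scores.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
model.eval()

# Fake single-channel scan; requires_grad so we can take gradients w.r.t. the input.
scan = torch.randn(1, 1, 64, 64, requires_grad=True)

logits = model(scan)
predicted = logits.argmax(dim=1).item()

# Gradient of the predicted-class score w.r.t. the input pixels:
# large magnitudes mark regions the score is sensitive to.
logits[0, predicted].backward()
saliency = scan.grad.abs().squeeze()

print(saliency.shape)  # (64, 64) heat map over the input
```

The output is exactly the kind of "what the model attends to" map in question: a map over the input, not a model of the underlying difference.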