> Knowing what area the model is "looking at" is virtually meaningless if you don't know what it's looking at. We already knew it was looking at the brain, so is it turtles all the way down?
Huh?
I’m not sure what you are saying here.
If there's some subset of the pixels (or whatever) that the model's predictions depend on more than the others, then it depends on those more than on the others.
Obviously it uses the data it is given. That doesn't mean that asking "which part of the data it is given is it actually using?" is not meaningful.
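
To make the point concrete, here's a minimal sketch of one standard way to formalize "which pixels the prediction depends on most": gradient-based saliency. The model and input below are placeholders (an untrained ResNet-18 and a random image), just to show the mechanics, not whatever attribution method the original study used.

```python
import torch
import torchvision.models as models

# Placeholder model and input: an untrained ResNet-18 and a random
# "image", purely to illustrate the mechanics of gradient saliency.
# (weights=None requires torchvision >= 0.13.)
model = models.resnet18(weights=None)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
top_class = logits.argmax(dim=1).item()

# Gradient of the top-class logit with respect to every input pixel:
# large magnitudes mark the pixels the prediction is most sensitive to.
logits[0, top_class].backward()
saliency = image.grad.abs().max(dim=1).values  # collapse color channels

print(saliency.shape)  # torch.Size([1, 224, 224]): one score per pixel
```

The resulting map is exactly the "subset of the pixels it depends on more than the others": the model uses all the data it is given, but not uniformly, and that non-uniformity is the thing being measured.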