You can see how this reduces the problem space. Think of it this way: suppose the data has some symmetry, for example it's mirrored across an axis, so that whenever X is a data point, -X is too.
Well, a model that is aware of this symmetry effectively has only half as much data to look at and one less thing to learn.
But that's only half of it. The truth is, assuming symmetries works pretty well even when the assumption is wrong. Why? Generalization. Building the symmetry into the model shrinks the hypothesis space, so a model that effectively sees less data will generalize more (whether it generalizes better is perhaps debatable, but it will definitely generalize more).
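A minimal sketch of baking in the mirror symmetry x ↦ -x: average the model's output over the two-element group {identity, negation}. The base model `f` here is just an arbitrary stand-in (a random linear map through tanh, my choice for illustration); the point is that the symmetrized version is invariant by construction, no matter what `f` is.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary stand-in "model" with no built-in symmetry.
W = rng.normal(size=(3, 3))

def f(x):
    return np.tanh(W @ x)

def f_sym(x):
    # Symmetrize by averaging over the group {x -> x, x -> -x}.
    # This guarantees f_sym(x) == f_sym(-x) for any base model f.
    return 0.5 * (f(x) + f(-x))

x = rng.normal(size=3)
print(np.allclose(f_sym(x), f_sym(-x)))  # invariance holds exactly
print(np.allclose(f(x), f(-x)))          # the raw model has no such guarantee
```

The symmetrized model never has to spend capacity learning that X and -X should be treated alike, which is the "one less thing to learn" above.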
This is the basic idea behind "geometric deep learning". There are loads of papers, but here's a presentation:
https://www.youtube.com/watch?v=w6Pw4MOzMuo