The usual way to get translation invariance is through the network's structure, i.e. a convolutional neural network: the convolution layers give translation equivariance, and pooling turns that into (approximate, local) invariance.
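To make that concrete, here's a minimal 1-D sketch (plain numpy, hypothetical kernel values): a "feature detector" kernel is slid over the signal, and global max pooling over the response makes the result identical no matter where the pattern sits in the input.

```python
import numpy as np

# hypothetical "feature detector" kernel
kernel = np.array([1.0, -1.0, 1.0])

def conv_then_pool(signal):
    # cross-correlation ('valid' mode), then global max pooling
    feature_map = np.correlate(signal, kernel, mode="valid")
    return feature_map.max()

pattern = np.array([0.5, 2.0, -1.0, 2.0])

a = np.zeros(20)
a[3:7] = pattern           # pattern near the start

b = np.zeros(20)
b[11:15] = pattern         # same pattern, shifted right by 8

# the pooled response is the same for both shifts
print(conv_then_pool(a), conv_then_pool(b))
```

The max pooled value is invariant as long as the shifted pattern stays fully inside the signal; real convnets use local pooling stacked over many layers, so the invariance is approximate and builds up gradually rather than being exact like this toy global-pool case.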
There have also been papers on scale/rotation-invariant convnets (again at the structure level), as well as networks that learn invariances without encoding them into the structure.
> There have been papers about scale/rotation-invariant convnets (again at the structure level) and also networks that learn invariances without encoding them into the structure.
I'm very interested in the former! Do you have any links?