For variational autoencoders (one baseline technique for this sort of thing): if you make the encoder and decoder each a single linear layer (no nonlinearity) and train it, it ends up minimizing the same objective as PCA (i.e., it finds the eigenfaces). A quick numerical check of the linear case is sketched below.
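Here's a minimal sketch of that check (plain numpy, toy data; the hyperparameters are my own choices, and it uses the deterministic linear autoencoder, which is where the classic PCA equivalence is cleanest, rather than a full VAE). It trains a one-linear-layer autoencoder on reconstruction error and verifies that the decoder spans the same subspace as the top PCA components:

```python
# Sketch: a linear autoencoder trained on reconstruction MSE recovers
# the top-k PCA subspace (assumed toy data, not a definitive recipe).
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 20, 3          # samples, ambient dim, latent dim

# Toy data: a random rank-3 signal plus small noise, then centered
# (PCA also operates on centered data).
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) \
    + 0.05 * rng.normal(size=(n, d))
X -= X.mean(axis=0)

# Linear autoencoder: x_hat = x @ W_enc @ W_dec, no nonlinearity.
W_enc = 0.01 * rng.normal(size=(d, k))
W_dec = 0.01 * rng.normal(size=(k, d))
lr = 1e-3
for _ in range(5000):
    Z = X @ W_enc
    R = Z @ W_dec - X                  # reconstruction residual
    grad_dec = Z.T @ R / n             # dL/dW_dec for L = ||R||^2 / 2n
    grad_enc = X.T @ (R @ W_dec.T) / n # dL/dW_enc
    W_enc -= lr * grad_enc
    W_dec -= lr * grad_dec

# PCA's top-k subspace from the SVD of the centered data.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pca_basis = Vt[:k]                     # (k, d)

# Compare subspaces via principal angles: singular values near 1
# mean the decoder's row space matches the PCA subspace.
Q_ae, _ = np.linalg.qr(W_dec.T)        # orthonormal basis, AE subspace
Q_pca, _ = np.linalg.qr(pca_basis.T)   # orthonormal basis, PCA subspace
cosines = np.linalg.svd(Q_ae.T @ Q_pca, compute_uv=False)
print("subspace alignment (all ~1.0):", np.round(cosines, 4))
```

Note the autoencoder only recovers the subspace, not the individual eigenfaces in order; the decoder's rows can be any basis of that subspace, which is why the comparison uses principal angles rather than matching vectors directly.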
I believe the same equivalence holds for GANs if you similarly restrict the generator and discriminator to be very simple (e.g., linear).
I bet there is a nonlinear non-NN approach that could perform well, but we may not have the investment in hardware, well-optimized algorithms, etc. to train big models of that kind fast.
edit: here's a paper that, among many other things, connects GANs to PCA in a simple (linear) case. Not the easiest to follow, though:
https://arxiv.org/pdf/1710.10793.pdf