Thank you for the welcome; I must have been lurking here for around 30 years or more (always changing accounts). Anyway, in this specific case, since M = Max(X, X) = X, you can't have F(M) = F(X)*F(X) = F(X) except when F(X) is in {0, 1}, so the independence property is essential. Welcome, fellow Lisper (for the txr and related submission) and math-inspired poster (this one and another related to statistical estimation) with OS-related interests (your HN account). OSes are not my cup of tea, but awk is not bad.
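To see the failure concretely, here is a minimal simulation sketch (just my own illustration, nothing from the original post) comparing the max of two independent draws against the max of a draw with itself:

    # Sketch: the product rule F_M(m) = F_X(m) * F_Y(m) needs independence.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)   # independent copy of X
    m = 1.0                      # point at which to evaluate the CDFs

    F_x = np.mean(x <= m)                          # F_X(m)
    F_max_indep = np.mean(np.maximum(x, y) <= m)   # ~ F_X(m) * F_Y(m)
    F_max_same = np.mean(np.maximum(x, x) <= m)    # = F_X(m), not F_X(m)**2

    print(F_x**2, F_max_indep)   # close to each other
    print(F_x, F_max_same)       # identical: the product formula breaks without independence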
In another post there are some comments on the connection between topology and deep learning. I wonder if there is a definition, similar to dimension in topology, which would allow you to estimate the minimal size (number of parameters) of a neural network so that it is able to achieve a certain state (for example, obtaining the capacity for one-shot learning with high probability).
Yes, independence is absolutely an assumption that I (implicitly) made. It's essential for the convolution identity to hold as well; I just carried that assumption through.
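(For reference, the convolution identity here is the standard one for the density of a sum, f_{X+Y}(z) = ∫ f_X(x) f_Y(z − x) dx, and the factorization of the joint density into f_X·f_Y is exactly the independence assumption.)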
We share an interest in AWK (*) then :) I don't know OS at all. Did you imply I know Lisp? I enjoy Scheme, but have never used it in anger. Big fan of The Little Schemer series of books.
(*) Have to find that Weinberger face Google-NY t-shirt. Little treasures.
Regarding your dimension comment: this is well understood for a single layer, that is, for logistic regression. Lehmann's book will have the necessary material. With multiple layers it gets complicated real fast.
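Just to make the contrast in scale concrete, a toy parameter count (the sizes are made up purely for illustration):

    # Toy parameter counts; sizes picked arbitrarily for illustration.
    d = 100                       # input dimension
    logistic = d + 1              # single layer: weights + bias
    hidden = [256, 256]           # a small two-hidden-layer MLP
    mlp, prev = 0, d
    for h in hidden:
        mlp += prev * h + h       # weights + biases for this layer
        prev = h
    mlp += prev + 1               # output layer
    print(logistic, mlp)          # 101 vs roughly 92k parameters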
The best performance estimates, as in, ones within the realm of being practically useful, largely come from two approaches: one from PAC-Bayesian bounds, the other from Statistical Physics (but those bounds are data-distribution dependent). The intrinsic dimension of the data plays a fundamental role there.
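To give a flavor of the PAC-Bayes side, here is a sketch of a McAllester-style bound (the exact constants vary across references, so treat this as illustrative rather than the sharpest known form):

    import math

    def pac_bayes_bound(emp_risk, kl, n, delta=0.05):
        # One common relaxed McAllester-style form:
        #   true risk <= emp_risk + sqrt((KL(rho||pi) + ln(2*sqrt(n)/delta)) / (2n))
        # Constants differ slightly between references; illustrative only.
        return emp_risk + math.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))

    # e.g. 60k examples, 2% empirical risk, KL of 5000 nats: still a non-vacuous bound
    print(pac_bayes_bound(0.02, 5000.0, 60_000))   # ~0.22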
The recommended place to dig around is JMLR (the Journal of Machine Learning Research).
Perhaps your txr submission suggested a Lisp flavor. The intrinsic dimension concept looks interesting, as does the VC dimension, but both concepts are very general. Perhaps the Lehmann book you mean is Elements of Large Sample Theory?
I meant Lehmann's Theory of Point Estimation, but Elements of Large Sample Theory is a good book too. The newer editions of TPE are a tad hefty in page count. The earlier versions would serve you fine.
The generic idea is that the smaller these dimensions, the easier the prediction problem. Intrinsic dimension is the one that comes closest to topology. VC is very combinatorial and gives the worst of worst-case bounds: for a typically sized dataset one ends up with an "error probability" estimate of less than 420. With PAC-Bayes the bounds are at least less than 1.0.
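To see why the VC route goes vacuous, here is a rough sketch of one textbook form of the bound (constants vary from book to book; illustrative only):

    import math

    def vc_generalization_gap(n, d_vc, delta=0.05):
        # One common textbook form of the VC bound on the generalization gap:
        #   sqrt(8 * (d_vc * (ln(2n/d_vc) + 1) + ln(4/delta)) / n)
        # Constants differ across references; this is illustrative only.
        return math.sqrt(8.0 * (d_vc * (math.log(2 * n / d_vc) + 1) + math.log(4 / delta)) / n)

    # Even a modest model can have a VC dimension around 100k; with 60k examples
    # the "bound" lands far above 1, i.e. it says nothing.
    print(vc_generalization_gap(60_000, 100_000))   # ~4, vacuous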