There is something intensely interesting here from an information-theory perspective. I need to dig into this more deeply, but I wrote a paper in 2008 (cited 110 times) called Survival of the Sparsest. I showed that when computational networks were permitted to evolve their connectivity under a selective regime, they would evolve toward a kind of minimum-energy state with minimal network complexity (economical, with no spurious connectivity). Looking at biological gene networks, I showed that this pattern of sparse connectivity showed up again and again.
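To make that concrete, here is a minimal sketch of the idea, not the paper's actual model: the target map, mutation scheme, and connection cost below are all hypothetical. Networks are selected for matching a target input-output function, with a small fitness cost per connection, and pruning mutations let selection strip out spurious edges.

    import numpy as np

    rng = np.random.default_rng(0)
    N_GENES, POP, GENS = 5, 60, 400

    # Hypothetical "true" sparse regulatory map the evolving networks must reproduce.
    true_W = np.zeros((N_GENES, N_GENES))
    true_W[0, 1] = 1.0
    true_W[1, 2] = -1.0
    inputs = rng.standard_normal((30, N_GENES))
    target = np.tanh(inputs @ true_W)

    def fitness(W):
        error = np.mean((np.tanh(inputs @ W) - target) ** 2)
        return -(error + 0.001 * np.count_nonzero(W))    # small cost per connection

    def mutate(W):
        child = W.copy()
        i, j = rng.integers(N_GENES, size=2)
        if rng.random() < 0.5:
            child[i, j] = 0.0                             # prune a connection
        else:
            child[i, j] += 0.2 * rng.standard_normal()    # tweak a weight
        return child

    pop = [rng.standard_normal((N_GENES, N_GENES)) for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)               # truncation selection
        pop = pop[:POP // 2] + [mutate(p) for p in pop[:POP // 2]]

    best = max(pop, key=fitness)
    print("surviving connections in the fittest network:", np.count_nonzero(best))

Because every extra edge carries a fitness cost, the surviving networks tend to keep only the connections that actually contribute to the target function.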
For a given function, this suggests that there may be only one or two network topologies that satisfy the conditions of being both efficient and functional. That makes a good null hypothesis when you can see the input and output states and need to guess the structure of the network. If you don't see that structure, it may point to some other confounding variable lurking out there; for example, the network has extra connectivity because redundancy is a functional requirement and not just nice to have.
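As a toy illustration of the "few minimal topologies per function" point (entirely my own construction, not from the paper: two-node threshold dynamics and a made-up target map), you can brute-force every small topology and count how many realise the target with the fewest edges:

    from itertools import product

    EDGE_STATES = (-1, 0, 1)        # repression, no edge, activation
    N_NODES = 2                     # node 0 = input, node 1 = output

    def step(state, W, inp):
        """One synchronous update of a simple threshold network."""
        new = []
        for i in range(N_NODES):
            drive = sum(W[i][j] * state[j] for j in range(N_NODES))
            if i == 0:
                drive += inp        # external signal feeds node 0
            new.append(1 if drive > 0 else 0)
        return tuple(new)

    def output_for(W, inp, steps=8):
        """Run to an (approximate) steady state and read node 1."""
        state = (0, 0)
        for _ in range(steps):
            state = step(state, W, inp)
        return state[1]

    target = {0: 0, 1: 1}           # hypothetical target: output follows input

    hits = {}                       # edge count -> matching topologies
    for weights in product(EDGE_STATES, repeat=N_NODES * N_NODES):
        W = [list(weights[i * N_NODES:(i + 1) * N_NODES]) for i in range(N_NODES)]
        if all(output_for(W, inp) == out for inp, out in target.items()):
            n_edges = sum(1 for w in weights if w != 0)
            hits.setdefault(n_edges, []).append(W)

    if hits:
        k = min(hits)
        print(f"fewest edges that realise the target: {k}; topologies found: {len(hits[k])}")

Even in this tiny case, only a handful of minimal-edge wirings realise the target map, which is the sense in which the sparse solution makes a useful null hypothesis.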
I'm guessing that there's some relationship here between √(2N) and the minimal-complexity networks I was evolving. I haven't had time to digest this, but does anyone know whether N refers to the number of elements in the system or the number of connections in an N×N matrix?
Thank you for sharing this! Not my field at all [1], but reading through your result, I cannot help but think of L1-regularization in ensemble optimization/learning algorithms; see for instance Section 6.3 in [2]. Perhaps it's too leaky of a heuristic, but it definitely struck a chord.
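For anyone who hasn't seen the connection: the L1 penalty is what pushes coefficients to exactly zero, which has the same "economy of connections" flavour. A tiny synthetic example (not from [2], just an illustration using scikit-learn; the data and penalty strength are made up):

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 20))
    # Only two of the twenty features actually matter.
    y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(200)

    dense = LinearRegression().fit(X, y)
    sparse = Lasso(alpha=0.1).fit(X, y)

    print("nonzero coefficients, ordinary least squares:",
          np.sum(np.abs(dense.coef_) > 1e-8))   # typically all 20
    print("nonzero coefficients, L1-penalised fit:     ",
          np.sum(np.abs(sparse.coef_) > 1e-8))  # typically just the real ones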
Thanks for sharing; I need to dive into that more deeply when I have some relaxation time!
It's funny that you mention that because the mathematics used to describe the dynamics of artificial neural networks are the same mathematics used to describe (artificial) gene networks. In effect, a gene network is the 'brain' of the cell. In fact, it was such an effective system that life re-evolved this computational architecture with a different substrate: neurons. Functionalism at its finest! Insofar as cells communicate with one another and are composed of the exact same gene networks (albeit in different states), you basically have one giant meta neural network comprised of identical 'tiles' neural networks each 'tile' connected to their nearest neighbors.
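The shared formalism is easy to show. Below is a toy version (all parameters made up by me): each 'cell' runs the same recurrent update x' = tanh(Wx + b), the standard form for both artificial neural nets and artificial gene networks, and cells on a ring also feel their nearest neighbours through a weak coupling term:

    import numpy as np

    rng = np.random.default_rng(1)
    GENES, CELLS, STEPS = 4, 8, 50

    W = rng.standard_normal((GENES, GENES)) * 0.5   # identical intra-cell network
    C = np.eye(GENES) * 0.1                         # weak nearest-neighbour coupling
    b = rng.standard_normal(GENES) * 0.1
    x = rng.standard_normal((CELLS, GENES))         # each cell starts in a different state

    for _ in range(STEPS):
        left = np.roll(x, 1, axis=0)
        right = np.roll(x, -1, axis=0)
        # every cell runs the same W; only the neighbour input differs
        x = np.tanh(x @ W.T + (left + right) @ C.T + b)

    print(np.round(x, 2))   # per-cell 'expression' states after coupling

Same update rule in every tile, different states in each: a meta-network of identical networks.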
From the article: "On the right, Majumdar and Dean were surprised to find that the distribution dropped off at a rate related to the number of eigenvalues, N; on the left, it tapered off more quickly, as a function of N²."
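If my reading is right that N is the matrix dimension (i.e. the number of eigenvalues), then the √(2N) scale is easy to check numerically: for an N×N Gaussian symmetric matrix with the common normalisation below, the largest eigenvalue sits near √(2N). The exact constant depends on how the entries are scaled, so treat this as a sanity check, not a definition.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 2000
    M = rng.standard_normal((N, N))
    A = (M + M.T) / 2                  # symmetric Gaussian (GOE-style) matrix
    top = np.linalg.eigvalsh(A)[-1]    # largest eigenvalue

    print(top, np.sqrt(2 * N))         # the two numbers should be close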
Paper http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2538912/
Lots of meat in the supplementary http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2538912/bin/msb2...