Hacker News
At the Far Ends of a New Universal Law (simonsfoundation.org)
183 points by digital55 on Oct 15, 2014 | hide | past | favorite | 15 comments


There is something intensely interesting here from an information theory perspective. I need to dive into this deeper, but I wrote a paper in 2008 (cited 110 times) called Survival of the Sparsest. I showed that when computational networks were permitted to evolve their connectivity (under a selective regime), they would evolve toward a kind of minimum-energy state with minimal network complexity (economical, with no spurious connectivity). Looking at biological gene networks, I showed that this pattern of sparse connectivity showed up again and again.

For a given function, this suggests that there may only be one or two network topologies that satisfy the conditions of being both efficient and functional. This makes a good null hypothesis if you can see the input and output states and need to guess the structure of that network. If you don't see that structure, then maybe this suggests that there is some other confounding variable lurking out there. For example, maybe the network has extra connectivity because redundancy is a functional requirement and not just nice to have.

I'm guessing that there's some relationship here between root(2N) and the minimal-complexity networks I was evolving. I haven't had time to digest this, but does anyone know whether N refers to the number of elements in a system, or to the number of connections in an NxN matrix?

Paper http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2538912/

Lots of meat in the supplementary http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2538912/bin/msb2...
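The evolve-toward-sparsity idea can be caricatured in a few lines: hill-climb a weight matrix under selection on function alone, with deletion mutations allowed, and spurious connections drift away. This is a toy sketch, not the paper's actual model; the network size, fitness function, and mutation scheme are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

# Invented ground truth: a sparse network defines the "function" the
# evolved network must perform (map three input states to their outputs).
W_true = np.where(rng.random((N, N)) < 0.2, rng.normal(size=(N, N)), 0.0)
X = rng.normal(size=(N, 3))          # three input conditions
Y = W_true @ X                       # required outputs

def error(W):
    return np.sum((W @ X - Y) ** 2)  # functional error only: no sparsity term

W = rng.normal(size=(N, N))          # start fully connected
err0 = error(W)
for _ in range(20000):
    cand = W.copy()
    i, j = rng.integers(N, size=2)
    if rng.random() < 0.3:
        cand[i, j] = 0.0                        # deletion mutation
    elif W[i, j] != 0.0:
        cand[i, j] += rng.normal(scale=0.1)     # tweak an existing edge
    if error(cand) <= error(W):                 # selection on function alone
        W = cand

density = np.count_nonzero(W) / N**2            # ends below 1.0: spurious
                                                # edges get pruned neutrally
```

Because deletions are accepted whenever they don't hurt function, connectivity ratchets downward without any explicit sparsity penalty, which is the qualitative point being made above.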


Thank you for sharing this! Not my field at all [1], but reading through your result, I cannot help but think of L1-regularization in ensemble optimization/learning algorithms; see for instance Section 6.3 in [2]. Perhaps it's too leaky of a heuristic, but it definitely struck a chord.

[1] http://xkcd.com/793/

[2] http://face-rec.org/algorithms/Boosting-Ensemble/8574x0tm63n...
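For anyone who hasn't seen the L1 connection in action: a plain soft-thresholding (ISTA) loop is a generic way to watch an L1 penalty drive most coefficients to exactly zero. This is an illustrative sketch unrelated to the cited paper; the problem sizes and penalty strength are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 20
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]         # sparse ground truth
X = rng.normal(size=(n, p))
y = X @ beta_true + 0.01 * rng.normal(size=n)

lam = 1.0                                # L1 penalty strength
step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L, L = Lipschitz const of the gradient

beta = np.zeros(p)
for _ in range(2000):
    z = beta - step * (X.T @ (X @ beta - y))                     # gradient step
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
```

The soft-threshold step zeroes any coefficient whose evidence falls below the penalty, so the fitted model ends up sparse, in the same spirit as selection pruning spurious network edges.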


Thanks for sharing, I need to dive into that deeper when I have some relaxation time!

It's funny that you mention that, because the mathematics used to describe the dynamics of artificial neural networks is the same mathematics used to describe (artificial) gene networks. In effect, a gene network is the 'brain' of the cell. In fact, it was such an effective system that life re-evolved this computational architecture with a different substrate: neurons. Functionalism at its finest! Insofar as cells communicate with one another and are composed of the exact same gene networks (albeit in different states), you basically have one giant meta neural network made of identical neural-network 'tiles', each tile connected to its nearest neighbors.
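One common version of the shared formalism is the saturating rate equation dx/dt = -x + tanh(Wx + b), which appears in both the recurrent-neural-network and gene-circuit literatures. A minimal sketch, with all parameter values invented:

```python
import numpy as np

def euler_step(x, W, b, dt=0.1):
    # One Euler step of dx/dt = -x + tanh(W x + b). Read W as synaptic
    # weights (a recurrent neural net) or as regulatory influences of one
    # gene product on another (a gene circuit): the equation is the same.
    return x + dt * (-x + np.tanh(W @ x + b))

rng = np.random.default_rng(3)
n = 5
W = rng.normal(scale=0.3, size=(n, n))   # weak coupling -> a stable steady state
b = rng.normal(scale=0.2, size=n)
x = rng.normal(size=n)
for _ in range(2000):
    x = euler_step(x, W, b)              # settles to a fixed activity/expression pattern
```

With weak coupling the dynamics contract to a fixed point; the only difference between the two readings is whether you call that fixed point a memory state or an expression profile.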


From the article: "On the right, Majumdar and Dean were surprised to find that the distribution dropped off at a rate related to the number of eigenvalues, N; on the left, it tapered off more quickly, as a function of N²."


N is the number of variables in the system -> so for your network problem, it should correspond to the undirected edges in the graph...
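A quick numerical sanity check of the root(2N) question, using the normalization in which the largest eigenvalue of an N x N Gaussian symmetric matrix sits near sqrt(2N) (conventions differ by constant factors; this sketch assumes off-diagonal variance 1/2):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 400                                # N = matrix dimension = number of eigenvalues
M = rng.normal(size=(N, N))
A = (M + M.T) / 2                      # GOE draw: off-diagonal entries have variance 1/2

lam_max = np.linalg.eigvalsh(A).max()  # largest eigenvalue
edge = np.sqrt(2 * N)                  # semicircle edge in this normalization
```

The fluctuations of lam_max around that edge are what follow the Tracy-Widom distribution the article is about.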


Do you think that it may be possible to predict or model the structure of a brain with similar formulae?


There's a pruning threshold in some universal-substrate compression/optimization.

Until a macro-phenomenon repeats enough to reach that threshold, lossy chaos is used to minimize state. (The sim remains lower-resolution.) Once an arrangement starts repeating enough to exceed that threshold, extra regularity is allocated to it, potentially also assisting self-reinforcement (reproduction). That is, cycles (and attention?) are focused on the interesting parts.

Compare in speculative fiction: the terrestrial complexity-limit reached in Greg Bear's Blood Music, or the 'zones of thought' in Vernor Vinge's A Fire Upon the Deep et al. (And if my hunch is right, perhaps also the 'ragtime' in Leonard Richardson's Constellation Games.)


There's a naturally destructive boundary which can automatically limit arbitrarily redundant "information" in a subset of certain types of derived objects.

Until a visible event recurs frequently, such that it reaches the pre-defined boundary, an undisclosed entity is able to take advantage of destructive entropy in order to simplify the situation when deriving an interpreted expression for these sorts of objects. (In other words, virtual recreations are deliberately created with poor quality in seemingly repetitive situations like this.) When a collection of objects is particularly repetitive for some reason, beyond the previously described limits of repetitiveness, an undisclosed entity might permit even more repetitions than usual, which obviously begets even more replicas of this peculiar set of objects (continuity). This suggests that repetitive loops (and interest?) become the appealing part of a pretend virtual model that an undisclosed entity might choose to observe.

All of this is similar to the circumstances described in the following novels:

- Blood Music by Greg Bear

- A Fire Upon the Deep by Vernor Vinge

- Constellation Games by Leonard Richardson


Which one of you is the AI? :)


In all seriousness, is this really AI?


Great article! I only wish I had more articles like it.


Quanta articles tend to be very good. They have an RSS feed: http://www.simonsfoundation.org/quanta-archive/feed


but not too many more, or they'd be interrelated and go through a phase change destroying the degree to which you like them.


Obviously that would imply a system with a similar distributive growth, one containing the confounding variable and its strong coupling to the other elements in the whole. Then we could redefine the selection of elements and interrelations and like it again.


If you don't mind a little math, I found this paper by one of the principals in the area (Deift) to provide more background:

http://www.icm2006.org/proceedings/Vol_I/11.pdf



