Why Is the Human Brain So Efficient? (2018) (nautil.us)
208 points by rcshubhadeep on June 5, 2020 | 166 comments



They aren't the same thing. They are different classes of objects, different tasks. This comparison is kind of silly.

I'd hate my computer to have the memory accuracy or the computational accuracy of my brain. I'd hate to have the creativity and inspiration of a computer.

Delete being such a nontrivial operation is probably a good thing for humans. Copy being imperfect probably has something to do with the phenomenon we call imagination. We use computers because they are complementary, not substitutive.

They're just so fundamentally different.


When we say the brain has poor computational accuracy, we’re usually talking about the conscious brain we’re aware of. But our low-level motor actions and perceptions, coordinated by the brain, require a lot of precise computation. These low-level brain computations are the thing to compare to AI, not our conscious thinking. Our conscious mind is more like low-precision software running on top of an enormously powerful computer.


> But our low-level motor actions and perceptions, coordinated by the brain, require a lot of precise computation.

That accuracy is more likely achieved through fast, analog feedback loops than precise calculation.



We don't have analog hardware. Nerves are digital. The best we can do with them is pulse-frequency modulation.


I'm not convinced real neurons are just binary summators. But regardless of that, there's also chemical transmission involved, which is an analog thing.


Right, but the end state of all that is "neuron fires" or "neuron doesn't fire", or at best "neuron fires n times per second" - a binary state.


We have some behavior that depends on very exact timing of neuron spikes (e.g. determining the direction of sounds by comparing signals from both ears), so that's kind of analog - though it does get reduced to a binary state in the end: either the "detector for an offset of x microseconds toward the left" fires or it doesn't.
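A rough sketch of that idea in code (a toy cross-correlation over made-up numbers, not a claim about the actual neural wiring):

    import numpy as np

    fs = 100_000                        # 100 kHz sampling: 10 us resolution
    t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal
    left = np.sin(2 * np.pi * 500 * t)  # 500 Hz tone at the left ear
    right = np.roll(left, 30)           # right ear hears it 300 us later

    # One "coincidence detector" per candidate offset; the offset that best
    # aligns the two signals is the one whose detector fires.
    lags = np.arange(-50, 51)
    scores = [np.dot(left, np.roll(right, -lag)) for lag in lags]
    print(lags[int(np.argmax(scores))] / fs * 1e6, "microseconds")  # 300.0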


How come in many sports people are good because they basically know everything by heart?

Professional baseball batters, for example, are completely baffled by professional throwers from other sports.


It's unlikely we compute in any conventional sense; the hardware is reality, and it is going to exploit every available effect that is energetically efficient.

Letting FPGAs "go" into the analog realm is an interesting window: https://news.ycombinator.com/item?id=21253267

Glial brain cells: https://news.ycombinator.com/item?id=22161192


> But our low-level motor actions and perceptions, coordinated by the brain, require a lot of precise computation.

But we don't actually do that precise computation. Try repeating any action exactly and you will see some inaccuracy.


no, you will see some variability


Accuracy is a function of variability and bias.


How do you even define accuracy for an analog process which solves for unknown formulations? Not saying you can't, but AFAIK there's no ground to call them accurate. It's certainly chucking a lot of data around in functional ways.


Reaction time and the physical operation of the body are things that seem obvious to me. Try to design a control system for a machine that could replicate a gymnast, for example.


Sorry, but in my book responsiveness and functionality don't have a thing to do with accuracy (and neither does parallelism, if I may throw it in there).


Well, I guess I don't know what you mean by accuracy, then. I'm thinking of the accuracy required to hit the right note in a guitar solo, or the accuracy required to time a clean & jerk correctly, or the accuracy required to track a fast-moving object with the eye.

Those things can be surpassed by machines, of course, but if you wanted to design a machine to do all of them? With human efficiency?


Or a person walking. Boston Dynamics is only now making robots that come near animals in their walking ability.


> But our low-level motor actions and perceptions, coordinated by the brain, require a lot of precise computation.

Such as?


Identify a ball in a box of items, toss it up in the air, and then catch it. Congratulations, you've just exhibited more computational power than most computers are capable of, including some very precise physics simulation and inverse kinematics.
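Even the prediction step alone is nontrivial. A toy sketch of one small piece (straight-up ballistics only, made-up numbers; the perception and inverse-kinematics parts are far harder):

    G = 9.81  # m/s^2

    def time_to_catch(v0, h0=0.0):
        # Solve h0 + v0*t - 0.5*G*t^2 = 0 for the positive root: when the
        # ball comes back down to hand height.
        return (v0 + (v0 * v0 + 2 * G * h0) ** 0.5) / G

    print(round(time_to_catch(4.0), 2))  # tossed up at 4 m/s: ~0.82 s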


> memory accuracy

There are individuals with very good memories for all sorts of things, who seem to manage to reconsolidate their memories near-losslessly (at least within the confines of the mental schema they organize said memories into.) Surgeons with anatomy, lawyers and judges with case-law, etc.

At this point I’m convinced that the lossy method humans intuitively reconsolidate memories with, isn’t so much a feature of our mental architecture, as it is a part of the “operating system” we build up on top of our mental architecture—i.e. it’s a skill, something we can learn (or accidentally invent) a better approach to.

> computational accuracy

We compute ratios with extremely high accuracy/precision. Just look at a professional billiards player.

We don’t have a good mind for integer math; but you can translate most integer math problems into ratio problems, and then they become intuitively solvable to humans. (This is basically what geometry is.)


I remember when I first took a data structures course, learning things like trees and linked lists, I had a total paradigm shift with respect to how I understood my own mind.

I had never really thought about the different ways that data could be organized, and how they perform differently. I figured that since this was so basic to computer science, my own mind couldn't be doing something completely different. It might not be the same in detail as any computer data structure, but it couldn't be completely unrelated either.

I realized that data structures might make information feel different. For example, I can only tell you what the 16th letter of the alphabet is by counting from "A". I can't sing the alphabet song backwards. These are at least qualitatively characteristics of a singly linked list. The same goes for my phone number and my credit card number. I wouldn't be able to dictate them backwards, except by mentally traversing them forwards and then holding the whole number in my conscious memory as I reverse the digits, or if that's too tiring, traversing it forwards multiple times and stopping at different points.
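A toy sketch of the analogy (illustrative only, obviously not a model of how memory actually works):

    import string

    # The alphabet as a singly linked list: each letter knows only its successor.
    next_letter = dict(zip(string.ascii_uppercase, string.ascii_uppercase[1:]))

    def nth_letter(n):
        letter = 'A'
        for _ in range(n - 1):  # the only way in is to walk forward from the head
            letter = next_letter[letter]
        return letter

    print(nth_letter(16))  # 'P', found by counting from 'A'
    # There is no prev_letter dict, so "recite it backwards" means traversing
    # forwards first and buffering the result - exactly as described above.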

I have many detailed memories of past events, conversations, and trivial facts, but it's hard for me to remember them on command. I need some kind of prompt to point me to the right index where I can retrieve it.

I agree a lot with the interpretation that we have a messy OS that bungles memory management and does lossy compression and a poor job of disk defrag, running on some very impressive hardware.


Recently I was thinking about how my brain answers the question "What's your favorite movie?" and how I can easily answer that question, but it's harder to answer a question like "What's your favorite movie where a gun is fired?"

It seems to me that whenever I watch a movie, if I really liked it, I check my perceived quality of the movie against the quality of my current favorite movie, and if the new movie beats the old favorite, I update the "favorite movie" pointer to point to the new movie. When someone asks "What's your favorite movie?" I just return the name of whatever the favorite_movie points to.
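In code, that's just a running max with an O(1) read (a sketch; the names are made up):

    favorite_movie, best_rating = None, float('-inf')

    def watch(title, rating):
        global favorite_movie, best_rating
        if rating > best_rating:                         # beats the current favorite?
            favorite_movie, best_rating = title, rating  # repoint

    watch("Movie A", 7)
    watch("Movie B", 9)
    print(favorite_movie)  # O(1): just dereference the pointer, no search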

The question of "Favorite movie where a gun is shot" is much harder, I think, because my memories aren't really indexed that way. I can't query by "gun is shot" so I can't get the subset of movies I've seen with gun shots and pick my favorite.

To me, it seems like my brain, at least for movies, has something of a key-value store, which I can scan, slowly and imperfectly, but not query with complex questions. Or, maybe, if the queries are too complex they time out and I don't get back any results.


>The question of "Favorite movie where a gun is shot" is much harder, I think, because my memories aren't really indexed that way. I can't query by "gun is shot" so I can't get the subset of movies I've seen with gun shots and pick my favorite.

And yet as much as yours or mine, or likely most people's memories aren't indexed that way, I bet if you asked enough people you'd find someone who organized their memories in some obscure strange way that would let them answer that question immediately.


>Recently I was thinking about how my brain answers the question "What's your favorite movie?" and how I can easily answer that question,

I can hardly answer that question.

I think I would need to make a table of all the movies I've watched, with columns for different quality aspects, and then I could calculate a score for each movie. The favorite movie would then be the one with the highest score.

Then I'd think a few minutes about the table, before answering that I do not really have a favorite movie.


Some very interesting points.

I would like to say that the hardware is a bit of a mess as well. There are weird redundant bits of legacy hardware that aren't required any more, but nobody's bothered to remove them from the system (appendix, wisdom teeth). There are oddly paired systems (genitals combine waste removal with reproduction; the nose combines air filtering with scent detection; the mouth combines food intake and air intake/outlet). And oddly co-dependent systems (lose your sense of smell and your sense of taste takes a significant hit).


What do you mean? That sounds just like a modern CPU to me! :)


A similar example I heard was that a chess grandmaster may be able to take a look at a chessboard with a game in play and memorize the entire board immediately. But only if the board "makes sense" - all the pieces are in positions that could actually be reached in a real game.

If you take those same pieces and rearrange them willy-nilly, then this ability to instantly memorize its layout goes away.


I recall that Jeff Hawkins, when talking about his Hierarchical Temporal Memory ML model (which is supposed to be brain-like), said something like "Nature has spatial and temporal locality. Brains evolved to best store information that also has spatial and temporal locality—in other words, to recapitulate and model the natural world. To the degree that some pattern is akin to one that arises in nature, the brain can store and compute upon it easily. To the degree that a pattern is 'arbitrary'—something that cannot arise in nature—the brain finds it hard to hold onto."

The moment-in-time arrangement of chess pieces on a board does not exactly have spatial or temporal locality; but if one has learned a set of mental transformation rules that let that board be translated into a narrative for how it got to be that way—then that narrative is itself something quite natural for the brain's architecture to represent.


You can even construct memory palaces which are very easy to learn. I still remember them from 10 years ago.


A surgeon might remember anatomy with great accuracy, but he is unlikely to remember the details of some case law nearly as well. Our memories are associative; that is how they differ from computers. It's easy for a surgeon to remember anatomy because he has been immersed in it for a long time and it all interconnects, i.e. there are a lot of associations to call up the memory. Computers, on the other hand, could remember 20 facts about anatomy and 20 facts about case law no problem, without needing any framework to attach them to.


I don't see how any of that makes the comparison "silly". It's not like we have so many instances of computer paradigms to go around comparing.


It's like measuring a gun in units of swords. You can achieve similar ends, but they aren't the same. Or the companionship of a spouse in units of pets. The fundamental unit is at the affect level, not the application level. That's a very important distinction.

Any model comparing them at the application level is therefore fundamentally misinformation.

It's even worse because it suggests an equivalence is possible. As if you could get X units of dogs as a substitute for Y units of spouse. You just have to find the magic ratio!

Wrong framings can stick people in unsolvable problems, sometimes for generations. It's really bad.

They're fundamentally different, and pretending otherwise is just going in one of the most useless directions possible.


Exactly. We shouldn't be so arrogant in thinking the modern computer is the same thing as a human brain.

This comparison is pointless. The human brain is beyond comprehension. Computers are just logic calculators.


>The human brain is beyond comprehension.

Many things in the world were beyond our comprehension, until they weren't. I see no reason why the human brain's inner workings should evade our understanding indefinitely.


I imagine a group of dogs sitting around and asking "How are we so good at thinking about fun ways to play with squeaky toys?"

The truth is that our ability to reason about ourselves is limited by our ability to reason. Perhaps there are aliens out there who would laugh at our cognitive abilities, theirs being so much better than ours.


Less complicated systems successfully reason about more complicated systems all the time. Ditto for self-reasoning. See: bootloaders, update systems, and package managers.

In order to prove that some kind of meta-cognition is inherently beyond our grasp, you don't just have to prove that the system we are attempting to reason about is more complex than ourselves, you also have to prove that the problem isn't meaningfully reducible. Otherwise we can and will eventually figure out the mental tools we need to tackle the problem, and tackle it.

The same applies to brute physical strength. Humans have no problem building machines vastly stronger, tougher, larger, more precise etc than ourselves even though narrow-minded reasoning might lead you to believe that this was impossible ("a tool can only cut something less hard/strong than itself," "a ruler can only measure less precisely than itself" etc).


I think you're describing an analogue of Turing-completeness. It's not (to me) a question of whether we can reason about something: it's a question of how long it takes, and how much knowledge is involved in the process.

What you're describing sounds like asking a PDP-11 to run GPT-3. Technically possible, in the broadest sense of the word. But a computer that can run GPT-3 successfully will look at that PDP-11 in much the same way that we look at a dog playing with a chew toy.


On the contrary, I think your example proves my point quite well. I understand very little about PDP-11s and only slightly more about GPT-3's inner workings, yet I have no trouble reasoning about whether or not a PDP-11 is suitable for running GPT-3 or something even more difficult to formally reason about, say Microsoft Windows. I have a mental model of computer performance and compute requirements that simplifies the question from a difficulty of "Oh, it's Turing complete, halting problem, let's throw our arms in the air like this is an infomercial!" through "You need to understand literally everything about PDP-11s and Windows" all the way to "50 years of exponential growth is a hella large factor to try squeezing down anything by." It's a trivial question hiding in the skin of an intractable question, and it perfectly exemplifies why it's silly to believe that human cognition will forever remain intractable.

In order for a problem to forever remain in "let's throw our arms in the air like it's an infomercial" territory, it must not merely be difficult in its most pedantically defined complete form, it also must stymie the search for useful relaxations and workarounds. Nobody fears running a program on account of being unable to prove that it will halt: they just kill the program if it locks up, or (equivalently) set a timeout. Personally, I'd just avoid throwing my arms in the air like an infomercial altogether.

EDIT: substituted GPT3 -> Windows because arguments about GPT-3 and/or a set of incarnations being Turing Complete would be irrelevant to the main point.


> "a ruler can only measure less precisely than itself"

That's actually super interesting. How do you bootstrap (as it were) a ruler?

Like, assume you don't have any machine that is itself created by using a ruler (so no screws or gears, except hand cut ones).

Obviously it's possible, since we did it. But how do you do it?


Symmetry.

To make a very flat surface plate, you can grind three less-flat surface plates against each other (three because if you only have two, the common surface guaranteed by symmetry can still have curvature). To make a very precise cylinder, you can grind a less-precise cylinder in a "V" formed by two flat surfaces. To make a very regular lead screw, you can grind a less regular lead screw with a less regular reversible nut. Now you can construct a micrometer, and from there your mill/lathe and you're off to the races :)

That's how you bootstrap precision, but taking precision from a basic form and putting it into a complex form is a whole other art. These days we use numerical techniques, but historically geometric construction would have been the ticket. For instance, take a string, use a "standard twig" or something to mark 3+4+5 sections of equal length, cut & tie it into a loop, tension it into a triangle with sides of length 3, 4, and 5 using the marks, and now you've got a right angle.
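(The string trick works by the converse of the Pythagorean theorem - a triangle whose sides satisfy

    3^2 + 4^2 = 9 + 16 = 25 = 5^2

must contain a right angle, so tensioning the loop at the marks forces one.)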

Modern metrology looks a bit different because time symmetry started beating spatial symmetry. Badly. Absurdly badly. Like "you get twice the digits for the same price" badly. You get 6 digits for pennies in a quartz oscillator, and I know the metrologists have chased their clock stability out to at least 19 digits, probably more by now. Also, you can just transmit the reference over the air to globally coordinate accuracy for dirt cheap; you don't even have to ship blocks of platinum-iridium in an inert atmosphere. Modern metrology is basically the art of "rebasing" other types of measurement onto measurements of time, because time measurements are so ridiculously great - the meter itself is now defined as the distance light travels in a vacuum in 1/299,792,458 of a second.


Screw lathes were vital to that. A lathe capable of making more accurate screws can be used to make more accurate lathes, and then rulers can be made using screws. More detail here: https://www.plethora.com/insights/the-evolution-of-precision...


Most dogs seem to acknowledge that humans are better when it comes to playing with toys, otherwise why would they bring them over for humans to throw?


Ah, now. See. Your mistake is thinking you can reason better than a dog. The reason a dog brings the toy to the human is that they know the human is better at throwing and the dog is better at fetching. Teamwork, y'see.

Now go forth and learn, and one day you too may be as smart as a dog ;-)


Maybe also dogs can do everything humans can but they decide not to because they see the stressful lives we live and want no part of that. I welcome our dog overlords.


>>The reason a dog brings the toy to the human is because they know that the human is better at throwing and the dog is better at fetching. Teamwork, y'see.

Honestly, I always thought the dog was just being diligent and making sure its humans did their daily exercise routine by throwing a toy.


They're Made Out of Meat, as a short film:

https://www.youtube.com/watch?v=7tScAyNaRdQ


>Perhaps there are aliens out there who would laugh at our cognitive abilities, theirs being so much better than ours.

Yeah, but their brains would either be much bigger and/or use a lot more energy, or they would have a fundamentally different architecture (e.g. manufactured instead of evolved).

For the given amount of perception/calculation that our brains perform, and under the hard constraint of being a biological process, we have pretty much fantastically efficient brains.

My computer, extremely slow compared to the likes of DeepMind's systems, has a 750 watt power supply, while human brains consume on average 12 watts.


One kind of efficiency which hasn't been talked about is the energy lost to things like state switching and holding the current state. I think brains are built on much more efficient primitives than the silicon transistors computer chips use, and thus can perform far more computations for far less energy than a desktop CPU.

Another difference between CPUs and brains is that brains are much less general-purpose. CPUs do run-time interpretation of instructions, while brains process data in a more straightforward way, like GPUs do. Many problems can be implemented on GPUs and will run much faster there. I'd argue that brains excel at such tasks while struggling at tasks that require lots of state to be kept around, as well as conditional jumps - computing a hash function, say, or compiling a program. CPUs excel at those tasks.


Comparing the human brain with a CPU is a misconception. In the past, when we didn't have digital computers, we used to compare the brain with other machines; now it's the CPU. The brain, from a primitive neuron on up to its higher levels, is not comparable to any machine at all, including the CPU.


Computers are mathematical concepts, Turing machines being one such concept. Whether computers are implemented using silicon, or oil, or using neurons, it doesn't really matter as we have a mathematical framework for describing abstract machines, and we can determine what is a machine, and what is not.

We did not have this mathematical framework before the age of Turing, Church, Russell, et al.

This doesn't mean that brains are very similar to CPUs, they are not, just like they were not similar to mechanical machines before.

Yet we do now have a way of studying the similarities they have.


> Comparing the human brain with a CPU is a misconception.

No, it is not. Yeah, architecturally they are very different, and CPUs are arguably more programmable/general and less efficient.

What does matter is whether CPUs are theoretically able to achieve all the things that a brain can do (and more). And indeed, CPUs, as Turing-complete, programmable machines, are a strict superset of what brains can do. The gap between what tasks, and at what accuracy, a brain achieves vs. a CPU is shrinking each year, as you can see on the paperswithcode.com leaderboards. The difficulty is in software; hardware, through clustering, arguably has orders of magnitude more compute than a brain does.

There are four big missing pieces to match human brain performance:

1) Matching its pattern-recognition abilities. I believe that current statistical learning techniques in SOTA neural networks actually outperform humans at learning from continuous data. But humans outperform current software by far at zero/few-shot learning on sparse/discrete data (where gradient descent is not applicable). I believe humans have this performance edge because of 2), 3) and 4):

2) Humans can encode and decode meaning with great accuracy in high-level, descriptively complete declarative languages: natural languages. These are in many ways far superior to current GQL/Datalog/SQL database languages at encoding and retrieving meaning (that is, an isomorphic description of a denoted thing). The field of semantic parsing (plus question answering over the parsed knowledge) is the key to general language understanding and crucially lacks funding. Once machines are able to understand language and retrieve all the knowledge of, say, Wikipedia, they will be able to transcend human performance on many intelligence/erudition tasks.

3) Humans seem to be able to do meaningful runtime code generation.

That is, we can develop new solutions to new problems on demand, such as https://www.kaggle.com/c/abstraction-and-reasoning-challenge - the field of specification and implementation generation is also badly underfunded.

4) is the observation that 3) is probably a necessary key to unlocking 2), and that both 2) and 3) are needed to achieve the communication/feedback loop between high-level semantic reasoning and statistical operations.

As we can see, humanity overfocuses funding on 1), despite it being the most solved of all the foundations necessary to achieve AGI (and hence, as a side effect, to empirically prove that CPUs are a superset of brains).


"And indeed CPUs as turing complete, programmable machine are a strict superset of what brains can do."

This is a fundamental assertion that I do not believe you can make.

The brain cannot simulate a Turing machine. It does not have infinite memory, which is a requirement for a Turing machine. It can, however, simulate a linear bounded automaton.

It is also not immediately obvious that a Turing machine can simulate a brain. The primary difficulty that I do not yet see a way around is the fact that a Turing machine, which has as its control unit a finite state machine, is bound by the finiteness of those states (finiteness of representation, not of number). The brain has no such constraint. It is analog, and therefore infinite in state representation.

In my opinion, this is more akin to the P versus NP problem: we know what would need to be shown in order to say that P equals NP, but no one has proved or disproved it yet. That's how I feel about the statement about Turing machines and the brain. I do not believe we can be dogmatic on that aspect yet, either way. We may have opinions, just as we may have opinions about P vs NP, but we must also be careful about stating what is provable and what is opinion, and that is all I'm trying to do.

Of course, I am willing and very interested to gain more insight in this area, so discussion is welcome!


> The brain has no such constraint. It is analog, and therefore infinite in state representation.

This is a common misconception.

I'm sure you are aware that analog signals can be approximated by digital values -- a 10 bit ADC will read a channel to one part in 1024, etc.

You might say that even a 64 bit representation is a poor approximation of a real life signal, which is a real number with infinite precision... But it isn't.

The brain operates at about 300 Kelvin, and so there's a noise floor to all analog signals on the order of that times Boltzmann's constant, about 4×10^-21 J. If a neuron's impedance is 1 ohm, then at a bandwidth of just 10 kHz the thermal noise is about 13 nV. For a membrane potential of 100 mV, that's a maximum possible noise-to-signal ratio of about one part in 8 million, which is roughly 23 bits.
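For reference, that figure is the Johnson-Nyquist estimate under the stated assumptions (R = 1 ohm, 10 kHz bandwidth, T = 300 K):

    v_n = sqrt(4 k_B T R Δf) = sqrt(4 · 1.38e-23 · 300 · 1 · 1e4) V ≈ 13 nV
    log2(100 mV / 13 nV) ≈ 23 bits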

Now the brain could depend on the signal below the noise floor, but if so those would be extremely fragile operations, and you could get the same thing on a computer by padding your numbers with random data.


Given how robust a brain is against noise, I'd be surprised if any brain signals are more precise than an equivalent of 3-4 bits.


I agree, and I think in practice the brain's noise floor is also much higher than the theoretical thermal-noise minimum. But I guess the main point is that once we acknowledge that even 32 bits is more than enough, the difference between an analog and digital machine loses a lot of its philosophical weight.


> The brain cannot simulate a turing machine. It does not have infinite memory, which is a requirement for a turing machine.

In practice we call modern computers turing-complete even though they don't have infinite memory. The brain can simulate such a machine.

> The brain has no such constraint. It is analog, and therefore infinite in State representation.

If this mattered, then it would mean analog computers are more powerful than digital computers, and therefore that the Church-Turing thesis is wrong.


Regarding the Church-Turing thesis, it is exactly that, just a thesis. Again, akin to P vs NP. It seems to hold for most cases, but is not proven.

The reason that it's difficult to apply in regard to the brain is that we don't exactly know how the brain is computing... or if it "computes" at all! To my knowledge, we don't have a model of computation for consciousness, emotion, free will, etc.

Perhaps these are better classified as emergent behavior rather than computation, but if that is the case, I still don't know of a model explaining what computations or rules give rise to the emergent behavior.

Perhaps the problem is in our definition of computation and what it means to compute.

We do know that the cardinality of the set of possible computational problems is larger than the cardinality of the set of all possible Turing machines. This is provable by simple diagonalization proofs.
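(Sketch of that argument: every Turing machine has a finite description over a finite alphabet, so the set of Turing machines is countable. The set of computational problems - languages over {0,1} - is the power set of the set of all finite strings, which Cantor's diagonal argument shows is uncountable. Hence some problems are decided by no Turing machine.)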

The question, then, is whether or not the computations of the brain fall within the set of Turing-recognizable languages (computational problems). To my knowledge, this has not been shown.


As far as I understand, the prevailing opinion is that the brain is a physical object and that its operation does not involve currently-unknown laws of physics (because we have a good understanding of what happens at the scale of an entire atom or above).

A Turing machine can run a simulation based on such physical laws to any desired level of precision (which is enough, because as mentioned in TFA, processes in the brain aren't individually very precise). This is true because of the nature of these laws, which are mostly just asking you to integrate differential equations. If you accept this, then it should follow that a Turing machine can in fact simulate a brain: just run a physics sim on a brain's initial state.

(I do realize that this is far outside the realm of what's doable today, but it seems to provide a solid justification for why it's conceptually possible).


> that its operation does not involve currently-unknown laws of physics (because we have a good understanding of what happens at the scale of an entire atom or above)

Well, we know certain approximations of those laws. Purely theoretically, it is possible that the exact laws at some level of detail that we have not yet been able to observe involve functions that are not computable by a Turing machine, and then it is theoretically possible that the brain itself is computing functions which are not computable by a Turing machine (this would of course assume that the Church-Turing thesis is actually wrong).

As long as the Church-Turing thesis is not proven, we can't say with absolute certainty that the physical world can be simulated to any level of detail by a Turing machine.

Furthermore, even if the Church-Turing thesis was proven, is it possible that the physical world involves transformations that are not even computable at all (even if they can be approximated by computable functions)?

Just to be clear, I do not believe these things. But it is fun to think about the limits of our knowledge.


"any desired level of precision" is actually the issue. The moment you choose a level of precision, you cease being accurate (at that level). If you make the argument that a TM has infinite memory, and can therefore represent an infinite precision, then I would counter that our current defintion of a TM requires a finite tape alphabet (and finite number of states), which is part of the TM's known computational limitations. And, of course, the moment that you use any finite set of symbols to represent an infinitely precise value, you fall into the problem that the set of real numbers has a larger cardinality than the set of possible turing machines (again, simple proof via diagonalization).

It is possible that the brain's imprecision (I would argue that "inconsistency" might be a better word) is a requirement of its computational ability. Again, we haven't defined how the brain computes, nor do we have a model explaining its computation, its encoding or representation of knowledge, or its emergent behavior. We have observed phenomena related to some of these things, but we are far from understanding them. It may be that the computational processes are dependent on the surrounding environment. We know that the biological processes are influenced by the physical world, but we do not know much about how these external forces affect, limit, or are required for, the process of brain computation.

The quantum world may play a part in consciousness (or not - we don't know). Non-determinism may play a part. It is possible that, in order to simulate a brain, one would have to simulate the entire universe around it in order to predict its behavior... meaning that it may well require a universe to perform the simulation.

Which brings us to a related theory of whether or not we are living in a simulation, but I digress... :)


> It is possible that the brain's imprecision (I would argue that "inconsistency" might be a better word) is a requirement of its computational ability

Is it possible that the brain is in fact a quantum computer? I can imagine that under all those neural networks there is a small part where, trapped in some complex protein structure, some qubits exist and are crucial to the most advanced brain functions, such as consciousness.


"Is it possible that brain is in fact a quantum computer?"

It's an interesting thing to ponder.

Quantum computing is still just another computational model, and its main advantage is that it involves non-determinism. But non-determinism, in and of itself, can be modeled by a deterministic computer.

I think the biggest problem is that we don't understand what computation is taking place in the brain, or even whether it is "computation" according to our current definition of the word. This issue is the main obstacle to settling whether or not it is possible to accurately model the human brain.


Isn't the recent Google quantum "supremacy" experiment evidence against the extended Church-Turing thesis?


No, quantum computers as we understand them can be simulated by a Turing machine.


The extended Church-Turing thesis which I specifically referred to concerns efficient simulation, not just whether it can be simulated.


Google has not proved quantum supremacy; it is a scam. They have proved the truism that running a physical system is faster than running a simulation of that physical system...


https://www.nature.com/articles/s41586-019-1666-5

What part of the experiment in the paper released did you feel like was inadequate?


I mostly agree with your post but:

> The brain has no such constraint. It is analog, and therefore infinite in state

Not necessarily infinite. A lot of people believe that nothing in the world is truly infinite (just very large/small). Infinite quantities in mathematics are just approximations that simplify calculations.


If you go to a sufficiently high precision, neurons and their communication are discrete - the number of neurotransmitter molecules and ions transferred across any synapse is countable, so the number of states (even if we ignore noise and noise tolerance, which we shouldn't ignore) is finite.


The big question is whether a CPU can emulate a brain with the same or better efficiency.


Turing completeness isn’t necessarily an interesting thing to have in common. Many (very simple) models of computation are Turing complete but have vastly different properties. Take, for example, cellular automata, Turing machines, Wang tiles, (cyclic) tag systems, FRACTRAN, register machines, and string rewriting systems. All of these are Turing complete, yet they are miles apart in how they carry out computation. In order to understand and do what the brain is doing, we have to figure out the brain's model of computation. It will also be Turing complete, but it will look very different from a Turing machine.
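For flavor, the first of those fits in a few lines: Rule 110, a one-dimensional cellular automaton proven Turing complete despite its tiny update rule (a minimal sketch with wrap-around boundaries):

    RULE = 110  # each bit of 110 is the new cell value for one 3-cell pattern

    def step(cells):
        n = len(cells)
        return [(RULE >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    row = [0] * 31 + [1]
    for _ in range(8):
        print(''.join('#' if c else '.' for c in row))
        row = step(row)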


> CPUs, as Turing-complete, programmable machines, are a strict superset of what brains can do

In what way can this be proven?

It's very tempting in an era of tech-centered growth to think of computers as the solution to everything, but we are barely even beginning to understand the brain. We know computers fairly well and can talk about them, but how can we make such a claim when we don't know the other thing we're talking about?

In fact, the brain created the computer, didn't it? Therefore, from that standpoint it is arguable that the brain is a superset of the computer. It's not something I really believe (my opinion is that you can't really equate things of entirely different units, one of which is unknown), but it's a "devil's advocate" position.


The argument isn't "something like, or a little better than, current CPUs can perform everything a brain can," but something more like "a Turing machine can perform everything a brain can, or more." This is more an ontological exercise, not an empirical one. If you reduce everything to a "black box" model with inputs and outputs, then sure, the mathematical abstractions of theoretical brains and theoretical CPUs have a congruence. Most objections to this seem to revolve around qualia being something not modelable in machines, but I'm skeptical of that claim.

Can an "arbitrarily advanced computer do everything a brain can do?" Empirically, right now, current machines can't but we are talking about "future machines, via line-of-sight extrapolation". Not fundamental leaps in tech, but incremental ones. It seems plausible, but it seems we expand the depths of the complexity of the requirements nearly as fast as we advance current capabilities. I don't know, but I'd put my money on the technology catch up.


Being skeptical of the claim that qualia are not modelable in machines is just as valid as being skeptical of the exact opposite. This is exactly why I asked if there was anything beyond what the original poster said. Without it, a post based on the exact opposite assumption could have been written and would have been considered just as valid.


Fair criticism, I didn't tackle that head-on. The following doesn't actually make a cogent argument either, but I'll elaborate: my intuition is that qualia (conceived as something nearly tangible) are more like "the soul" or "spirits" and that, as such, thinking they exist in the brain or a Turing machine is nonsense. To the extent they are more like some combination of memory and emotional stimuli, they just represent a particularly interesting set of internal states, but are still something that can be mathematically modeled.


> In what way can this be proven?

Proven? Nothing in science is ever proven.

But in half a millennium we have failed to find anything that can't be simulated by math, and Turing completeness means a computer can simulate anything that can be simulated by math. We can also simulate all the smallest components of a brain.

At this point, the claim that math cannot simulate it is highly extraordinary.


> Turing completeness means a computer can simulate anything that can be simulated by math

Technically, it is not proven that Turing machines can compute all computable functions, so there is some purely theoretical possibility that the brain could be able to compute functions that a Turing machine can't.

Personally I find that extremely unlikely, and agree that it would be extremely surprising. But it wouldn't invalidate anything we have proven so far.


It would imply that our brains are using currently-unknown physics, since all current theories are computable.


We have not been able to simulate any aspect of subjective, conscious experience using a mathematical model, and personally I think we have no good reason to believe we ever will. The qualitative, by definition, cannot be quantified.


I am not convinced of the usefulness of this comparison.

The first of your big missing pieces starts from the best that we have been able to achieve with computers so far, and while its completion might be a big step in computing, it would not necessarily be a big step in understanding the human brain - after all, quite primitive animals have impressive abilities in this regard. Using the best computing has done as the yardstick for quantifying the human brain's ability is the wrong way round.

The remaining missing pieces are vague, with no clear indication that they fit into the brain-as-CPU model. For example, while it is true that "[human languages] are in many ways far superior to current GQL/datalog/SQL DB languages at encoding and retrieving meaning (that is an isomorphic description of a denoted thing)", this vastly understates the capabilities of language. Once again, you are using current technology as the yardstick, with no basis for assuming that it is of the right scale.

Overall, you seem to be assuming that the rest of the puzzle is almost within reach. That is certainly a logical possibility, but not one with a great deal of objective evidence in support. FWIW, my opinion on the matter is that we probably don't even know, in any well-defined way, all the questions to be answered.

Even if we grant the premise that a suitably-programmed computer (not just a CPU) could have capabilities that are a superset of those of a human brain, that would not necessarily justify saying one is very like the other - that would be like saying a dynamo is a solar cell because they both produce electric current.


I agree. For some reason 2) and 3) reminded me of the book "The mating mind" https://en.wikipedia.org/wiki/Geoffrey_Miller_(psychologist)...


“programmable machines are a strict superset of what brains can do”

As others already replied, that’s a statement that isn’t universally accepted to be true.

As an example, there’s consciousness. People disagree about whether it exists, whether it’s (fully) ‘in’ the brain, and on whether computers could in theory be conscious.

There are people who answer those questions with yes, yes, and no, and, since we don’t even have a good idea about what consciousness is, one cannot reliably argue that they are wrong (also not that they are right, of course)


Before CPUs existed, we would compare brains to steam engines. There was a very interesting article posted here on HN a while ago, explaining why humans always pattern match their understanding of the "mind" (or "soul") to whatever technology is fashionable in their time: steam engines, computers, etc. It also explained the pitfalls of doing so.

I think there is at this time no indication human brains are in any way similar to CPUs. It might be interesting to consider the question, of course.


But steam engines and hydraulics and gear mechanisms are all Turing complete. There is nothing wrong with those models. You could build a brain out of any of them, unless the brain computes something that is not computable.

If the brain does something that is not computable, that's a direct challenge to some of our most established science. It is possible, but I think very unlikely.


> You could build a brain out of any of them, unless the brain computes something that is not computable.

Could you? That's sort of begging the question. We do not know if something "Turing complete" can be used to build a brain like the human brain. That's precisely the point.

> If the brain does something that is not computable, that's a direct challenge to some of our most established science.

A challenge for computational neuroscience, maybe. Otherwise I don't see the challenge for either neuroscience or computer science. If someone wants to make the claim that you can build a human brain out of something Turing-machine-like, that's an extraordinary claim, not established science.


The argument I'm responding to is one that says people are wrong about brains being computers because people always believe they can make brains out of technology of the day. My point is that all of these things are the same theory, and it is one that has not been disproven.

If a brain cannot be produced in a turing machine, it must perform some non-computable activity. That would mean physics cannot be accurately simulated in a computer, which I believe would be earth-shaking in that world. That brains can be reproduced in a simulation is a default assumption, that something composed of molecules can produce outcomes that cannot be computed is an extraordinary claim, for which, I believe, there is no evidence.


To be fair, CPUs are Turing machines. That makes them much more comparable to anything that mainly does information processing than to anything else.


I think the danger is that it's always "obvious" that the current fashionable tech works in analogous ways to the mind/brain. We can spend all day finding ways in which they are similar; for example how the brain does information processing and the CPU does too.

The point is, I think, people from the steam engine era had similar reasons why the mind/soul was exactly like a steam engine. I won't try to reproduce them here, but I'm sure there were convincing arguments at the time. Who has the awareness to claim, before the current fashionable technology becomes unfashionable, that maybe no, the brain is not a close match for an information processing machine? ;)


I thought it was about similarity of simulated neurons, not the CPU itself.


> What does matter is whether CPUs are theoretically able to achieve all the things that a brain can do (and more). And indeed, CPUs, as Turing-complete, programmable machines, are a strict superset of what brains can do.

It is not proven in any way. Turing's postulate is just that, a postulate; it is not even a theorem, just a conjecture. And AFAIK it cannot be proven, actually.


Is there anything analogous to software in biology?


Biology is the ultimate legacy software running on one of the oldest platforms ever developed, the organic compounds. It is literally a giant genetic algorithm to write instructions (DNA) for manufacturing molecular machines (proteins) that interact with each other in an extremely complex graph of relations (protein pathways, i.e. control flow).


That feels more to me like hardware and software in the sense of a Jacquard loom. I suppose it fits though.

I was thinking more about what's going on in the brain. We have all the regions mapped to specific functions with higher and lower level parts. The low level parts seem to be like hard-wired stimulus-response mechanisms. Are the higher level systems the same at a meta level or is there a type of program running on the hardware of the brain?


The stimulus response mechanisms are far from hard wired. The brain is plastic at all levels.


A baby doesn't need to be taught how to breathe or cry. That seems pretty hard-wired.

Anyway, that wasn't really what I was asking about. Is there any separation between the biological hardware of the brain and the instructions of software?


This is a very simplistic view, based on the assumption that the world is discrete. The whole idea of software relies on the concept of the digital computer, a discrete machine. The world might indeed be analog, and real numbers might actually exist.


If the world did run on real numbers that we could harness for computation, I would be more than happy, because using those we would be able to perform hypercomputation. See https://en.m.wikipedia.org/wiki/Real_computation

However, this is forbidden by the Bekenstein bound, so unless modern physics is horribly broken, it's ruled out, at least in any sense visible to us even in principle.


Not a quantum physicist, but IMO the Bekenstein bound is not applicable here, because quantum laws are non-deterministic: you can describe the structure of a system, but you cannot describe how it will evolve. Quantum randomness might be at the very essence of how the brain and mind work.


Quantum randomness being necessary hardly seems like it would have profound practical implications since augmenting a digital computer with a geiger counter would be trivial.


It's much easier than that. Hardware random number generators are often based on something like a reverse-biased diode. Electrons migrate across and it's entirely random when it happens. Amplify and count them and you get a great source of entropy.
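The raw bits from such a source are usually biased, though, so in practice they get a whitening step. A classic one - my addition here, not something the parent mentioned - is the von Neumann extractor:

    # Keep 01 -> 0 and 10 -> 1; discard 00 and 11. The output is unbiased as
    # long as the input bits are independent, whatever the bias.
    def von_neumann(bits):
        return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

    print(von_neumann([1, 0, 0, 0, 0, 1, 1, 1]))  # -> [1, 0]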


DNA and proteins are obviously discrete, so even if the hardware relies on some fundamentally analog behavior, the 'software' and each hardware component can still be analyzed as discrete.

However, for anything operating at human temperatures we can reasonably assume that any effective behavior can be simulated by discrete operations, as any nuances of the underlying analog behavior would be drowned out by thermal noise, and the amount of precision that any behavior can require is rather low - much lower than, e.g., a standard floating point number in a discrete CPU.


I think the assumption that software may only be digital is the limited one.


Otherwise it becomes a meaningless, all-encompassing term.


I am sure that I saw this exact message on HN before. Did you copy it from someone else or did you repost your own post?


I wrote it out ad hoc, I don’t doubt that something similar has been written before though.


The human brain isn't Turing-complete.

Turing completeness implies infinite recursion, which the brain obviously can't do.


This is a silly objection, trivially because obviously no finite physical system - brain or computer or whatever - can be constructed with the storage equivalent of an infinitely long tape. But if you allow for the fact that humans can do things like write things down and share information with other humans and build computers to store information, our information processing capacity is not limited to the set of states we can hold inside the atoms inside our head.

But also, the claim lacks evidence: We’ve never seen a human being yet whose program didn’t eventually halt.

That doesn’t mean the hardware isn’t capable of running a program that never halts, just that we haven’t found such a program yet.

Indeed, if you consider human mindware as a whole, given that when humans reproduce they create new copies of the mind running in new bits of hardware... maybe human minds are infinitely recursive after all?


Technically Turing completeness requires infinite memory for that (or an infinite tape if we're talking about the original turing machine concept), which no Turing-complete machine has. In other words, the brain is as Turing-complete as any machine that we also consider to be so. We'll always be bounded by limited memory and limited time.


We do not know whether the human brain is indeed Turing-complete, or even whether it is a Turing machine at all. The human mind certainly is, but whether the brain is, we do not know.


While I agree that comparing a human brain or mind to a Turing machine is not helpful, the objection you make here is less significant than it first appears.

There is a subtle difference between unbounded recursion, which a Turing machine is taken to be capable of, and the actual ability to achieve infinite recursion. In no application of a Turing machine, either as an actual physical device or as a hypothetical one in a logical argument, is it ever required to perform infinite recursion, which would just be one way of not halting.

For all practical and theoretical purposes, what matters is that the machine being considered does not exhaust its ability to recurse while performing the computations being considered. Consequently, the standard practice, of saying that computers and certain other devices are Turing-equivalent, with the usually-implicit caveat of being so up to the limit of their recursive ability, is both reasonable and useful.


> For all practical and theoretical purposes, what matters is that the machine being considered does not exhaust its ability to recurse while performing the computations being considered.

You're right, and thanks for the more strict definition.

Regardless, the 'recursion limit' of the human brain is really low. (Say, seven things at once or thereabouts; not going to link proofs, but it's a non-controversial statement.)

Certainly not enough to implement any sort of computing machine. Human brains are notoriously bad at arithmetic and state machines.


Why is that obvious? My brain’s been infinitely recursing for years as far as I know


That's what the article does, though. And there are experiments trying to simulate parts of brains, but we've realized it's extremely hard and we are very far from simulating even a mouse brain.


The difference is that CPUs, unlike those other machines, can be used to model/simulate things that are similar to brains. There is impedance in the translation, of course, but that impedance can be measured as a sort of “distance” between the architectures; just like one might measure the “distance” between two Instruction Set Architectures.


"...the question of whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim."

Edsger Dijkstra, EWD898, 1984


Whether or not it's comparable depends on the level of distinction you're trying to make. Obviously, CPUs don't think or experience the world (but on the other hand that kind of "feature" seems increasingly likely to be implementable in software, even if our current CPU architectures are rather unsuitable for that goal). However, if we're gonna talk about energy efficiency and computation performance, now that it has become evident that the brain is merely a kind of a computer, we can definitely look for parallels.


> now that it has become evident that the brain is merely a kind of a computer

I am ignorant in this area. But I keep reading that brains are nothing like computers the more we learn. Your statement seems to suggest otherwise, and I'd love to read about it. Can you drop something where I can start exploring how it has become evident that the brain is merely a kind of computer? Thanks!


The brain is thought to be merely a computer in the original sense of a long strip of paper along with a scribe and a rulebook. The logic is, a Turing machine can simulate quantum electrodynamics to an arbitrary degree of accuracy. Then, two beliefs about physics and the structure of the brain are included:

1. There is nothing going on in the brain that would require simulation to infinite accuracy. Not even a chaotic system would have this property, because chaotic systems take a finite time to "blow up" an initial uncertainty, and the smaller the initial uncertainty, the longer they take to blow up (sketched just below the list). For this proposition to be violated there would have to be an undiscovered finite-time nondeterministic blowup, which is unlikely, but I've heard rumblings that we haven't proven that it can't happen in Navier-Stokes. So maybe it can happen in the brain.

2. There is nothing going on in the brain that depends on nuclear physics or anything more "powerful" than quantum electrodynamics.
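(Spelling out the finite-time point in 1: in an ordinary chaotic system an initial uncertainty d0 grows roughly like d(t) ≈ d0 · e^(λt) for some Lyapunov exponent λ > 0, so the time to blow up to a fixed scale ε is t ≈ (1/λ) · ln(ε/d0), which grows without bound as d0 shrinks. A violation would require the error to reach ε in bounded time no matter how small d0 is.)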

I have not seen any evidence that 1 or 2 aren't true for the brain, so that puts something behind saying it's "merely a computer."


If you are looking for a book for an introduction, I would suggest Mindware by Andy Clark is pretty reasonable. Pub 2014; ISBN: 9780199828159


You might be right about brains being better at certain kinds of tasks, but I don't think it's right to think of them as having only one processing mode.

Someone else mentioned "Thinking, Fast and Slow", and I find it fascinating how closely the two thinking modes in that book seem to map to CPU (mostly serial) and GPU (parallel) processing. It also claims that people have natural preferences for each mode of thinking, which is super interesting as it suggests that the tasks that brains are best at performing will vary from person to person (I guess this is obvious, but perhaps gets lost when we start comparing to computers).

I'd bet on brains getting a lot of their efficiency from tight integration of CPU-like, GPU-like, and ASIC-like, and full on analog components. We'd probably have to apply deep-learning like approaches to the hardware design itself to get close.


It's like comparing a human to a horse. A horse can run very fast or pull a wagon. But a horse can't work on plumbing, or knit.


Why is the human brain so inefficient? It takes years just for it to compute the sha-256 of this media file.



I was wondering why this seemed so outdated and ignorant for something published in 2018 (only 10b transistors? 'computers are serial', really?), but I see that it's from a 2015 textbook, using citations for computing hardware published in 2008, and presumably referencing hardware from 2007 or earlier...


"At a global level, the architectures of the brain and the computer resemble each other, consisting of largely separate circuits for input, output, central processing, and memory."

This is fundamentally wrong, even at the 10 mile high summary level.

In our brain, processing and memory are not in the least bit separate; memory distinct from processing doesn't really exist.

If you really had to make a computer analogy of how the brain works, it's more like self-modifying code where the only memory of the data flowing through it is the changes to the code that were made as the result of that prior flow.
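
A minimal sketch of that analogy (my own toy illustration, not a model of real neurons): the only "memory" of past inputs is the change they made to the weights that do the processing.

    import numpy as np

    class HebbianUnit:
        def __init__(self, n, seed=0):
            rng = np.random.default_rng(seed)
            self.w = rng.normal(scale=0.1, size=n)  # the "code" that will self-modify

        def process(self, x):
            y = self.w @ x          # compute with the current weights...
            self.w += 0.01 * y * x  # ...and let the data reshape them (Hebb's rule)
            return y

    unit = HebbianUnit(4)
    pattern = np.array([1.0, 0.0, 1.0, 0.0])
    for _ in range(50):
        unit.process(pattern)  # repeated exposure strengthens this pathway

    # No separate data store exists: the only trace of what flowed through
    # is the modified weights - memory as modified processing.
    print(unit.w)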


Leslie Valiant has done some interesting work on quantifying the efficiency of the brain from the viewpoint of computer science, see e.g. https://www.youtube.com/watch?v=X9hRRh76QEA and the book Circuits of the Mind.


The world record tennis serve is 144 miles an hour, and a human can't really move across a court and return a ball moving at that speed. If they're lucky they can reach it and react in time to hit it. I'm a bit confused by an article that claims tennis players can react to and return serves up to 160 miles an hour. I think the evidence suggests that returning balls anywhere near this fast depends on analysing factors before the ball starts moving: the other player's body position, racket position, etc. Players have an intuition about where the ball is going to go without having to look at and analyse the flight of the ball.

Just did some very basic checking. Tennis court: 23m. 160mph = 72m/s. The ball takes approx. 0.3ms to travel the length of the court. Human reaction time to a visual stimulus: 0.25ms. So the idea is they move and hit the ball in the remaining 0.05ms? Hmmmmm.


Serves in tennis don't go directly down the line; they go cross court. Returning players will often be standing behind the baseline. Additionally, balls start ~2.5-3m above the ground, bounce, and then come up again. The total distance traveled is probably closer to ~27m.

The air resistance slowing the ball down is significant - combined with the energy the ball loses bouncing, the ball has lost more than half of its initial velocity by the time it gets to the returning player.

I found these speed guns stats on a tennis forum for the ball speed at different points:

Speed after being hit: 126mph

Speed before hitting court: 89mph

Speed after hitting court: 67mph

Speed at returner's baseline: 58mph

Even after doing the calculations correctly, there still isn't a lot of time for reactions, but it is more plausible than your initial analysis suggests. (Your units also seem off - they should be seconds, not milliseconds.)
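
A back-of-the-envelope redo with those numbers (the per-leg distances are my rough assumptions):

    MPH_TO_MS = 0.44704

    legs = [
        # (distance_m, avg_speed_mph) - distances assumed for illustration
        (12.0, (126 + 89) / 2),  # serve contact to first bounce
        (15.0, (67 + 58) / 2),   # bounce to the returner's contact point
    ]

    flight_s = sum(d / (v * MPH_TO_MS) for d, v in legs)
    print(f"ball flight time: ~{flight_s:.2f} s")                    # ~0.8 s
    print(f"left after a 0.25 s reaction: ~{flight_s - 0.25:.2f} s")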


Thanks for this. Much better than what I did, although I don't think 58 is a third of 126 - more like a half.


'Players have an intuition about where the ball is going to go without having to look at and analyse the flight of the ball.'

This is pretty easily observable with baseball players as well. After playing thousands of games while standing in the same (relative) place on the field I/they can anticipate where the ball is going to go based on a variety of variables in real-time... instantly.


There was a festival of jugglers in my city and they taught me to juggle in like 15 minutes. I was amazed it's so easy (the basic 3-ball juggling, and just for a minute or two - the more difficult juggling is HARD, and I had to train later to be able to keep juggling indefinitely).

There is a very easy trick - you look forward into the distance, keeping the balls in your peripheral vision - and there are two automatic reactions you have to develop:

1. when the ball going up is at the top of the curve - throw another ball up

2. when a falling ball goes out of your peripheral vision - do the "oh shit, something's falling, let's catch it" routine with the hand that has fewer balls in it.

Hands learn very quickly how to move "by themselves" to catch the balls that leave your peripheral vision, based on the trajectory you've seen.

It's actually harder to juggle when you look at the balls directly, and it's impossible when you think about it and try to do the moves consciously because you're too slow.

It was mindblowing to me that it's easier to catch a ball when you don't look at it.


In the book “Thinking, Fast and Slow” the baseball example is given as a learned intuition. The ball is too fast for batters to react to, so they learn to anticipate the trajectory of the ball from the way the pitcher throws. When professional players were put up against a female softball player with a lower throwing speed, their intuition was off and they missed the ball more often than against a male professional pitcher.


50% of the time it works 100% of the time


Almost all of the "amazing" things the brain does are basically continuously refined branch prediction and speculative execution.

Which is why practice is important. You're essentially strengthening certain neural pathways with continuous exposure to certain inputs.

But this strength also makes us susceptible to misdirection and sleight of hand.


You're right that it's not possible to react if all the information you have is the trajectory of the ball after it leaves the racquet. Good players will subconsciously predetermine the path of the ball by looking at how the opponent is striking it.


It doesn't specify who serves it at that speed either - it could be some kind of "serving machine".


I guess so. But I'd say humans would have no chance at this speed without another player to analyse visually.


The article’s articulation of what goes into returning a serve is a bit simplistic, but the underlying idea is not crazy.

* When you return a serve in tennis, you are doing so from only one side of the court. The opponent’s serve can only land in a service box that provides 13 feet of lateral space.

* Practically, there are relatively few spots in the service box that can be reached by a serve. Because of human physiology (the length of our arms, joints in the arms etc.), it would be extremely painful to try to hit a fast serve to certain parts of the service box. Either that, or the server would have to stand in atypical positions on the service line (i.e. not at the center tick) that would be a dead giveaway of where the server was trying to hit to.

* So, in simplistic terms, most tennis players are choosing between more-or-less staying in place (to return a body serve), or leaping to their left or right. The serve must bounce before you hit it, and it will be bouncing “towards you” vertically. The returner thus is very rarely going to move vertically. This usually only happens when you are moving in to pummel a slow and short serve.

* At the highest levels of tennis, the vast majority (60%+) of serves go out wide or down the middle (https://www.atptour.com/en/news/berrettini-infosys-serve-loc...). Mind you, these are also the same players who have the physical conditioning and athleticism to actually be able to hit these blazing fast serves.

* Additional information for the returner is conveyed by the serve toss. Almost all players are giving away tells here. For example, if I'm a right-hander serving from the deuce court and I toss my ball to the left (the "11 o'clock position"), it's highly unlikely that I'm hitting the ball down the middle. Doing so would require one of those aforementioned contortions in my arms and legs, and I would then be unlikely to generate the power needed to strike the ball in a way that leads to a super fast serve.

* So in reality, by the time that the server is making contact between their racket and the ball, the returner will have a general idea of the direction that the ball is going in.

* The article does conflate getting your racquet on the ball with making a successful return. Just as with any other tennis shot, there is no guarantee that your return doesn't go into the net or go flying out. I think it's a far more plausible claim that professional tennis players can get their racquet on the ball than that they can cleanly/successfully return these super fast serves.

Some other thoughts:

* Placement is just as important as speed in determining how returnable a serve is.

* For example, there are plenty of examples of top tennis players returning extremely fast serves. Federer against Isner (140 mph): https://youtu.be/5gcvLbtaNxM, Murray against Raonic (147 mph): https://youtu.be/8GYX4ZIPJsg

* The commonality between these successful returns is that the serves themselves were fast, but poorly placed. By serving right down the middle, the servers allowed Federer and Murray to take one small step, and then make good contact with the serves for an “easy return”.

* One small quibble with the "world record tennis serve" you cite. It's not 144 mph, but rather 157.2 mph (hit by John Isner). If anything, though, this helps your argument.

* The unofficial record is 160+ MPH (hit by Sam Groth), but this was at a second tier tournament with a questionable radar gun (https://youtu.be/uKeL-W7xft0). Notice how even with this serve, the returner correctly guesses where the serve is headed, and even looks to have gotten a racquet on it.

* It’s a bit of a chicken and an egg problem as well. There is a very tiny sliver of people in the world who are physically fit enough and who possess the natural physical traits (like height and broad shoulders) necessary to hit serves in the 140+ MPH range. These people are likely playing on the ATP against the players in the world best equipped (mentally and physically) to return their serves.

* So all this is to say, returning serves in that 140-160 MPH range is a low probability proposition. Heck, a perfectly placed and well disguised serve even in the 110 MPH range can be unreturnable (as seen in two decades of Federer highlights). But, humans are indeed “capable” of returning serves in that speed range.


I wouldn't say that the human brain is that efficient (per volume). Compare and contrast with the brains of rats or Corvidae: https://www.youtube.com/watch?v=ZerUbHmuY04.


It's not even a good example. Humans are about the least physically agile vertebrates on the planet.

Think of a fruit fly. It can walk, fly, forage for food, mate, etc. The entire critter has a mass of ~0.2mg, and its brain has ~135k neurons. Making the horrible assumption of linear power scaling, that's on the order of a microwatt.
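
For what it's worth, the back-of-envelope lands in the microwatt range under either crude scaling (every figure below is a ballpark assumption):

    BRAIN_WATTS = 20.0    # commonly cited human brain power draw
    HUMAN_NEURONS = 86e9  # commonly cited estimate
    FLY_NEURONS = 135e3   # figure from the comment above

    by_neurons = BRAIN_WATTS * FLY_NEURONS / HUMAN_NEURONS
    by_mass = BRAIN_WATTS * 0.2e-6 / 1.4  # fly ~0.2mg vs brain ~1.4kg

    print(f"{by_neurons * 1e6:.0f} uW by neuron count")  # ~31 uW
    print(f"{by_mass * 1e6:.1f} uW by mass")             # ~2.9 uW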


but can it do math? can it paint? drink wine and muse on its own brain efficiency?


Fruit flies can drink wine.

Doing maths - well, it is a common trope that we extrapolate the skills of a fraction of humans onto the entire population. For an average human, 1/3 + 1/2 can be problematic.

Abstract counting up to 5 or so - well, many birds can do that, including pigeons.


the wine part was a joke. especially because fruit flies definitely appear to be attracted to fruit, wine, etc

i believe you are underestimating how capable humans really are. all of us can learn to do math and i’m talking serious math not basic math.


My point from above is that 'humans playing tennis' isn't a great benchmark for the efficiency of our brains.


The brain is weird. You can figure out how to split an atom, then forget your keys inside your car.


Is it though?

I think having a good metric is really hard.

For example, I can have a neural net running on my smartphone doing recognition tasks.

A task the brain is typically good at due to its neural net structure, while the computer basically has to simulate the net.

But still, my smartphone can mark all the faces in a crowd multiple times over in less time than it takes me to recognize even a single person.

And that with a camera way beyond the capabilities of the human eye.

Modern smartphone processors draw around 1 or 2 watts max. So is my phone more efficient at doing this?

One could argue that my brain does other stuff at the same time, like controlling my heartbeat and whatnot, but my phone has to keep up the wifi, the clock, and so on too.

The truly impressive part is the brain's ability to do completely generic problem solving for basically everything while running on 10 watts, with the added ability to learn a few activities to a really high level.

It's not efficient at doing a singular thing; it's efficient at doing everything at once.
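
One crude way to put numbers on it - energy per recognition rather than raw power, with every figure here an assumption:

    PHONE_WATTS = 2.0
    PHONE_LATENCY_S = 0.03  # assume ~30ms to detect all faces in a frame
    BRAIN_WATTS = 20.0
    HUMAN_LATENCY_S = 1.0   # assume ~1s to consciously recognize one face

    print(f"phone: ~{PHONE_WATTS * PHONE_LATENCY_S * 1000:.0f} mJ per frame")  # ~60 mJ
    print(f"brain: ~{BRAIN_WATTS * HUMAN_LATENCY_S:.0f} J per face")           # ~20 J

By that (very unfair, single-task) metric the phone wins by orders of magnitude, which is the point: the brain's efficiency shows up in generality, not in any one benchmark.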


Yes, but for integrating information your brain is marvelous. Somebody in the crowd laughs or moves a certain way, or you catch a sniff - and BAM, you've found your person.

Any automated single-skill system might be more efficient, but of course it becomes useless outside its parameters. Put a hat on those people in the crowd and your phone may be totally defeated.


The human eye is actually pretty lousy. Only a small patch is capable of decent resolution. That we perceive what we see as sharp is itself a compliment to the brain.


I feel uncomfortable at the ubiquitous, silent assumption that what is marketed as AI is a computer implementation of a brain.

I see how the term neural network reinforces this belief, but we (especially the researchers among us) should allow for the possibility that we are missing something.


Neural networks also have no ability to create new information based on their own mistakes. What is a mistake? When does something look "off" but still very interesting?

For example, you can feed a neural net all the recipes of burgers to create a perfect burger. Great. But how does the same net invent the burger?

The burger, like many foods or accidental art, was invented as a result of scarcity, circumstance, experimentation, or just fortunate error. That sort of imperfection is very hard to achieve with AI, because it is designed either to be perfect or to fail.


Self-play systems can do that. For example, AlphaZero invented strategies for the game of Go from nothing but a random number generator and the rules of the game. As for perfection, neither Go nor chess AIs play perfectly, and they can still beat the best human players.

Of course, an AI intended to play Go isn't going to invent the burger. But I see no reason why, given a list of ingredients, their properties, and a model of what humans enjoy eating, a neural network couldn't invent the burger.

Creating a new recipe is just an optimization problem at its core.
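
A toy sketch of that framing (the ingredient list, the scoring function, and plain random search are all my assumed stand-ins for a real model):

    import random

    INGREDIENTS = ["bun", "beef patty", "lettuce", "tomato", "pickles", "chocolate"]

    def palatability(recipe):
        # Stand-in for a learned model of what humans enjoy eating.
        score = len(recipe)  # mild bonus for variety
        if "bun" in recipe and "beef patty" in recipe:
            score += 10      # bread + protein is a known-good pairing
        if "chocolate" in recipe and "beef patty" in recipe:
            score -= 20      # penalize odd combinations
        return score

    best = set()
    for _ in range(1000):  # random search over ingredient combinations
        k = random.randint(1, len(INGREDIENTS))
        candidate = set(random.sample(INGREDIENTS, k))
        if palatability(candidate) > palatability(best):
            best = candidate

    print(best)  # with luck: something burger-shaped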


>>For example, you can feed a neural net all the recipes of burgers to create a perfect burger. Great. But how does the same net invent the burger?

Wait... but it just... did? It took the information about all possible burger recipes and invented a new one from them. Likewise, a human could only invent a new burger if they knew something about burgers in the first place - at the very least that it's a bun with some filling in between - otherwise you'd have no context to invent anything.


Not OP, but I think they're not talking about inventing a _new_ burger, but inventing _the_ burger, as in the first one ever.

As in, the neural net in this example is able to improvise a new burger recipe solely because it was given existing recipes to burgers as input; it did not come up with the notion of a burger and then produce a recipe that outputs something fulfilling that notion when followed.

Personally, I would argue that this distinction is not as clear-cut as the tone of the original comment seems to suggest. Humans didn't invent the burger from nothing either. We've been grilling meat and making bread for millennia, and sandwiches have been a thing for centuries.

A 'burger' is just another iteration of our biological neural nets' attempts to make food from ingredients already present in our physical reality. Given that we flow in a single direction through time, any food we make is in turn added to our list of ingredients for making food "the next time". One could argue it is only a matter of time once meat can be ground into patties and grains turned into bread that burgers start being made - given the relative benefits humans gain from consuming both.

This comes back to what others have expressed elsewhere in this thread: the most important distinctions probably aren't between software and hardware, or organic life and silicon processors, but in the environment and the capacity to interact with that environment. Some innate tendency to experiment (i.e. curiosity) is probably either equal in importance or a direct runner-up.


The burger was invented because a hungry traveler walked into a restaurant in Connecticut that was closing, and the owner had nothing left but some beef and bread. So he improvised: he cooked a beef patty and squeezed it between two bread slices.

To this day they serve their burgers between two bread slices - not buns.

If you want to look it up, it's called LOUIS’ LUNCH.

AI my ass :D


I agree. I think it's very widely known that our ANNs are only very rough approximations of how the brain actually works. The people who say it's a computer implementation of the brain are either laypeople who don't know much about machine learning or the brain, people marketing the hype for personal gain, or people without neuroscience knowledge who have bought into the hype.

I also recently heard an argument for why our ANN models won't spontaneously become sentient: human brains don't learn from just observation, but also from interaction. A young child doesn't learn about how blocks are stacked by looking at images of stacked boxes; they learn through experimentation, by stacking boxes and seeing how their actions affect the world around them. For an AI, that means we either need to also work on robotics so the AI can interact with its environment, not just sense it, or we need to simulate an interactive virtual environment. Some people are working on this and making great strides, but your average toy ANN won't exhibit human intelligence in isolation, in my opinion.

Combine those two things and we’re still quite a ways away from human-like intelligence or implementing a human (or animal)-like brain.


Interestingly, there are some studies implying that intense thinking about an activity (such as a gym workout[1] or hitting a baseball) can improve your physical skills more than not thinking about it. So this supports the notion that you can rewire your brain by thinking, as well as by tactile input.

[1] http://nautil.us/blog/just-imagining-a-workout-can-make-you-...


That's not really what I'm referring to (or at least, only a little). Once you have a mental model of something, you can for sure think on it or build on it without interaction, but to initially set up our mental models (as children or whatever), I believe it takes interaction. Once we have a base, we can think abstractly about it and learn, but building that base is another matter.

Or, put another way, it's my belief that you can _improve_ your physical skills by thinking, but to build the skill in the first place, interaction is necessary.

But even if that's not true and interaction isn't strictly necessary, I think (perhaps wrongly) that few people would disagree that learning by doing is usually far superior to only learning by thinking/reading/listening/watching. So even if not necessary, it's at least more efficient (doing both together is probably most efficient).


Absolutely. I think what AI has highlighted is that the problem set now looks more similar to the human experience: for example, how you train based on input and learn from failure, and how limited information can confuse even a human brain (think image recognition). That said, just because the problem looks the same doesn't imply the method of processing is the same.


I am definitely not an expert on this topic but my impression is that the research is not really focusing on structured abstractions of sensory input, or making these abstractions stateful. Shapes, colours, music, and whatnot are clearly stored and retrieved in our brains, which is something NN research is not looking at (enough).


I don't know.


This article contains inaccuracies and says almost nothing novel for the average Hacker News reader.


> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

https://news.ycombinator.com/newsguidelines.html

What are the inaccuracies?


The same accusation could be leveled at the original article.


Please make a specific and substantive point. Worthwhile discussions do not follow from vague and shallow dismissals.


I'm with you on this. The one real-world example of brain processing speed that's used with any numbers is just inaccurate (the speed of tennis balls and how well players are able to react to them).

There is no analysis of the energy used by the brain to achieve anything, or of how much energy a computer uses for a similar task. So where is the discussion of efficiency?



