This is fascinating to me. The list of things the author suggests the cerebellum handles is a tailor-made list of things I'm oddly bad at:
1. I'm very uncoordinated, with a noticeable intention tremor
2. I'm particularly bad at sequencing dependencies for projects/errands/household tasks. I have to write down even fairly simple sequences of subtasks or get lost in yak-shaving loops
3. When flustered, I produce very distinctive disfluencies in speech around conjunctions (swapping "but"/"and"/"although") and sequencing (i.e., placing objects before subjects/verbs) in sentences, as well as swapping relationships (referring to someone's parent as their child or vice versa, swapping "you" and "I/me", etc.)
4. I tend to have a "ground up" approach to writing (building clauses first and then moving them around to construct sentences), which doesn't resemble the approach of other people I've shoulder-surfed.
All of these are fairly mild in terms of life impact, to be clear (or perhaps I'm just able to compensate for them satisfactorily in various ways), but I wonder if they all share some underlying minor cerebellar dysfunction.
I thought that the list of cognitive impairments sounded like a laundry list of ASD symptoms, and indeed, at least some researchers seem to believe that there's a connection between autism and cerebellar dysfunction:
Often, with the brain, multiple regions are needed for a single function. For example, planning actions (including movement) has much to do with the prefrontal cortex, and the basal ganglia are much involved in making smooth motions and inhibiting tremors, etc. Problems with the dopamine system also affect things like planning (ADHD) and tremors (Parkinson's).
I can't help thinking that this is where humanity is heading. Given that the different parts of the brain have to compete for resources, and given what the cerebellum does, it makes sense that a less developed one can be an advantage: it frees up resources for parts of the brain that are more important in our times.
Human society provides nowhere near enough pressure for evolution to have an effect (for good reasons), and the human timeline is nowhere near lengthy enough so far either. So I don't think we are heading anywhere in that regard.
I agree, but on the flip side it is interesting to consider what pressures and what timeline brought us to where we are now.
Great ape brains are distinguished from monkey brains by their larger frontal and cerebellar lobes. The Neanderthals had bigger brains than us but smaller cerebella. And, most strikingly, modern humans have much bigger cerebella than “anatomically modern” Cro-Magnon humans of only 50,000 years ago (but relatively smaller cerebral hemispheres!)
How would you explain increasing average height among US youth over the last 30 years? It can't be food availability, because 30 years ago people weren't hungry either.
Absolutely food, or at most a hormonal change based on food. Possibly with a side order of lacking a few of the childhood diseases that have been controlled with inoculations. 30 years is WAY too small a timeframe for human evolution. That works on the order of 50-100 generations, not a single generation.
Evolution doesn't happen quickly, but selection does. If something greatly changed how we select partners in the past 100 years, it could have an effect within a single generation. What could that be? Well, maybe feminism, so women don't need to rely on a man to provide income; maybe socialist policies, so nobody has to go hungry; and so on. There are plenty of reasons why some genes that were previously fine could be selected out of the gene pool.
Lack of disease, and disease treatment. Mostly vaccines. I got sick like 2-3 times grand total growing up, and only once more than mildly. My parents and their parents all got badly sick more than that growing up, and both have stories about being on death's door; e.g., my dad got German measles at 17 and was bedridden for a month. I didn't get my last growth spurt until I was 19.
I'm not sure about that. Being unable to anticipate context sounds terrible for pretty much any task. Having to think through every step of anything is terrible. Not being able to form sentences fluently, and instead having to arrange them like puzzles, is far too time consuming. The article is literally explaining how a large cerebellum is crucial for humans' high intelligence. Reallocating resources to other parts of the brain would make us stupider.
Having no context does allow for a fresh perspective...
Having to think things through slowly, step by step, may reveal errors others glossed over...
The cerebellum is important. But maybe, since we really know less about the brain than we think we do, differently wired does not equate to 'stupider'. It takes all sorts to make a world.
I agree with the language part, tedious... but maybe in certain situations it might be useful.
I mean yes, people with disabilities often have a unique contribution to offer and a unique perspective. They often excel at very specific things. However, implying that it is so advantageous to be disabled that we would all evolve to be that way is ridiculous. Thinking things through slowly, step by step, may reveal errors others have glossed over... but it's perfectly possible for someone without a severe disability to do that. Meanwhile, the person with minimal to no cerebellum is totally incapable of stepping up to the task when, as is often the case, speed and intuition are important. If I weren't able to rely on context and intuition at all, I wouldn't be able to do my job, or even day-to-day tasks, with enough efficiency to get anything done. Furthermore, someone without a cerebellum will also have to learn not to smash themselves in the face when they're trying to eat, and will have to deal with constant sensory processing issues. This is not advantageous. It's a disability. So it's not going to be picked up by natural selection.
The premise that needs to be scrutinized is whether what we are deeming a “disability” is actually that. Thinking in systems, a component’s calibration (whether it is able or not) to a system can change due to the system as a whole changing (political, economic, cultural pressure).
One example is how the change from hunter/gatherer to agricultural lifestyles may have rendered the strengths of the hunter’s brain a weakness in an agricultural society.
The issue is nobody is convincing me that being unable to use context and sensory input properly would be advantageous in our current society. Because it wouldn't. And that's the end of that.
I imagine it's like performing a task in software ("big" brain, cerebrum) or hardware (cerebellum). One of them is faster and more efficient but very specialized. If it breaks you're left with the task being performed slower and less efficiently on the brain that can execute arbitrary code.
But I can't imagine any change in this split in responsibilities will happen on human relevant timescales. The cerebellum is probably not evolving very fast anyway, while the cerebrum might evolve comparatively faster but it has no pressure to do it. And "faster" still means tens of thousands of years.
In this same sense, people with no children can still help raise relatives, provide for them, or otherwise contribute to the long-term success of their extended family, and thus indirectly spread their genes.
Indeed, the article reminded me of the link between executive dysfunction (ADHD) and other problems like sensory processing disorders and postural sway.
Turns out studies have confirmed the overlap in these conditions and also linked it with reduced grey matter volume in the cerebellum:
What a marvellous article. One thing I’ve never quite appreciated in neuroscience is how useful physical movement is as a debugging layer. During a task, in observing gaits, tremors, speed, accuracy, etc., you’re able to gain a deeper understanding of how cognition works for non-movement tasks. I guess cognition is, after all, still just movement, but through a conceptual plane instead of a physical one.
It's been argued that most, maybe all, of human cognition is based on a sort of folk-physics mental model with objects. We throw ideas into the ring to be chewed on and distilled and maybe brought in to practice or thrown out as useless. The linguistic metaphors we use, at least, to talk about ideas and abstractions, are never more than one, maybe two, steps away from a hand moving or rearranging something.
“The linguistic metaphors we use, at least, to talk about ideas and abstractions, are never more than one, maybe two, steps away from a hand moving or rearranging something”
And the Glasgow Coma Scale is exactly that: how people physically respond to stimulus with a body movement. Subsequent complex body movements - e.g., just humans vaguely deciding - are how we assess levels of consciousness.
It’s not scientific at all, is the point, and is very “folk-physics mental model with objects”, per the OP’s original assertion.
I think part of this connection reflects a universal (not just coincidental) equivalence between intelligence, (logical) reasoning, and planning. Finding a solution to a logical problem is perhaps similar to finding a path in a motion planning problem. You have to consider a number of paths going from A (premises, or in general your current state) to B (a desired proposition, or in general "something that is good/satisfies you"). You have to search for a path (within a heuristic list of possible paths), or plan a path of consistent propositions that in the end prove what you want (although our notions of proof and consistency are usually relaxed for practical purposes: when I say "The window was left open, and we are in the winter, therefore the room will be cold", this is not a real proof in the Euclidean sense).
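To make the path-finding analogy concrete, here's a minimal sketch (my own toy example, not from the article): breadth-first search over a tiny, made-up graph of one-step implications, from a premise to a desired conclusion.

    from collections import deque

    # Hypothetical one-step implications: proposition -> what it supports.
    IMPLICATIONS = {
        "window left open": ["cold air enters"],
        "it is winter": ["outside air is cold"],
        "outside air is cold": ["cold air enters"],
        "cold air enters": ["room gets cold"],
    }

    def find_chain(premise, goal):
        # BFS from premise to goal, returning the chain of propositions.
        queue = deque([[premise]])
        seen = {premise}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in IMPLICATIONS.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # no chain of "proof steps" found

    print(find_chain("window left open", "room gets cold"))
    # -> ['window left open', 'cold air enters', 'room gets cold']

Swap the queue for a priority queue with a heuristic and you get something much closer to the A*-style search used in actual motion planning.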
Kind of like how object permanence [0] must be learned by babies, slowly worked into their internal model, even though "things don't usually vanish when they go behind other things" seems like reliably low-hanging fruit for any process (whether evolutionary or meddling demigod) to wire up as instinctual physics knowledge.
Animals like newborn deer fawns are born knowing how to walk, follow their mothers around, and run away from danger, although their legs are weak at first. So this makes me wonder if having to learn object permanence is just one more example of human babies being underdeveloped compared to those of other species of animals.
Alternatively, God made human babies superior in design from birth by giving them increased adaptability. The extra flexibility might require the specific movements to be trained, putting them behind on any specific goal at the start, but over time letting them exceed the innate capabilities of other species. By exceed, I mean they might have a higher variety of behaviors or perform them better (especially with tech).
In A.I., we see this with hard-coded assembly FSMs vs. interpreters running high-level code. The former works with high efficiency out of the box but can't change behavior or improve much. The latter does nothing until it's taught the extra knowledge (interpreted code), which might also include new behaviors (functions). Many game developers switched from hard-coded assembly to interpreted code for AI agents for that reason.
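A toy sketch of that contrast (state names and transitions invented here, not from any particular game): the hard-coded FSM bakes behavior into control flow, while the data-driven "interpreter" reads transitions from a table that can grow at runtime.

    # Hard-coded: the behavior is fixed in the control flow.
    def fsm_step(state, sees_player):
        if state == "patrol" and sees_player:
            return "chase"
        if state == "chase" and not sees_player:
            return "patrol"
        return state

    # Interpreted: the behavior is data, extensible without new code.
    TRANSITIONS = {
        ("patrol", True): "chase",
        ("chase", False): "patrol",
    }

    def interpreted_step(state, sees_player):
        return TRANSITIONS.get((state, sees_player), state)

    # "Learning" a new behavior is just adding an entry:
    TRANSITIONS[("chase", True)] = "attack"
    print(interpreted_step("chase", True))  # attack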
So, it’s not under-developed: it’s a better-developed component with different tradeoffs.
> God made human babies superior in design from birth by giving them increased adaptability.
Holy assumptions, Batman. Homo sapiens aren't "superior"; that's a terrible starting place for a hypothesis. There are plenty of metrics on which a human will never beat other species. Reaction time is a great starting place.
It's like you're saying "We humans are the smartest in the world. Smarter animals are better than other animals, so we're better than all other animals."
A giraffe may just as well believe "We giraffes have the longest necks in the world. Animals with longer necks are better than others, so we're better than all other animals."
"Intelligence" itself is not well defined anyway. It may as well mean thinking like a human.
The apex predators of the Earth, who are in control of it using brains that nothing else can match. They also regularly contemplate this using their morals, imaginations, and reasoning. Definitely far above the rest.
At least six global mass extinctions suggest to me that you're correct. However, life itself did survive global magma flows, asteroid impacts, etc. With that in mind I'd say the species that rules Earth is probably a single celled organism which lives deep underground and feeds on chemical gradients. You could blast the whole planet apart without life changing a whole lot for that little guy and his brethren.
To me it makes very little sense for object permanence to be learned rather than innate - have a look at the "contradicting evidence" in the article you linked
See the book Metaphors We Live By for more on this. (It's a little heady and the authors lost me with some of their deeper claims in the second half of the book, but the first few chapters were fascinating)
The first video linked in the OP is astonishing to me, a layperson with no medical training.
It's giving me lots to think about with regard to motor impairment. (Basically, I'm reviewing my prejudices and nodding, due to a better understanding of the plight of afflicted individuals.) [0]
One thing I’ve never quite appreciated in neuroscience is how useful physical movement is as a debugging layer.
My sister, who is a choreographer, had some interesting views on how movement could be used as therapy. (Specifically, crawling, as a base-level movement.) I thought that was woo-woo, but later I read that there was some medical support for this.
Brains are contextual. A scent might bring back a specific memory of a time and place, complete with visual and auditory hallucinations, for lack of a better term. All trauma happened in the individual's past, often childhood. It makes sense that revisiting childhood contexts in any form might offer access to the regions of the brain which encode the traumatic experiences. Revisiting prior experiences with a present perspective and working to put them into words is the underpinning of talk therapy.
"The cerebellum may also inspire artificial-intelligence approaches somewhat, especially approaches to robotics or other control, in that it may be be beneficial to include a fast feedforward-only predictive modeling step to control real-time actions..."
This is pretty widespread in controls, actually. The dominant control technique for legged robotics is model-predictive control ("MPC") which explicitly uses such a predictive model to determine the best inputs to the actuators.
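For a flavor of what that looks like, here is a minimal random-shooting MPC sketch (a toy 1-D point mass with invented constants; real controllers use structured optimizers and far better models, but the re-plan-every-step shape is the same):

    import numpy as np

    rng = np.random.default_rng(0)
    DT, HORIZON, CANDIDATES = 0.05, 20, 256

    def predict(pos, vel, forces):
        # Forward-simulate a 1-D point mass under a candidate force sequence.
        for f in forces:
            vel = vel + f * DT
            pos = pos + vel * DT
        return pos, vel

    def mpc_step(pos, vel, target):
        # Score random candidate sequences with the predictive model.
        seqs = rng.uniform(-1.0, 1.0, size=(CANDIDATES, HORIZON))
        costs = []
        for forces in seqs:
            p, v = predict(pos, vel, forces)
            costs.append((p - target) ** 2 + 0.1 * v ** 2)
        return seqs[int(np.argmin(costs))][0]  # apply only the first input

    pos, vel = 0.0, 0.0
    for _ in range(100):  # receding-horizon outer loop
        force = mpc_step(pos, vel, target=1.0)
        vel += force * DT
        pos += vel * DT
    print(round(pos, 2))  # settles near the target, ~1.0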
Predictive models are also behind many of the SOTA results in modern reinforcement learning, although they are often used to generate fictive data from which a policy is learnt.
Amazing article, and for a layperson who’s been reading a lot of neuroscience, the perfect level of complexity. I love an article that makes you (internally) shout “why haven’t I wondered about that before?” over and over, so thanks for that. A compliment as to clarity of purpose, I suppose.
Materially:
That’s bad news for anyone hoping to simulate a brain digitally. It means there’s a lot more relevant stuff to simulate (like the learning that goes on within cells) than the connectionist paradigm of treating each biological neuron like a neural-net “neuron” would imply, and thus the computational requirements of simulating a brain are higher — maybe vastly higher — than connectionists hope.
I get where she’s coming from, and she’s not wrong, but it seems like an unnecessary detour to dunk on another AI “camp” in the field for drama points and satisfaction - the Marcus Maneuver, if you will.
Connectionism isn’t a cult or an institution; it’s a paradigm that emphasizes the utility of big nets of interconnected smaller pieces. Any self-avowed connectionist (are there any left? Honest question) could just retreat to “OK, well, it’s networks of brain cells plus smaller intracellular networks” and keep their paradigm. And all we can say to that is “ugh, I guess”, as far as I can tell!
At least one self-avowed professional connectionist here. I was coming to make a similar critique.
Connectionism isn't and never was about trying to simulate the biological neural networks and other anatomy. As you said, it's about emphasizing the network and network emergent phenomena over isolated pieces (e.g. "grandmother neurons" or strict localization of brain function). At the information processing level the contrast is to classicism/symbolicism that tries to explain cognition as atomic and modular operations on symbols.
Good to hear from one, thanks! My big insight when I first started really getting into AI was “we should unite the two camps now!”, only to find out we’ve basically been doing that since the 90s. So I'm glad the debate survives a bit to this day!
Honestly, after 2023 I think we’re all connectionists in a way, lol, except for the old guard, of which Chomsky might be the only (adjacent) one left. Godspeed to Chomsky; I honestly wouldn’t be surprised if he has one last scientific revolution left in him.
I was also stumped by this exact quote. The whole article was in the best spirit, until this.
It's a model we have; it will get updated in order to be more useful. Every engineering field builds a model of reality. Who are the connectionists? Is this some kind of "those people" label for whatever causes fear in a typical layperson?
Precisely, connectionism in modern AI boils down to the idea that learning should be expressed in terms of DAGs that are composed of simpler units. It’s quite likely that the units that are currently used are too abstract, but this doesn’t necessarily mean the paradigm itself is flawed.
I don't think connectionism is restricted to acyclic graphs. Or even graphs in general. But you're right that the connectionism as an approach is more abstract than just simulating neuron behavior.
> The cerebellum has a repeated, almost crystal-like neural structure:
As a software engineer who did neurosurgery residency, my intuition/guess is that the cerebellum is kind of like the FPGA of the brain.
The cerebrum is great for doing very complicated novel tasks, but that takes time and energy. The cerebellum, on the other hand, is specialized in encoding common tasks so it can do them quickly and efficiently. A lot of our motor learning is in fact wiring the cerebellum correctly.
This can actually lead to an interesting amnesia, where a person can learn a skill (cerebellum) but not remember learning the skill (cerebrum). So you could end up with a person who would think that he had never seen a basketball hoop or basketball before but could be doing layups, dunks, and 3 pointers with ease.
> So you could end up with a person who would think that he had never seen a basketball hoop or basketball before but could be doing layups, dunks, and 3 pointers with ease.
It just made me start thinking, and then I realized perhaps another analogy is a just-in-time compiler: code, or skills, used often enough that your body manages to compile them into native neurological code and store that appropriately.
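A toy sketch of that analogy (invented, and closer to memoization than real compilation, but it shows the hot-path idea): count how often a routine runs and, past a threshold, swap in a stored fast path.

    import functools

    HOT_THRESHOLD = 3  # how many runs before a routine counts as "hot"

    def jit_like(slow_fn):
        cache, counts = {}, {}
        @functools.wraps(slow_fn)
        def wrapper(x):
            if x in cache:                  # "muscle memory" fast path
                return cache[x]
            counts[x] = counts.get(x, 0) + 1
            result = slow_fn(x)             # deliberate, step-by-step path
            if counts[x] >= HOT_THRESHOLD:  # hot enough: keep the routine
                cache[x] = result
            return result
        return wrapper

    @jit_like
    def tie_knot(style):
        return f"carefully tied ({style})"  # stands in for slow, conscious work

    for _ in range(5):
        tie_knot("standard")  # early calls take the slow path; later ones don't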
It is always funny to see brain metaphors morph to resemble our current stage of technological development, as the years go by. First it was anima, or hydraulic analogies of spirits and fluid moving through the body. Then it was clocks, the mechanistic processes of the brain. And so on and so on until today we metaphorize the brain to be like computer hardware. In vogue as well is comparing it to neural networks, due to the influence of machine learning and AI today. I wonder what metaphors we will come up with next.
I always liken reality to the fractal boundary of the Mandelbrot set, and our attempts to understand it through language and metaphor as ways to approximate and fit that boundary. Consider the successive colored stripes as ever more updated and accurate metaphors in the following video
There are a few who have started suggesting that quantum mechanics plays a large role in cognition, but very few take them seriously (obviously it has an effect, but likely much can be understood more classically, etc.).
The fact that few are moving toward that style of thinking seems to give a bit more credibility to NNs being closer to the correct model. If spiking NNs take off more, we'll probably see more arguments around that, and if Blue Brain's full in-silico modeling takes off we may see the succinct description given by those studies used to describe ideas. However, to first approximation, NNs and spiking NNs aren't really a bad way to reason about large descriptions of brain dynamics, in many circumstances.
There’s zero evidence that there’s anything more quantum mechanical about the brain than about a brick. I.e., physical and chemical interactions that emerge from quantum behavior, but can be modeled just fine without QM.
Instead people seem to just equate two different complex things they don’t understand with each other.
Though it's not like we're flitting from one bad analogy to another. Hydraulics are a great metaphor for understanding how computers work, for example.
I always thought neural networks were an example of the analogy working the other direction. Instead of modeling our brain on the technology of the time, we chose to model the next technology on how we think our brains work?
Heh, not sure if you're aware, but our brain seems to have special treatment or logic for contextualizing "high technology", as indicated by one of its well-documented failure modes: the "influencing machine", a feature of schizophrenia, is a delusion that contemporary high technology (magnets, pneumatics, gears, mind-control drugs, satellites, probably AI now, etc.) is being used by mysterious attackers to control the sufferer's body and mind:
https://en.m.wikipedia.org/wiki/On_the_Origin_of_the_%22Infl...
In all honesty, I believe the reverse is true. Our technology seems modeled after humans and the environment we inhabit. Airplanes being glorified birds, wheels being glorified feet, computers being glorified brains or neural networks...well.
I did. They still don't have anything in common other than being means of locomotion across the ground. Unlike, e.g., cameras, which are directly inspired by the eye, or airplane wings, which are directly inspired by bird wings. Wheels are not directly inspired by anything biological; they were probably invented from log bearings for easier dragging of heavy things across the ground.
Both are spot-on examples of what the cerebellum does. If I may, a third example/analogy that comes to mind is cache memory or L2ARC drives; at least that’s how I have it stored in my mind (pun intended) :-)
"The brain is like a computer that"-style analogies are rarely fitting, or so vague as to being almost useless. My fridge is an L2 cache for food I want to eat soon.
It's an analogy for a reason. It bothers me when people combat analogies so incredibly hard. Of course, it is not really fitting or the same thing - it's an analogy.
It would be a useful analogy for someone intricately familiar with computers but who was only sort of vaguely familiar with the concept of eating, has thought about houses only on occasion, and knows about refrigerators only insofar as they’re a food-related thing inside a house.
This actually happened to me when I lost most of my memory in an accident. I couldn’t even remember my job tasks enough to describe how to do them. They were repetitive, though, with it all in my intuition or muscle memory. I could do the stuff without knowing why I was doing it. My friends and I joked I was the Jason Bourne of the place. It’s a strange feeling, too, because I could feel that something was missing as I acted.
I was also a supervisor in a competitive company with people gunning for my job. The Recession was not a good time to be disabled. I feared demotion or termination. I hid my injury while trying to get back in mental shape. They chalked up occasional forgetfulness to the stress we were all under and my constant partying. (That was before I was in a relationship with Jesus Christ and gave up those sins.)
Eventually, one of my recovery strategies was to take note of the specific things I did on instinct, think about why I did them, and re-create the mental models. I’d also just ask people how they did things and what they learned works best. Many were people I trained with my prior techniques who re-taught them to me. By practicing those, I both re-learned my mental model of how to do the job and connected it with the instinctual wiring.
There were one or two other tricks that helped. That was the part relevant to your comment, though. I never quite got back to my old level of performance. Capitalizing on how intuitive memory is different from conscious memory has helped me in many places ever since. I just keep breaking things into simple pieces that I repeat over and over, then keep applying those pieces in new ways to keep my brain fresh.
Yeah, the discussion of classical conditioning led me to the same sort of conclusion. The fact that the cerebellum has been growing faster in human-like primates as a percentage of our already larger brains... well, I can't help but think that all our social reactions, drives, and complex needs are essentially some kind of co-option of this FPGA for optimization purposes. Like the cerebrum does training and the cerebellum et al. do evaluation.
Neurosurgery residency is very, very, very intense. Unfortunately, not everyone finishes. When I was in medical school, I remember some general surgery residents quitting after falling asleep in the middle of an operation; another neurosurgery resident I rotated with was pretty miserable, and I found out later he quit. I would have liked to be a neurosurgeon, but simply didn’t have the physical stamina.
I ended up becoming a radiologist. I never heard of a radiology resident quitting, although I have seen a few residents get kicked out for mental issues or gross incompetence.
Undergrad/premed: live with family, have 100% 24/7 familial support of your education, living at home with all of your essential basic living needs taken care of.
Residency: move to a different location away from family, no longer living in a dormitory environment with the expectations associated with being a student but are now a real adult making your way in the world. Suddenly you have to make the whole package work on your own without laundry/cooking/mental health/financial support.
Now you can no longer put 100% of yourself into your studies, but instead can only manage the 60 or 70% that most people can muster when they have to actually maintain their physical existence while also meeting their professional expectations.
It happens. Often incompetence is specific to one specialty - neurosurgery is competitive, so you can assume that anyone who gets it has at least adequate grades/test scores. But that doesn't mean that they're clinically worth a damn.
I'm an anesthesiologist. There are people who wash out because they just don't have the temperament for it. They're not dumb, they're not even bad doctors, they just aren't mentally equipped to sit back and relax while running a code.
Obviously not OP and not in this position, but I have worked with people who left surgical training positions and their reasons were health and a realisation that they would miss every family milestone, never get a real break and have every part of their life revolve around their job with the money and god-like power not compensating for that.
Obviously that’s one side of the equation, I don’t have any surgeon friends I know well enough to give the opposing view.
OP said "did" implying finished. It's a six year residency minimum, though the first year is general surgery. It's not often people do the whole damn thing, then decide to bail. Usually it's after 2 or 3 years
Though some people are less burdened by golden handcuffs and sunk-cost fallacy
Plus, I've never met a happy (or sane) neurosurgeon
I'm in no way even close to being in the medical field, but I could see it as an irreconcilable dichotomy between the Hippocratic oath and the fact that you're going to be causing damage no matter what.
Sure, the hemorrhage needs to be fixed, so you're preventing further damage, but every cut may cause unknown ramifications. Anyway, I'm postulating that a neurosurgeon would be aware of this and have to carry that around with them.
Medicine as a career offers immense personal fulfillment, variety, human interaction, and prestige at the expense of dealing with difficult outcomes and ranges of personal sacrifice -- neurosurgery as a specialty just takes all of these to their extremes.
I value the former and find ways to discount the latter. So I am very happy. Though sane or not would be up to others.
$800,000 is a lot of money, even after taxes you'd only have to work a few years at that salary before you could live reasonably comfortably for the rest of your life without working at all.
Seems perfectly reasonable to switch to a lower-stress career at some point.
People do change, especially when exposed to a stressful environment for a long time. Ask a post-burnout fellow. Fortunately, most re-evaluate their lives before burning out.
Assuming they only got as far as their residency (and didn't end up as an attending physician), it's possible that they didn't see themselves spending a full 7 years as a resident doctor (making under $100k/year working 80+ hour weeks) only to spend the rest of their lives doing more of the same, except with a much higher salary. If they had already graduated from their residency, then the reasoning is the same, except it'd be a much harder decision because of the sunk cost.
Do you think this could be responsible for some part of "muscle memory"? Sometimes when I switch to other identities (DID), they can forget steps. That presumably happens because those steps are automatic for me, so I don't have to think about them, but when others try to do the same thing (not think about them), the automatic thing doesn't happen, and they end up missing the step entirely. They have to remind themselves to think consciously even about things that are normally automatic for me, because they don't have the same muscle memory.
I also wonder if neurodivergency affects this region. I'm autistic, so my brain is detail-oriented. Sometimes it feels like I can perceive "neural circuits" that are implemented by the so-called FPGA. When I have a compulsive behavior or trigger, I can sometimes observe the entire execution flow, not just the result. I think that's neat.
> That got me interested: since the wiring is so long (from limbs to cerebellum), what kinds of motor learning?
I'd imagine it's things like training a dominant hand. The skills required for precise motor control, to produce the right movements for e.g. handwriting. Since the wiring is so long, and feedback is delayed, you need to be able to precalculate these movements.
Also imagine how e.g. an intent to move somewhere actually gets implemented. You don't always have to think about each individual step of walking, or pay explicit attention to things like your sense of balance. You probably don't even have to choose that you're going to walk, or think about how to get up. When you want to go somewhere, you just do it, and somehow it's all calculated for you and happens.
When you try to move a specific limb, how do you know which muscles correspond to that limb? In fact, how many of those muscles can you even individually address? You can learn to individually address them, but I bet you don't come with that ability by default.
Then of course there's the question of what even causes your limbs to move once you will them to move.
What do you mean? The cerebellum is closer to the spinal cord than the rest of the brain. And there's no learning happening anywhere but the brain; vertebrates don't have a distributed central nervous system like octopuses do. The only thing vertebrate limbs can do on their own is certain hardcoded reflex actions.
There is processing in the autonomic nervous system, all sorts of regulation and adaptation, perhaps something that could be called learning – we don’t really know what happens in the gut nerve complex yet, as its discovery is so recent.
Fully autonomous muscle movements, like the heartbeat or the peristaltic motion of the gut, are not directly controlled by the brain, but they certainly respond to signals sent by the brain – e.g., the brain processes sensory data, determines there’s a danger, and upregulates the pituitary gland, which starts secreting adrenocorticotrophic hormone, which, once it reaches your adrenal glands, causes them to start producing adrenaline, which in turn causes all sorts of changes all over the body related to the fight-or-flight reaction.
The skeletal muscles, though – I don’t think their innervation is capable of anything but reflexive motion without the brain telling them what to do.
Out of curiosity, having the kind of weird symptoms not far from the weird amnesia you describe: do you know of any books/resources for understanding advanced brain neurology like this?
Psilocybin and other psychedelics may be that exogenous agent, they release Brain-derived neurotrophic factor (BDNF) [0], which plausibly could cause “rewiring of the cerebellum” [1][2], and may even do this with sub-perceptible (micro)doses [3].
Any signal, that's the point. The cerebellum learns the patterns of signals involved in motor control.
This is why you train your skills by doing the correct movement over and over again. Once the cerebellum has adjusted to the correct motor signal patterns the correct movement will become effortless.
You'd have to apply an adverse stimulus in under a ~5ms threshold to actions that were 'wrong'. It would depend on the exact task you're trying to do though. That would then cause other areas to potentiate that specific movement/firing as incorrect.
It's an active area of research in sports and the DoD, as you'd theoretically be able to train marksmen and athletes at a much faster and better rate. However, even really, really fast computers aren't quite fast enough to apply the adverse stimulus to 'wrong' movements/firing.
Also, your computer had better be really accurate and never mess up, or that person is going to have a hell of a time retraining their brain. Also, their brain may conclude that the clouds/temperature/itchy grass/breakfast are the reasons for the adverse stimulus, as this is all happening in a subconscious time frame. So, good luck there.
My overall read on this article is that its claims are probably overconfident. Like, it seems interesting, but like she's seeing a few results and making big claims about how the cerebellum plays into overall cognition, and my general sense is that lots of humility is usually warranted here: that simple and decisive statements usually turn out to be riddled with provisos and unexplained behavior.
You are correct. We don't really know what's going on. Every claim can be met with an equally emphatic opposite claim with equally compelling evidence by someone cherry picking the "correct" studies and listening to the "correct" people. What we call neuroscience is still in a pre-Newtonian era.
After sharing this piece with a neuroscientist coworker, I got the same feedback: interesting article, but it should probably be taken with a grain of salt where the author extrapolates from studies.
I missed this, but after reading Steve’s comment, I don’t see much in his “little time machine” theory of function that conflicts with the original article’s ideas, except on the classical conditioning point.
That's an interesting perspective. While I agree that the pace of advancements in neuroscience is slower compared to AI, I think it's important to note that understanding the brain is a fundamentally different problem than building intelligent machines. The human brain is an incredibly complex system with billions of interconnected neurons, and we still have a long way to go in terms of fully understanding how it works.
AI, on the other hand, is designed to solve specific problems efficiently, and it can be engineered to mimic certain aspects of human cognition without necessarily needing to understand the underlying mechanisms.
While it's possible that AI could eventually help us better understand the brain, I believe that advancements in neuroscience will continue to be crucial for unlocking the full potential of AI. Understanding how the brain processes information, learns, and makes decisions could lead to the development of more sophisticated and human-like AI systems.
I like the beginning of the article quite a lot for giving an overview of the cerebellum and teaching that it is the home of unconscious learning, but to me it goes into weak speculation quite quickly. First, 'Purkinje cells learn individually, but no other cells were found to do that' leads to 'neuron connectivity is not enough to simulate brain activity', even while knowing that higher-level mental activity exists without a cerebellum. Then there's the idea that the cerebellum might be the home of measurement just because a Purkinje cell can time a reaction, and then, based on the headlines (I lost interest in attentive reading), speculation that it is the place for anticipation and sensing. I get the feeling of wanting to expand the cerebellum onto as much as fantasy stretches. The whole cerebellum topic sounds fascinating without completely 'rethinking intelligence'.
The brain is not like a neural network where the only thing that is “learned” or “updated” is the weights between neurons. At least some learning evidently happens within individual neurons.
That’s bad news for anyone hoping to simulate a brain digitally. It means there’s a lot more relevant stuff to simulate (like the learning that goes on within cells) than the connectionist paradigm of treating each biological neuron like a neural-net “neuron” would imply, and thus the computational requirements of simulating a brain are higher — maybe vastly higher — than connectionists hope.
I had heard this as well close to ten years ago on some NPR radio show: That researchers had reasons to suspect that a whole lot more processing happens within the synapses themselves.
Perkingee: when you read one orthography with the rules of another. Curiously, this also happens to "Czech" or "Czechia" itself, which I've been shocked to learn some pronounce chechia instead of checkia, explaining the baffling confusion with Chechnya.
So classical conditioning in humans requires special cells in the Cerebellum (Purkinje cells), which can even do single-cell learning. Which artificial neurons can't do, as only weights (artificial synapses) are updated. So how is classical conditioning actually implemented in artificial neural networks? I assume there is some minimum network which makes it work.
I always assumed an ANN is simply a universal, learnable function approximator. That is, there is no direct equivalent of classical conditioning; only pairs of input data and expected output.
There must be a minimal ANN architecture which implements classical conditioning. This architecture could be quite limited in what it can learn compared to ANNs in general. Similar to how feed-forward networks are limited compared to RNNs.
You can train single layer neural nets. Not very useful, but they do exist.
There are certain ANN architectures that relied on, essentially, classical conditioning based on Hebbian learning rules and variants thereof. Kohonen self-organizing maps are an example of that.
Not that such historical systems are popular today, though.
Well classical conditioning kind of only makes sense in the context of an agent that is receiving inputs and taking actions on them. Many neural networks don't solve problems of that type, and so have no need for classical conditioning.
But when you do have such a problem, conditioning is not very complicated. The normal algorithms and neural structures are designed to learn stuff like "when a given input happens, a certain action must be taken", and that's all you really need for conditioning. How does it actually do it? Well, I guess with gradient descent it would work something like this: every time there is a puff of air, the network will be like "damn, I should have blinked to avoid this", and so it makes its current internal state a little more likely to lead to blinking. Gradually, as it happens more times, it will learn a strong association with the ringing bell or whatever.
Yeah. It's just not quite clear what the minimal example of such a network would be. I assume you have N inputs and one output. The output is always active when input 1 is active; otherwise the output is inactive. So the other inputs are ignored. However, when one of those other inputs, x, tends to be temporally correlated with input 1, after a while x will generate an output upon activation even if input 1 isn't active. If x becomes decorrelated from input 1, x will again get ignored. I'm not sure what the simplest network architecture looks like that implements this behavior.
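For what it's worth, a single linear unit trained with the delta rule already shows exactly that behavior; this is essentially the Rescorla-Wagner model of conditioning, sketched here with invented constants (and no claim that the cerebellum implements it this way):

    import numpy as np

    w = np.zeros(2)  # w[0]: weight for input 1 (US); w[1]: weight for x (CS)
    lr = 0.2

    def trial(us, cs, target):
        global w
        inputs = np.array([us, cs], dtype=float)
        error = target - w @ inputs  # prediction error
        w += lr * error * inputs     # delta rule / Rescorla-Wagner update

    for _ in range(50):  # acquisition: x is correlated with input 1
        trial(us=1, cs=1, target=1)
    print(round(w[1], 2))  # ~0.5: x alone now drives some output

    for _ in range(50):  # extinction: x active while input 1 is silent
        trial(us=0, cs=1, target=0)
    print(round(w[1], 2))  # ~0.0: x is ignored again

With both inputs active, the credit splits between the two weights, which is why the x weight only reaches ~0.5 here; blocking and overshadowing effects fall out of the same update rule.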
> special cells in the Cerebellum (Purkinje cells), which can even do single-cell learning.
As a neuroscience novice, I've always assumed that something about the gross model of the neuron, as far as I understand it, cannot be correct or is incomplete. Because I never understood why single cells aren't already performing single cell learning, given that there are always far more dendrites than axons.
Since this characteristic turns each neuron into a lossy compression function, there has to be some process by which certain dendrites are considered 'more important' carriers of information than others, in order to make a tie-breaking decision about what to include in the compressed signal and what to throw out, as the cell decides whether or not to transmit an impulse (including whether or not to override prior inhibitory signals) down the axon.
I think it's possible. When you get down to the biology and chemistry the electrical action potential driving all this results in ion channels opening and closing on both sides to pass on the signal. I totally believe individual dendrite-axon connections can lose or gain strength over time based on an optimization of the efficiency of these ion channels.
I think this goes above and beyond a simple mathematical weight, and the neuron structures around these dendrite-axon pathways change or express their genes differently over time.
Well, not all incoming signals get the same weight for the outgoing signal, as the dendrites are, e.g., more or less close to the part of the cell where the spikes are generated. But this computation is just analogous to the connection weights together with the activation function in artificial neural networks. That's not what enables classical conditioning in single cerebellum cells.
At a rough guess, each Purkinje cell is an MLP unto itself, and as the article states, this implies some orders of magnitude more computation for a brain simulation. I also heard something like 'a neuron is an MLP unto itself' on the Brain Inspired podcast. It's likely we've vastly underestimated the processing power of the brain.
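One sketch of what "a neuron is an MLP unto itself" might mean (sizes and weights invented): instead of a point neuron computing a single weighted sum, each cell routes its inputs through a hidden layer of nonlinear "dendritic subunits".

    import numpy as np

    rng = np.random.default_rng(1)
    n_inputs, n_branches = 1000, 32  # hypothetical synapse/branch counts

    def point_neuron(x, w):
        # Classic connectionist unit: one weighted sum, one nonlinearity.
        return np.tanh(w @ x)

    def mlp_neuron(x, W_branches, w_soma):
        # One cell as a tiny two-layer MLP over its own dendritic tree.
        subunits = np.tanh(W_branches @ x)  # nonlinear dendritic branches
        return np.tanh(w_soma @ subunits)   # soma integrates the branches

    x = rng.normal(size=n_inputs)
    w = rng.normal(size=n_inputs) / np.sqrt(n_inputs)
    W_branches = rng.normal(size=(n_branches, n_inputs)) / np.sqrt(n_inputs)
    w_soma = rng.normal(size=n_branches) / np.sqrt(n_branches)

    print(point_neuron(x, w))                 # ~one multiply-add per synapse
    print(mlp_neuron(x, W_branches, w_soma))  # ~n_branches times that work

Multiply that extra per-cell factor across tens of billions of neurons and you get the "maybe vastly higher" simulation cost the article worries about.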
They probably aren't, at least at the level we often teach. Much of our knowledge about the brain comes from observing people who have pieces missing and seeing how their behaviour differs from a normal adult or putting people in an fMRI scanner and saying "wow, that area used a lot of oxygen compared to baseline". This, and a scientist's nature to classify things, led to a lot of overoptimistic categorization of brain function to specific regions. As neuroscience has matured the field has grown to recognize a more nuanced view that most computation in the brain is more distributed than we first assumed, and different areas are often involved in overlapping functions. It can also change over time or after extreme brain trauma. But it's not correct to say it's fully distributed either. The honest answer is we still have an extremely poor understanding of how the brain works.
My guess: because it's useful, with the following analogy.
We use text embeddings to represent concepts/written words. They lack nuance when the same word has different meanings in different contexts. LLMs use text embeddings and enrich them with the attention mechanism.
For words that in reality are used to represent a single concept, a text embedding works perfectly on its own.
For those concepts that are context dependent, we're using the attention mechanism to gently guide the text embedding closer to the intended meaning, as identified by the surrounding words. That's the role of the Value vector of the K, Q, V triplet in the attention mechanism, to be precise.
So, this is a simplistic approach, which corresponds to a "first approximation" and could be good enough for some cases. We don't know exactly which cases yet, but we'll know once enough evidence is given to the contrary.
It's not a good model, but a very good approach in order to do research in a stepwise manner. With time, it'll get more and more nuanced, one approximation at a time.
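To make the K/Q/V description above concrete, here's a minimal single-head attention sketch (toy sizes, random weights; real models add multiple heads, scaling tricks, masking, and MLP blocks):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 8                        # toy embedding dimension
    X = rng.normal(size=(3, d))  # embeddings for 3 context words
    Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)  # relevance of each word to each other word
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over the context

    context = weights @ V    # Value vectors, mixed by relevance
    X_refined = X + context  # each embedding nudged toward its in-context meaning
    print(X_refined.shape)   # (3, 8)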
Even if you are a full materialist, it's fallacious to assume there is one "part" that does something, like it's a factory assembly line. Instead it might be a function of the composition of brain parts.
It's like asking which part of a bat makes it fly. The wings? Well kind of, but you need more than that. I guess you can fly without feet... it's just not a well formed question.
If you ask the question a bit differently, then it's not a mystery at all: why do brain parts whose neural structure is conducive to fine and agile motor control perform motor control?
> Even if you are a full materialist, it's fallacious to assume there is one "part" that does something, like it's a factory assembly line. Instead it might be a function of the composition of brain parts.
It's true that a particular function may not be localizable more specifically than the brain (or even the whole body), because defining it as a distinct function may not reflect the organization of components within the body. But it's still performed by a defined physical system, and there are still sub-functions necessary to perform that function that are localizable to narrower components.
> It's like asking which part of a bat makes it fly.
It's like that in the sense that we absolutely can describe specific parts of the bat and what each contributes to flight.
> In total, the cerebellum contains 80% of all neurons!
Apples and oranges, but that's so reminiscent of MLPs in transformers. A similarly large fraction of the weights in transformers comes from the MLPs in each layer.
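Rough parameter counting for one standard transformer layer backs this up (assuming the common 4x MLP expansion; exact ratios vary by architecture):

    d = 4096                      # hypothetical model width
    attn_params = 4 * d * d       # Wq, Wk, Wv, and the output projection
    mlp_params = 2 * d * (4 * d)  # up-projection plus down-projection

    print(mlp_params / (attn_params + mlp_params))  # 8/12, i.e. ~2/3 of the layer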
> While humans don’t have these kinds of sensory systems
(He's talking about sensing the 3D environment using electric fields)
I wonder whether binaural hearing is such a sensory system. You can blindfold someone, then lead them into a space. They can tell whether they're outdoors, or in a small, bare room, or a concert hall, or a room with furniture and drapes. Perhaps they can tell whether they're near or far from a wall, and in which direction.
A sighted person who is not blindfolded can do the same thing, relying on reconstructing their 3D environment from the way electromagnetic radiation affects the rhodopsin in their retinas rather than the movement of hairs in their cochlea due to air pressure changes over time, and integrating the differences between a spatially separate pair of detectors.
“It’s a big job, but it’s not easy.” - Is this a problem? The way I understand this sentence is: "It is a big job, and you might be excited about it, but it is not easy."
When talking to someone with whom we have a good rapport and good context, this conversation can be done faster.
Basically, I/O is slower than the CPU.
Some of the other sentences can also be explained in similar manner.
This article made me wonder if dyspraxia is related to impaired or inhibited cerebellar function. A cursory search yields at least one article that supports the idea:
> Results revealed that children with DCD had reduced grey matter volume in several regions, namely: the brainstem, right/left crus I, right crus II, left VI, right VIIb, and right VIIIa lobules
Timing is also needed in speech, which is quite fast compared to conscious response; so the problem with conjunctions might just be that the cerebellum anticipates them.
Note that time perception is distorted so that we don't notice how slow conscious response is.
So the feeling of "competence" is when your cerebellum is anticipating correctly.
Not really, but it can be involved for some tasks. 'Muscle memory' is a bit of a complex thing. It's not so much the firing of the neurons as it is the timing of that firing. Your reaction time is at the ~5ms level, much longer than the muscles need to move in concert to, say, hit a 3-pointer. Controlling all of that can take place all the way from the brain down to the ganglia of the spinal cord. Drinking a cup of tea while reading will mostly take place before the brain gets a chance to intervene, for example, while riding a bike will involve more of the brain. I want to stress that it's a complex and not well studied area of active research.
There is not, it would just be general neuroscience. I'm unaware of specific labs either. Google would be your best friend in terms of trying to find specific researchers and in reaching out to them.
I’ve many times had the experience of trying to debug someone’s computer problem, and trying to describe how to fix something, I couldn’t think of what to do in words. So I said, “my hands know where the answer is” and once I had the mouse I clicked around and did the task fairly quickly. I wonder if that was the cerebellum solving the problem for me?
I'm betting it's a higher-frequency, lower-latency, high-throughput dedicated part. The slow stuff thinks for longer and sends a "goal" to this part, which translates it into a series of higher-frequency (compared to the slow part) activations, which are fast enough to produce fluid movement. Probably on the order of a few milliseconds.
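A toy two-rate sketch of that architecture (rates, gains, and physics all invented): a slow "deliberate" loop refreshes the goal occasionally, while a fast inner loop turns the current goal into smooth corrections every millisecond.

    DT_FAST = 0.001   # 1 kHz inner loop
    SLOW_EVERY = 100  # slow loop ticks once per 100 fast ticks

    pos, vel, goal = 0.0, 0.0, 0.0
    for tick in range(2000):  # simulate 2 seconds
        if tick % SLOW_EVERY == 0:
            goal = 1.0  # slow part: choose/refresh the target
        # Fast part: simple proportional-derivative correction.
        force = 50.0 * (goal - pos) - 10.0 * vel
        vel += force * DT_FAST
        pos += vel * DT_FAST

    print(round(pos, 2))  # smoothly approaches the goal, ~1.0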
Come on, no one knows... after 10 years in computational neuroscience + experimental neuroscience.
The slope of neuroscience advances is very, very low (some will say negative).
The slope of AI advances is much, much higher.
--> We will get an AI to understand the brain and explain it to us; it will not come from a lab.
Kandel is superb but it's written for grad students and advanced undergrads with a solid biology foundation. A typical undergrad neurosci textbook would be an easier start for a non-biology person.
>"ah yes, the thinking happens in the cerebellum.”
Why do we NOT think so?
It must be capable of thinking alone, as only mammals have the neocortex. It would only be logical to expect the more universal cognitive abilities to happen in the cerebellum, and only those that are specific to mammals alone to happen in the neocortex.