My bank in LT requires proof of income, and my income is lower than the minimum, because so far I am only a CS student. And I do not need six months or more to repay it; I could repay it in a month.
He could have tried injecting senolytics into the worn discs to remove senescent inflammatory phenotypes, as there is some research on senolytics (dasatinib + quercetin) improving arthritic knee joints by removing senescent inflammatory cells, which allows new cartilage to grow.
There is no information on what they actually measure in every individual. Without some real-time feedback, there doesn't seem to be enough basis to deliver the treatment to the intended target. (But if they're hiring...)
What do you mean, "there's no information on what they actually measure in every individual"? We are measuring slow waves, the synchronous firing of neurons that defines deep sleep. This is a real-time feedback system, with measured ERPs on a 5-second-on, 5-second-off stimulation protocol.
Happy to answer any questions, if I'm not understanding your statement correctly.
That's ok. There just seems to be no explicit mention that individual differences will be taken into account in real time, just a textbook-style description.
Have you considered regulating skeletal muscle tone?
Ok, thanks for the feedback. I thought our "how it works" was pretty clear, but also simple for people to understand. It's a narrow path to wander.
By "regulating skeletal muscle tone" do you mean as an input? Or as a target?
We've mostly focused on neuro, though we did an experiment in vagal stimulation; but you could never be sure what you were measuring, so we ditched that and focused on the area with the most research.
PTAS (phase-targeted auditory stimulation, the technical term for the approach) has a considerable amount of research behind it, with impactful results.
Always keen to learn more if there is something in muscle tone you think we should be looking at.
Reduced skeletal muscle tone during sleep is pretty much established. If you find that some increased muscle tone remains that correlates with insomnia or reduces the efficiency of PTAS, you could target it. Though you're already targeting it indirectly via delta waves.
Yes, reduced muscle tone is absolutely established; now I think I know what you're saying. We often let the researchers guide the areas they are most interested in. Lots of interest in Alzheimer's, Parkinson's, and depression. We're interested in looking at insulin response and other metabolic factors.
If 40 times a second means 40 Hz, then this frequency is pretty high for deep sleep (in humans); 40 Hz would suggest intense pattern recognition/focused attention.
>>When we say voltage gradient, think the traditional ions and the like. But also think of the voltage gradient that a protein can have too, with binding pockets and stuff. Think voltage gradients that are held in place by lipid rafts on the membrane too. Think also the osmotic potential that ion concentration will have, not just the raw total voltage of a voltmeter. There are a lot of components, and therefore gradients, that make up the voltage potential.
It seems that both Claude and you use "voltage gradient" and "ion gradient" interchangeably, which may not be technically correct. In electrical engineering, voltage = potential difference between two points = the driving force that makes current "flow" from a point of higher potential to a lower one (typically). Thus it is a voltage (or a field) that drives an ion gradient, or any charge gradient.
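(As an aside, the standard bridge between the two pictures is the Nernst equation: an ion concentration gradient across a membrane corresponds to an equilibrium voltage. A minimal sketch, using textbook-scale mammalian concentrations as illustrative values, not measurements:)

```python
import math

def nernst_mv(z, c_out, c_in, temp_k=310.0):
    """Equilibrium (Nernst) potential in millivolts for an ion of
    valence z, given outside/inside concentrations (any common unit)."""
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    return 1000.0 * (R * temp_k) / (z * F) * math.log(c_out / c_in)

# Typical mammalian concentrations in mM (illustrative textbook values)
print(round(nernst_mv(+1, 5, 140)))   # K+:  about -89 mV
print(round(nernst_mv(+1, 145, 12)))  # Na+: about +67 mV
```

The concentration ratio alone fixes the equilibrium voltage, which is why the two vocabularies get used interchangeably even though, strictly, they are different quantities.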
Yes, ions and charges will flow in bio too, but that flow is generally restricted and used somehow. Nature is always finding a way to take a toll. Cells will also set up a voltage difference to accomplish things, all on their own.
Like, cells are quite happy to make massive charge differences (for their size) and then use that to do some little thing. Generally, they use ATP to skirt around those pesky little entropy issues and act like Maxwell's demon.
Like, they are using ion flow/current and deciding how it will benefit them. They gate it, on and off, to induce all kinds of signaling, meiosis, and energy transfer.
So, in bio, a voltage/ion gradient isn't really thought of the same way as in EE. Like, we care about 10 or so K+ ions; that little of a difference can do all sorts of things for a cell. And the voltage potentials can be titanic, because the distances are so small. That Van der Waals force, man, you don't think it do, but it do.
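(The "titanic over small distances" point survives quick arithmetic. Assuming a textbook-scale ~150 mV potential across a ~5 nm lipid membrane, the field is an order of magnitude above the breakdown field of air; values below are illustrative, not measurements:)

```python
# Back-of-envelope: electric field across a lipid membrane vs. lightning scale.
v_membrane = 0.15    # membrane potential, volts (~150 mV, illustrative)
d_membrane = 5e-9    # membrane thickness, meters (~5 nm, illustrative)

e_membrane = v_membrane / d_membrane   # field strength, V/m
e_air_breakdown = 3e6                  # dielectric breakdown of air, V/m

print(f"membrane field: {e_membrane:.1e} V/m")   # ~3.0e+07 V/m
print(f"vs air breakdown: {e_membrane / e_air_breakdown:.0f}x")  # ~10x
```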
One important and subtle thing I learned going from my engineering/physics background into bio is to not assume that cells are little micro machines. They are in fact alive. They study you back, in their own limited ways. They try very hard to stay alive too. So, when bio people talk about cells doing things, we really do mean that they have agency.
I read one neurosurgeon (developing a theory of quantum biology) claim that mitochondria can develop voltage potentials comparable to a lightning bolt. Then I searched a bit on PubMed and found figures more like a hundred or a couple hundred millivolts.
But I was curious what you think about the ways ligands find their receptors inside or outside cells in a dense bioelectrical and biochemical environment (as described here [0]). When I asked on Stack Exchange, they gave me a link about gradients and concentrations, but my question was about the very beginning of a ligand's effect, when it needs to find and activate at least one receptor. No receptor seems able to "sense" a region of space containing a concentration of ligand; it needs direct binding of a ligand. But before that, how does a ligand find its way to the receptor?
This may differ depending on whether it's a small- or large-molecule ligand, but my ligands of interest are ions (Ca/Mg, Na, K, Cl; Li), peptides, anticancer drugs with metallocomplexes, ion channel drugs, and similar drugs.
I think you're asking: how does some random bit of protein/ion/drug find its way to a receptor? Is that correct?
Stochastically. It's all random, as far as we can tell.
The things that make it all work, though, are the large numbers of receptors and binding thingys, the very small spaces, and the temperature. The cell is really kinda jam-packed with stuff. But, since we're at ~97°F or so, things bounce around a lot. The key here is the mean free path. Depending on the thing you're looking at, its mean free path is generally sufficient to get the pieces together to party. If not, then you start getting into really complex and hyper-specific transport mechanisms. Each of those is going to be its own little research world, with little broad application.
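(That "bounce around until you party" picture has a classic quantitative summary: the Smoluchowski diffusion-limited association rate, k = 4*pi*D*R. A sketch with illustrative, textbook-scale values for D and the capture radius, not measurements of any particular ligand:)

```python
import math

def smoluchowski_rate(diff_m2s, capture_radius_m):
    """Diffusion-limited association rate constant in M^-1 s^-1:
    k = 4*pi*D*R per molecule pair (m^3/s), converted to per-molar."""
    N_A = 6.022e23                                   # Avogadro's number
    per_pair = 4 * math.pi * diff_m2s * capture_radius_m  # m^3/s
    return per_pair * N_A * 1000.0                   # 1000 L per m^3

# Small molecule in water: D ~ 1e-9 m^2/s, capture radius ~ 1 nm
k = smoluchowski_rate(1e-9, 1e-9)
print(f"{k:.1e}")  # ~7.6e+09 -- the classic 1e9-1e10 M^-1 s^-1 diffusion limit
```

Purely random collisions already saturate the observed association rates of many fast enzymes and receptors, which is why "it's stochastic" is a workable answer at all.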
With large molecule drugs, you're likely using some clever transport mechanism with cleavages and digestion steps along the way. These are really some marvels of bioengineering.
With ions, you're just doing simple diffusion modeling, and the body very tightly regulates these ion concentrations.
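("Simple diffusion modeling" here mostly means the Einstein relation: a 3D random walker covers a mean-square distance of 6*D*t. A sketch with an illustrative small-ion diffusion coefficient, which also shows why diffusion works across a cell but not across a tissue:)

```python
def diffusion_time_s(distance_m, diff_m2s):
    """Characteristic time for a 3D random walker to diffuse a given
    RMS distance, from <x^2> = 6*D*t."""
    return distance_m**2 / (6.0 * diff_m2s)

D_ION = 1e-9  # m^2/s, typical small ion in water (illustrative value)

print(diffusion_time_s(10e-6, D_ION))  # across a 10 um cell: ~0.017 s
print(diffusion_time_s(1e-3, D_ION))   # across 1 mm of tissue: ~170 s
```

The quadratic scaling with distance is the whole story: microns are fast, millimeters are slow, which is part of why bodies bother with circulation and transport machinery at all.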
With peptides and these 'medium'-sized things, you're looking at a combination of diffusion and some hacking of the cell's machinery.
Again, I want to stress something here. We're still on the cusp of really understanding biology as a species. This stuff isn't EE. We're trying to unravel ~4 billion years of random-ass evolution, it's going to take a few thousand years for us to do that. Neither you nor I will see biology as a mature science.
Yeah, that's correct, though I'd probably omit proteins and large molecules, which require transport vesicles and other specific transport mechanisms.
Stochasticity sounds like some theoretical modelling has been performed to infer this. But does it imply that some tiny percentage of any ligand's molecules, endogenous or exogenous, would just by chance get "an empty run", never binding to their receptors (though structurally they're fine, high-affinity ligands), and be removed via waste-removal systems? Is there any experimental evidence for this, like a study using radiolabelled high-affinity ligand molecules to see what percentage of them gets "an empty run"?
The mean free path seems sort of sensible in the extracellular space, though the variables affecting it (the large numbers of receptors and binding sites, the very small spaces, and the temperature) may still not be enough. But wouldn't the mean free path be near zero inside cells, where every nanometer should be occupied by some other biochemical pathway/reaction or bioelectric activity?
>>Neither you nor I will see biology as a mature science.
I personally wouldn't care much about proving anything to anybody in some absolute sense; first of all I'd want to prove things instrumentally and make stuff work, for myself at least. I think any biology student with a decent understanding should have a mini lab for personalized medicine (e.g., Sinclair mentioned that his recent research on using six chemical compounds for OSK epigenetic reprogramming, rather than bulky viral vectors, can be done by any biology student).
Oh yes, many studies on unused targets/receptors are out there. It's a very common thing in the cell. Sure, yes, there are a lot of transport mechanisms to move the higher-Dalton things about. But, again, it's all kinda random down there. Look at a lot of synapse regulation and you'll see that signaling molecules will escape the cleft and have to be digested. There's this really fun 'dance' that astrocytes do to regulate damaged NMDA receptors (and likely all receptors) that kinda makes the synapse just spill out all its signaling compounds for a little while. The cilia on neurons will also act as a kind of passive radar for the cell, just taking in signals and seeing what's going on with all the unused stuff floating about.
The mean free path is pretty much 0 all over, so to speak. I was just trying to tie it back into more EE concepts for you. The idea is that things are just randomly moving about, with a 'free' mean free path, until they aren't, and that stoppage costs energy. At body temperature, it doesn't take much to knock a binding ligand out of a cleft. So the stiffer the bind, the harder it is to dissociate, and the harder it is to get it to unbind at the end. Nature kinda figures this all out on her own, and the optimal energies are found via evolution. It's all a 'good enough' system.
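(The "doesn't take much to knock a ligand out" intuition is just the Boltzmann factor: thermal energy RT at 310 K is about 2.6 kJ/mol, so whether a bind survives comes down to exp(-dG/RT). A sketch with illustrative binding energies, not values for any real ligand:)

```python
import math

def unbinding_factor(dg_kj_mol, temp_k=310.0):
    """Boltzmann factor exp(-dG/RT): relative likelihood that thermal
    motion carries a ligand out of a binding well of depth dG (kJ/mol)."""
    R = 8.314e-3  # gas constant in kJ/(mol*K)
    return math.exp(-dg_kj_mol / (R * temp_k))

# RT at body temperature is ~2.6 kJ/mol
print(unbinding_factor(5))    # weak bind: ~0.14, pops out all the time
print(unbinding_factor(50))   # strong bind: ~4e-9, essentially stuck
```

A tenfold change in well depth swings the escape factor by many orders of magnitude, which is the knob evolution gets to turn when it tunes a "good enough" binding energy.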
So, the trick with bio is that it's a lot like how Clausewitz thinks of war: War is easy, it's just that all the easy stuff is really hard. In that, it's conceptually easy to do bio. It's just that it's really hard to implement anything. Feynman talked a bit about it in one of his lectures. In that, getting a rat to randomly go into a room and then discover that there is cheese in it will take a tremendous amount of prep and careful cleaning and the like. Rats have really really good noses. It's so easy to fool yourself in bio, because the systems are just so complicated. And, for me, that's been true up and down the size scale, from single cells to whole animals. The systems are just so complex, you really only get to ask simple questions and then hope you controlled the experiment correctly.
All the empirical examples you mentioned pertain to the extracellular space. So is this stochastic modelling also true in the intracellular space, which is something like 100x denser structurally, biochemically, and bioelectrically (given that all biochemistry is effectively a type of electrical process involving very refined transfer/manipulation of charge densities)? And does it explain how hundreds or thousands of biochemical reactions inside cells happen as required without interfering with each other?
Evolution also "tries" to save energy wherever possible, so spending energy on synthesizing endogenous ligands that will eventually be discarded seems a bit redundant. There is also a theorem in evolutionary game theory that the probability that natural selection will allow an organism to see reality as it is (= the truth) is exactly zero, as it's enough to make it just "good enough". I was arguing about that with Gemini, and it agreed with me. My point is that "evolution" is just a tool (like ChatGPT) with its own instrumentally limited pool of empirical data (80% of which was obtained from macroscopic observations rather than reverse engineering or experimentation) to build upon.
I actually want to apply one EE concept which has some experimental basis. The reason I am digging into this is that I am searching for possible explanations of a couple of dozen experimental studies in bioelectrics/magnetics I found (though I won't discuss it in depth on a public forum).
I mean, how is the intracellular space denser than the extracellular? That means they wouldn't float.
The stochastic nature of the cell, as far as I know, is pretty much the same inside and out, with more transport mechanisms occurring inside to make sure things get to where they need to be.
It's not that the discarded ligands (for example) are really 'discarded'. There are a few instances I know of that use the 'waste' as a product unto itself. The TOR network comes to mind here. Still, trying to really figure out what the 'intention' was all those billions of years ago is hard, and networks and feedback loops have been built up over the eons. Like, yeah, nothing is really wasted in a cell, per se. But it can seem that way in the particular chain you're looking at.
I'd love to know more about the magnetic side of things here. Is it memristors as synapses? Because that is a criminally misunderstood area of neuroscience.
>>how is the intracellular space denser than the extracellular?
Gemini:
```
Yes, the intracellular space is denser than the extracellular space:
Here's why:
Packing: Cells are packed with molecules like proteins, carbohydrates, and nucleic acids. These molecules take up a significant amount of space within the cell, leaving little room for just water.
Solutes: The intracellular space contains a higher concentration of dissolved molecules (solutes) compared to the extracellular space. This contributes to a higher density.
Extracellular Matrix: The extracellular space, on the other hand, contains a looser network of connective tissues and fluids like interstitial fluid. This allows for more space between molecules, resulting in a lower density.
```
>>Still, trying to really figure out what the 'intention' was all those billions of years ago is hard
With this logic you'll need another billion years to randomly figure it out. I'd rather focus on how, and how efficiently, such a position contributes to a specific current experimental methodology or its results.
Yeah, sure, cell densities vary (fat vs. muscle), but pretty much any cell sample you gather is going to be near the same density as the surrounding water environment. Again, there is a lot of variation, though. The end result is that the density of a cell is near enough the density of water; it's not 100x more dense. I mean, iron is only ~8x more dense than water.
100x was a demo, not an actual number. But please explain how intracellular content with DNA, RNA, proteins, structural organelles, and all of these metabolic constituents [0] is supposed to be the density of water. Do you want the cells in the endothelium of a blood vessel to float, allow the blood to get into the wall of the vessel, and cause hematomas and hemorrhages?
Thank you so much for your explanations - I have learned a lot. I have some more questions but also have about 40 tabs of papers and terms to digest first, and I think the thread will probably go stale by then. May I ask what you studied and how you came into this type of knowledge from an engineering background, and whether you'd recommend any recent texts to come up to date on this stuff with?
I did a career change from particle physics to bioengineering and neuroscience ( and now AI/ML for EE applications with bioreactors, but that's another story).
There aren't a lot of recent texts, really. US-based academia in the last few years has been really bad, as the replication crisis turned into a dry cough, i.e., make up data all you want, no one will care.
I'd really recommend reading Darwin though. Going way back to the literal foundation really helps set the stage, mentally, and bring you back to what is really going on with relation to the wider human condition.
Just about any review article more than 10 years old is also going to be pretty good. I'd stay away from review articles less than 5 years old, though, as things change and retractions come out.
I'll warn you though, the concepts and mental models that you've built up on the Engineering side are not really going to help you with the bio side. Yes, the study habits will help. But bio is really really complicated. You can't abstract the cow into a meter sphere of water. In bio, you really do care about that cell on the medial side of the fourth mesenchymal layer of the second stomach of the cow. You are going to have to get comfortable memorizing pathways and strange names for a few years before all the pieces will even start fitting together. Again, bio is something that's been surviving, ripping, and gouging, for ~4 billion years. She don't have time to stop and let us know what is up.
In parallel with Darwin, also try more recent work, like Donald Hoffman's "The Case Against Reality", where they prove that the probability that evolution will equip an organism to see true reality is zero.
When the FDA or its precursor was being formed in the 18xx's, its goal was to confine the various "bioelectrical woo" present in medicine and biology at the time. And back then there was Rife's microscope, for example, which was able to accurately image living cells.
Yet no one has tried to account for the cumulative damage/adverse effects done by FDA-approved treatments in comparison with the potential or actual damage done by such "woo".
With such apparent speed and quality of research, though, we will never have an anatomical compiler, let alone electroceuticals and anthrobots, on a routine basis, at least not in the next couple hundred years.