Once I started learning more about biology, I realized that everything is just so complex. The body repurposes chemicals a lot, so you have things like serotonin being a key neurotransmitter in the brain, but also in the gut. And you have enzymes that are coded in genes, but then there are also networks of genes that are up- or down-regulated by hundreds of other genes, and sometimes only in certain types of cells or certain physiological environments. And then of course there are epigenetic and immune-modulated effects at the genome, gene-network, and individual-gene levels. Not to even mention all the feedback mechanisms and meta-feedback mechanisms (the drive toward homeostasis is POWERFUL), and the effects of countless chemicals in our environment.
There are certainly clear-cut cause-effect relationships in biological systems, but even they will have edge cases and random chance to muddle the picture.
I would posit that the human body is far more complex than even the largest codebase, not least because it was jury-rigged together with no architect or style guide.
Also, in general, the more common the exposure, the harder it is to find a link; try finding a control group of people who have never been exposed to PTFE, or HSV, and who also aren't living like hunter-gatherers.
The problem is simply observational. We don't even have reliable DNA and RNA sequencing of our own bodies. And we cannot reliably observe things in a host without knowing, to some extent, what we're looking for first. Even then, the search space is so large that it's very hard to ascertain anything accurately. Biology perpetually suffers from a lack of clear observations.
Adding to the complexity is the difficulty, or even literal impossibility, of observing the direct interactions of elements of the system that operate at a quantum scale: interactions you would disturb, and do disturb, when attempting to observe them.
"Everything in biology is more complicated than it looks."
DNA is where we get our physical attributes (modulo environment).
No, a lot of DNA is "junk," i.e. we don't yet understand what it does.
No, a lot of functional DNA is turned on or off by the epigenome.
No, a lot of our metabolism is affected by our microbiome — thousands of species of bacteria that turn various reactions up or down, or produce other chemicals that we need...
Well there’s your problem: no one can make money off of it, unless they develop a new delivery mechanism, etc.
Patents encourage developing new medicines, but not developing new knowledge about (never mind use of) old medicine.
The solution (in the US) is obvious: federal funding of research that stands to help lots of people but not make lots of money. Since most of these patients (in the US) are going to be on Medicare, there could be huge potential cost savings to the taxpayer: memory care is EXPENSIVE, so even the paltry amount covered by Medicare racks up (and the opportunity cost of people paying for private memory care is enormous).
But instead of increasing funding for this kind of life- AND MONEY-saving research, this administration is freezing and slashing research funding, and specifically targeting Columbia for political/Trump’s-petty-grudge reasons.
> Well there’s your problem: no one can make money off of it
You can patent new applications of an existing drug. This has been somewhat of a problem, as companies can just look at how drugs are being used off-label, and patent some of these uses.
People are doing this with veterinary drugs too. A company called Tarsus Pharmaceuticals recently developed a drug called Xdemvy by repurposing an anti-flea drug for dogs. It basically cures an eyelid condition called Demodex blepharitis. They're a $2B company now.
They barely had to do any new science. It just took some creativity and almost $250m worth of clinical trials.
I suspect the lucrative patent system has helped create rather exorbitant costs and restrictions for performing trials, which hinders non-patentable research, ironically.
But both the federal and state governments do fund tons of such research. Some states have specific Alzheimer's trials and funds. I would think they could handle dirt-cheap therapies like this without getting into sweeping political changes. Though I suspect the solution is much harder than just running a trial with the drugs we have, or else we would already be hearing about mountains of evidence from doctors using the medications off-label.
>Well there’s your problem: no one can make money off of it, unless they develop a new delivery mechanism, etc.
You hit the nail on the head. Ketamine is a generic drug that costs next to nothing; Spravato (a ketamine-derived nasal spray) is already a billion-dollar-a-year drug for Johnson & Johnson, with prospects of $5 billion/year.
Worth mentioning that the evidence says that patents don't have an effect on new drug creation/inventions. Evidence is collected here http://www.dklevine.com/general/intellectual/againstfinal.ht..., pretty neat to know that Italy/Switzerland had a patentless pharma industry until quite recently.
Having said that, I think you're right that under this system, research/capital definitely gets directed in a different way.
A major reason Alzheimer's research hasn't advanced in the last 25 years is that patents aren't long enough to study it. Remember: a patent's clock doesn't start when the FDA approves your drug; it starts when you file it, early in development. That's why Ozempic is going off-patent in just a few years even though it's a new product. They patented it a long time ago.
With Alzheimer's, though, the clinical trials are going to take a long time. Probably 10 years at least, because our current understanding of the disease is that it begins in your mid-to-late 40s and only manifests as severe memory loss decades later. Our current method of trying to treat it is like putting someone with stage-12 cancer, already in palliative care, through chemo. It just doesn't work.
But drug companies have no choice because if they run 10-15 year trials, their drug will be off patent before the FDA/EMA even looks at it.
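Back-of-the-envelope, assuming the standard 20-year patent term (the development and trial year counts below are illustrative, not figures from any specific drug):

    PATENT_TERM_YEARS = 20

    def market_exclusivity(years_development: int, years_trials: int) -> int:
        """Years of patent life left once the drug is finally approved (floored at 0)."""
        return max(0, PATENT_TERM_YEARS - years_development - years_trials)

    print(market_exclusivity(5, 7))   # typical drug: ~8 years of on-patent sales
    print(market_exclusivity(5, 15))  # an Alzheimer's-length trial: 0 years left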
If I were King for a day, one thing I'd do is a blanket 40 year patent life on Alzheimer's drugs. It's worth the cost. This disease will bankrupt every nation otherwise.
While I understand the narrative you're proposing, what I brought with my source was a collection of evidence where pharmacological innovation happened at an unaltered rate pre- and post-patent in e.g. Italy and Switzerland. While I understand the hypothesis that "pharma innovation, due to high costs of entry, only happens (or is greatly improved) when guaranteed a monopoly", it doesn't seem to be backed by the data.
I agree with you in principle though - if all that were stopping us from achieving a cure were a 40 year patent, I would support your 1-day monarchy in a heartbeat.
Chapters 9 and 10 of the book cover this in more detail if you're interested (very self-contained).
This is bullshit. Drug research costs money, A LOT OF MONEY. A new drug right now costs somewhere around $5 billion, mostly because 90% of drugs fail in trials.
mRNA vaccines, semaglutide, mAb therapies — none of these would have happened without patents as an incentive.
Then why is it that when pharma patents were introduced in countries that didn't have them, the rate of innovation, TFP, and R&D-as-%-of-GDP didn't increase? I brought a source to this debate; if you have sources showing that increases in patent scope or length, or the introduction of patents, increased pharmacological innovation, I'd love to see them — I'm going down this rabbit hole now and am collecting info.
Another interesting one is [1] where they asked readers of the BMJ to vote on the top 15 most important medical milestones. Of the 15, only the contraceptive pill and Chlorpromazine had anything to do with patents.
In [2], Chemical and Engineering News magazine collected a list of the top pharmaceuticals (46 total). To quote the book I linked:
> Patents had pretty much nothing to do with the development of 20 among the 46 top selling drugs [..] . For the remaining 26 products patents did play an important role [..]. Notice though that of these 26, 4 were discovered completely by chance and then patented (cisplatin, librium, taxol, thorazin), 2 were discovered in university labs before the Bayh-Dole Act was even conceived (cisplatin and taxol). Further, a few were simultaneously discovered by more than one company leading to long and expensive legal battles, however, the details are not relevant to our argument.
Regarding the cost of drug trials, they cover this well in Chapters 9 and 10, I found it quite interesting.
Regarding how else companies make money without being granted temporary govt-backed monopolies, Chapter 6 covers both the theoretical and real-life examples.
If you have termites, you don’t just light the house on fire.
So many tech people try to solve all the problems of Gov tech in the executive branch, which is intentionally slow and conservative. And yet, watch any Congressional hearing about a tech topic, and it’s painfully obvious that Congress has very little expertise in tech issues on staff.
Instead of going 12 rounds with OIRA about the PRA (which I hate as much as the author does), what if we…changed it?
The Judiciary also has no idea how to think about tech issues.
Don’t blame the executive branch for the perverse constraints and incentives created by the Legislative and Judicial branches.
I've been wondering for a while now why we aren't pushing for more technologists in office. I know most of us don't feel ourselves to be temperamentally suited, but it seems sorely needed.
Maybe some of the recent grads who find themselves in a losing tech job market can pivot.
It's expensive, the risks of losing are big, and they dig into your whole life and basically ruin it. But agreed, more people should be in these roles, or at least advising these people — but the money is bad and it requires a different set of skills.
I think it'd be best to start with getting people to run for local, non-partisan offices. School board, etc. You're right that trying for anything higher than that is going to run into life ruining amounts of interference from existing interests, but I think it could be done at the lowest levels first.
> the money is bad and it requires a different set of skills.
Which is one thing that makes me think of recent graduates. Recently retired people/people in tech who've FIREd also might be viable. People who either can't get a high paying tech job or who had them and are past that stage in life - politics is better than service sector work (for the recent grads) and the retired wouldn't depend on the money.
Skills-wise, there are more people going into CS who don't have a passion or intuition for technology — we pushed a lot of people into CS, and STEM in general, over the last decade or so who wouldn't have pursued it in the '90s-2010s. I bet there are lots of C students who could do a better job at understanding tech than our current leaders, and a better job at communicating with non-technical people/schmoozing than most of the talented techies.
> at least advising these people
I think they need to hold office. Advising isn't going to cut it - the incentives for our current politicians to listen to this group aren't there. The only incentive lever we have any hope at pulling (outside of radical system change) is threatening their seats.
“That today Microsoft is a giant company is irrelevant...”
I am not too young to remember the old Microsoft. To say that Microsoft is “irrelevant” is so myopic. Despite Tesla, GM is still relevant. Despite AWS, DB2 mainframes are still relevant. Heck, I have to work with EBCDIC data, a format designed to avoid punching holes too close together on punch cards. Even when we eventually move to a modern DB, decades of archival data are not going to be converted from EBCDIC.
Windows might be irrelevant to FAANG or MANGA or GAMMA or whatever, but how many Fortune 500 companies don’t have a significant Microsoft presence?
Apple computers are pretty nice, but they’re expensive, and the vast majority of employees do fine with a cheap PC and Microsoft 365 — why would a company pay more for unnecessary hardware that also requires rebuilding a bunch of IT systems, not to mention retraining thousands of employees?
I didn't say Microsoft is irrelevant. I said the fact that it's still a huge company is irrelevant when judging whether the old Microsoft was in fact dead or not. The new Microsoft is highly relevant, but Microsoft's philosophy of "Embrace, Extend, Extinguish" to maintain a grip on consumer compute is dead. If anything is the heir to that, it would be AWS.
"Dead" apparently means "no longer the unquestioned industry leader" - which seems like an odd definition of "dead" to me, but ok.
The industry in question being the union of personal desktop and laptop computers, associated software, and internet-related technologies.
What actually happened was the internet-related sector broadened to include new sub-sectors - mobile, search, social, media, cloud, e-commerce, and ad tech - all of which Microsoft either ignored, failed at, or didn't dominate.
The old industries are still there but they're the tail, not the dog.
The dog is far more consumer and consumer-adjacent. MS culture was always more aligned with corporate goals and office productivity. MS never got social and lifestyle computing, which is where the industry was heading. It still doesn't, even in gaming.
AI is going to see a similar shift to a completely different mode of computing, but it's too early to tell how that will work out. At a guess it's going to be much more directly political than anything we've seen so far. (Not in a good way, IMO.)
The amount of effort apparently required to satisfy all the checkboxes around "a cheap PC and Microsoft 365" is astounding. My Fortune 250 laptop runs 3 different security "endpoint" products, and literally dozens of scripts fire each day/hour to make sure that things are "correct" according to every suggestion any consultant ever made to our senior IT staff. And they replace the entire fleet every 3 years. I believe that starting with longer-lived hardware and an inherently more secure environment that didn't need to be groomed like this would be a net savings, but I don't have the numbers to prove it.
> To say that Microsoft is “irrelevant” is so myopic. Despite Tesla, GM is still relevant. Despite AWS, DB2 mainframes are still relevant. Heck, I have to work with EBCDIC data, a format designed to avoid punching holes too close together on punch cards.
There's more free energy in growing things.
Leave shrinking things to private equity.
There might be a lot of money in programming COBOL, but who wants to do that? It's not exciting to be a buzzard and subsist on carcasses.
Microsoft is irrelevant as far as the platform and where most development energy is focused. Can you imagine trying to get funding for a Windows application?
The number of EOs is a poor measure of the extent of executive power. There are many other ways to concentrate executive power; for example OMB (part of the Executive Office of the President) issues “M-memos” (“M” for “management”) to the heads of all departments, instructing them on how to implement EOs, laws, and White House priorities.
There is a consensus among researchers of the workings of the US government, and the legal context thereof, that since FDR, and especially since Reagan, the “imperial presidency” has been gaining ground.
Unless you want to “write a Shakespearean sonnet about Bayes’ rule”. Here’s what Claude Sonnet (ahem) came up with:
"Bayes's Revelation"
What knowledge held before new facts arrive,
Our prior thoughts, though incomplete they be,
Must yield and change as evidence does strive
To shape posterior probability.
Through multiplicative steps we find our way,
As likelihood with prior beliefs combine,
The normalizing constant holds its sway,
While truth emerges, clearer line by line.
O Reverend Bayes, thy theorem divine
Shows how our minds should learn from what we see,
Each observation helps our thoughts refine,
Till posterior wisdom sets uncertainty free.
In statistics' realm, this rule holds fast:
Each present builds upon what's known from past.
The sonnet weaves together the key components of Bayes' theorem - prior probability, likelihood, and how they combine to form the posterior probability. Would you like me to explain any of these statistical concepts in more detail?
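For anyone who wants the math behind the verse, here's a minimal sketch of the theorem in Python (the disease-testing numbers are made up purely for illustration):

    # Bayes' rule: posterior = likelihood * prior / evidence
    prior = 0.01            # P(disease): belief before seeing the test result
    sensitivity = 0.95      # P(positive | disease): the likelihood
    false_positive = 0.05   # P(positive | no disease)

    # The normalizing constant: total probability of a positive test
    evidence = sensitivity * prior + false_positive * (1 - prior)

    posterior = sensitivity * prior / evidence
    print(round(posterior, 3))  # 0.161: a positive test lifts a 1% prior to ~16%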
Although it would be an easier change now that Bard is Gemini…
This is a study of people in their 70s. The vast majority of people with hearing loss in their 70s lost it late in life; they have no Deaf/HH community, they almost never learn to sign, and they often struggle to adjust for their loss of hearing.
The study you linked talks about reduced stimulation, and in particular _social_ stimulation:
> when an individual suffers from moderate to severe hearing loss, they are less likely to participate in social activities. Perhaps they are embarrassed about their hearing loss. Or they may simply find it unrewarding to attend a social event when they cannot hear what is going on.
People who are born deaf/HH, or who lose their hearing early in life, simply do not have any of these difficulties in social contexts within Deaf/HH communities and spaces — provided they are allowed to access and participate in them.
Martha’s Vineyard had an unusually high rate of congenital deafness for centuries [1]. It became a place where everybody, deaf and hearing alike, used sign language regularly. In such a society, being deaf was not a significant impediment to participating in social life at all; I am aware of no evidence that would suggest the dementia rates were higher for the deaf residents just because of their deafness.
A disability is only a disability in a given context; some conditions (e.g. advanced ALS) are disabling in almost all contexts, while others (e.g. a food allergy) are disabling in a relatively narrow set of contexts. The relationship to dementia is caused by the hearing loss mis-fitting the individual’s context; people with the same condition but different contexts would not be deprived of stimulation, and therefore not susceptible to dementia in the same way.
[1]: https://en.wikipedia.org/wiki/Martha's_Vineyard?wprov=sfti1#... (Martha’s Vineyard sign language is a major source for what became American Sign Language. The other was French Sign Language, which is why British Sign Language and ASL are quite different despite sharing the same local spoken language)
I 100% agree with the line you quote and refute in your reply, which I've repeated below...
> when an individual suffers from moderate to severe hearing loss, they are less likely to participate in social activities. Perhaps they are embarrassed about their hearing loss. Or they may simply find it unrewarding to attend a social event when they cannot hear what is going on.
This has been my life experience since the late '60s. It's my life right now.
You replied...
> People who are born deaf/hh , or who lose their hearing early in life, if they are allowed to access and participate Deaf/HH communities and spaces, simply do not have any of these difficulties in social contexts within those communities.
As someone who's been hard of hearing for most of their life, I'm curious exactly where these "HH communities" might have been in 1969, or the '70s, or '80s, or even now in the 2020s? Beyond the occasional subreddit, that is. I suppose in elementary school the teachers could have put me in special-ed classes. Or made me sit at the front of the class all the time. I'm glad they didn't do either.
The local community college used to show lectures for certain classes on cable TV. They had lectures on “Deaf Culture”. The lecturer would use the word “hearies” and generally made a good case for the existence of Deaf culture. I am a “heary”, and I found these lectures eye-opening.
Or you could read the article and use common sense?
> At first, AirWayBill’s managing director, Khaled Sehly, declined to discuss whether the app could be breaking any TSA regulations, asking to go off the record. (I refused.) He then asked me to wait four weeks until they had real users, saying that the reviews left on the app were actually from peers rather than customers. In our next series of broken calls, Sehly directed me towards the app’s terms and conditions and privacy policy, which he says were designed by a reputable law firm. Section 8 of the terms and conditions states that users must comply with technical and legal obligations and restrictions, including customs rules. However, it also indemnifies AirWayBill for liability related to shipments, stating that “any request will be made or accepted at the Members’ own risk” and that unless it’s explicitly specified otherwise within the platform, “AirWayBill’s responsibilities are limited to the correct functioning of the app and its service to the interested parties.”
> Unfortunately for wary couriers, AirWayBill is not planning on running background checks on anybody using their services. Sehly says that delivering packages for strangers—or at least friends of friends—is something that he’s seen done in an unregulated way in the Middle East and parts of Europe, with people asking if someone could deliver items ranging from documents to baby food to their families. Still, the possibility of inadvertently smuggling who knows what still remains. “We always urge people not to carry anything that they are not super confident about,” Sehly says, pointing out that deliverers can inspect the items they plan on transporting.
So the company is obviously taking the caveat emptor aka “f around find out” approach. Drug smugglers already have a lot of skill hiding drugs in seemingly innocuous items (sewn into teddy bears, etc.), so it is not at all an “aggressive assumption” to think being a courier with this service has potentially life changing (i.e. incarceration) risks.
> saying that the reviews left on the app were actually from peers rather than customers.
We need to start arresting people for this. You can't sell a product with fake testimonials. If you're not selling anything, lie to your heart's content. But this is fraud.
But this wouldn’t be in orbit; it would be in what NASA calls “deep space”, which relies on the Deep Space Network [1]. The DSN is severely bandwidth constrained, due primarily to a lack of ground antennas. Indeed, for instruments that are located outside Earth’s orbit (e.g. SOHO, which is at Sun-Earth L1 [2]), bandwidth is often a limiting constraint in the design.
My understanding is that some newer instruments do both compress and select data to be downloaded (i.e. prioritizing signal over noise), and that there is more and more consideration of on-board processing for future missions, as well as possibly introducing the capability within DSN itself to prioritize which instruments get bandwidth based on scientific value of their data.
Source: A presentation from people at NASA Heliophysics last week, where this very topic came up.
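As a toy illustration of that "select and prioritize before downlink" idea — the packet names, science-value scores, and bandwidth budget below are entirely hypothetical, not how any actual NASA mission schedules its telemetry:

    # Greedy downlink scheduler: send the highest value-per-bit packets first,
    # until the bandwidth budget for this pass is spent.
    packets = [
        {"name": "cme_event",  "bits": 4_000, "science_value": 9.0},
        {"name": "quiet_sun",  "bits": 8_000, "science_value": 1.0},
        {"name": "flare_peak", "bits": 2_000, "science_value": 8.0},
    ]

    budget_bits = 6_000
    by_density = sorted(packets, key=lambda p: p["science_value"] / p["bits"],
                        reverse=True)

    sent = []
    for p in by_density:
        if p["bits"] <= budget_bits:
            sent.append(p["name"])
            budget_bits -= p["bits"]

    print(sent)  # ['flare_peak', 'cme_event'] — the quiet-Sun data waits for the next pass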
The DSN is a radio network. In its present form, this is going to be ineffective for receiving a meaningful amount of imagery data from signals emitted by a lightweight space probe at 500AU. At ~150AU the current 25-70m dishes are getting less than 40 bits per second from Voyager 1.
Instead, we would use lasers, with far superior gain to what radio communication is capable of. The divergence of even a decent pocket laser-pointer diode is less than 0.1 degree. That corresponds to a gain of 10*log10(41,253/(0.1*0.1)) ≈ 66 dB (41,253 being the number of square degrees in the full sphere). Launch telescopes of modest size can increase this further. Receiver telescopes fitted with narrowband filters can then home in on that laser signal.
> "First, transmitted beams from optical telescopes are far more slender than their radio counterparts owing to the high gain of optical telescopes (150 dB for the Keck Telescope versus 70 dB for Arecibo)." - https://www.princeton.edu/~willman/observatory/oseti/bioast9...
From the interview linked, it sounds like the current plan doesn't involve the DSN at all: they're effectively out of transmission range past a certain point, and the transmission back is optical, using a big earth or space-based telescope. Which is one of the scary things he mentioned: they're going to be entirely autonomous when collecting the data.