You should have read the paper (or at least the abstract) first, as they used macaque and human organoid models, too. :) But yes, the overall effects of the region of DNA in question (an enhancer) are fairly small.
I have read it now. Comment below. As to an increase in mouse neocortical volume—they have ZERO data. Look at their figures 1g and 1h and weep for the crap reviewing.
> Stopping an experiment once you find a significant effect but before you reach your predetermined sample size is classic P hacking.
Although much of the article is basic common sense, and although I'm not a statistician, I had to seriously question the author's understanding of statistics at this point. The predetermined sample size (from the power calculation) is usually based on an assumption about the effect size; if the effect size turns out to be much larger than you assumed, then a smaller sample size can be statistically sound.
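To make that concrete, here is a minimal sketch of the power-analysis arithmetic using statsmodels; the effect sizes and power target are purely illustrative, not taken from the article:

    # Sample size per group for a two-sample t-test (alpha = 0.05, power = 0.8).
    # A larger true effect size means a much smaller required sample.
    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower()
    for d in (0.3, 0.5, 1.0):  # assumed standardized effect sizes (Cohen's d)
        n = power.solve_power(effect_size=d, alpha=0.05, power=0.8)
        print(f"d = {d}: ~{n:.0f} participants per group")

Roughly, the required n per group shrinks from ~175 at d = 0.3 to ~17 at d = 1.0, which is the whole point: if the effect is bigger than assumed, a smaller sample can still be adequately powered.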
Clinical trials very frequently do exactly this -- stop before they reach a predetermined sample size -- by design, once certain pre-defined thresholds have been passed. Other than not having to spend extra time and effort, the reasons are at least twofold: first, significant early evidence of futility means you no longer have to waste patients' time; second, early evidence of utility means you can move an effective treatment into practice that much sooner.
A classic example of this was with clinical trials evaluating the effect of circumcision on susceptibility to HIV infection; two separate trials were stopped early when interim analyses showed massive benefits of circumcision [0, 1].
In experimental studies, early evidence of efficacy doesn't mean you stop there, report your results, and go home; the typical approach, if the experiment is adequately powered, is to repeat it (three independent replicates is the informal gold standard).
There are of course statistical methods designed to support early stopping. But I don't think you can run a regular significance test every day and decide to stop as soon as p < 0.05. That's something else.
You use a full two-sided ANOVA F-test with a multiple-comparison correction for that. Even these tests are sometimes not conservative enough, because the correction is a bit of a guess.
You will end up needing a much higher number of trials to hit the p-value threshold than in the version with a predetermined number of trials and no stopping based on p.
Say, in a single-variable, single-run ABX test, 8 is the usual number needed according to the Fisher frequentist approach.
If you do multiple comparisons and want to hit 0.05, I believe you need 21 trials instead. (Don't quote me on that; compute your own Bayesian beta-prior probability.)
The number of trials needed to differentiate from a fair coin is the typical comparison prior, giving a beta distribution. You're trying to set up a ratio between the two of them, one fitted to your data, the other the null.
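If it helps, here is a rough sketch of the kind of Beta-prior comparison I think is being described: the marginal likelihood of the data under a Beta(1,1) prior on the coin's bias, divided by the likelihood under the fair-coin null. The trial counts below are placeholders for illustration, not the 8 or 21 quoted above:

    import numpy as np
    from scipy.special import betaln

    def bf_vs_fair_coin(k, n, a=1.0, b=1.0):
        # Bayes factor: Beta(a, b) prior on the success rate vs. a point null of 0.5
        log_m1 = betaln(k + a, n - k + b) - betaln(a, b)  # marginal likelihood under the alternative
        log_m0 = n * np.log(0.5)                          # likelihood under the fair-coin null
        return np.exp(log_m1 - log_m0)

    print(bf_vs_fair_coin(k=8, n=8))    # a perfect short run: strong evidence against the fair coin
    print(bf_vs_fair_coin(k=15, n=21))  # a modest hit rate over more trials: only weak evidence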
Multiple comparisons and sequential hypothesis testing / early stopping aren't the same problem. There might be a way to wrangle an F test into a sequential hypothesis testing approach, but it's not obvious (to me anyway) how one would do so. In multiple comparisons each additional comparison introduces a new group with independent data; in sequential hypothesis testing each successive test adds a small amount of additional data to each group so all results are conditional. Could you elaborate or provide a link?
> I had to seriously question the author's understanding of statistics at this point.
I think you may want to start the questioning closer to home.
Early stopping is fine as long as the test has been designed with the possibility of early stopping in mind and this possibility has been factored into the p-value formulation.
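A minimal sketch of what "factored into the p-value formulation" can look like, assuming normal data, five equally spaced interim looks, and a Pocock-style constant per-look threshold (all numbers are illustrative, not taken from any study discussed here):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def overall_alpha(per_look_alpha, looks=5, n_per_look=20, sims=5000):
        # Fraction of null (no-effect) simulations that "reject" at ANY interim look.
        false_positives = 0
        for _ in range(sims):
            a, b = np.empty(0), np.empty(0)
            for _ in range(looks):
                a = np.concatenate([a, rng.normal(0, 1, n_per_look)])
                b = np.concatenate([b, rng.normal(0, 1, n_per_look)])  # same distribution: the null is true
                if stats.ttest_ind(a, b).pvalue < per_look_alpha:
                    false_positives += 1
                    break
        return false_positives / sims

    print(overall_alpha(0.05))    # naive peeking at 0.05 every look: overall error well above 0.05
    print(overall_alpha(0.016))   # a stricter per-look threshold keeps the overall error near 0.05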
In lots of human studies, you can’t just stop at an arbitrary number of participants because you’ve counterbalanced manipulations to decorrelate potential confounders (e.g., which color stimulus is paired with reward, the order of trials).
The distinction is between ‘data peeking’, i.e. repeatedly checking the p-value you've obtained and stopping if it falls below 0.05, and repeating assays in the light of new information. Such new information can relate to the distribution of the values, the expected effect size, or any other parameter that you did not know at the outset of the study.
In ‘data peeking’, the flaw is that if an assay is repeated often enough, one will eventually get a result that deviates far from the mean result. This is a natural consequence of random variation in the data, i.e. not all results will be identical. It's the equivalent of getting six heads or tails in a row (which is very likely to happen at least once if you flip a coin 200 times), and then reporting your coin as biased.
Repeating an assay because the distribution of the data is not what you thought, or because the likely difference between means is smaller than you thought, is a valid approach.
Source: "Big little lies: a compendium and simulation of p-hacking strategies", Angelika M. Stefan and Felix D. Schönbrodt.
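For what it's worth, the six-in-a-row figure checks out by simulation; a quick sketch, with the streak length and flip count taken from the comment above and everything else illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    def has_streak(flips, length=6):
        # True if the sequence contains `length` identical outcomes in a row.
        run = 1
        for prev, cur in zip(flips, flips[1:]):
            run = run + 1 if cur == prev else 1
            if run >= length:
                return True
        return False

    sims = 20_000
    hits = sum(has_streak(rng.integers(0, 2, 200)) for _ in range(sims))
    print(hits / sims)  # roughly 0.96: a streak of six somewhere in 200 flips is expected, not a fluke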
Sounds like a variable-cost experiment. Each observation costs $x. Like an A/B split on Google Ads. Why keep paying for A when you know B is better already?
It’s more like you are supposed to toss 1000 times, but after 500 tosses you get a lucky streak of 5 heads in a row and decide to end the experiment and conclude that the coin is biased.
Google Optimize used to tell you to let an experiment run for one to two weeks (?), exactly because early strong results tend not to hold up in the long run.
> The alternative to divorce isn't perfect marriages, it is failed marriages that are inescapable.
I'm sure this has nothing to do with you, but by your comments in this thread, I'm reminded of a conversation I had with a friend on a bus one day. We were talking about the unfortunate tendency, these days, of people to shuffle their elderly parents off to nursing homes, rather than to support said parents in some sort of independent living. A nearby passenger jumped into our conversation to argue that there are situations in which the nursing home situation is for the best. Although we agreed with him, he seemed to dislike the fundamental idea of caring for one's elderly parents at all, and subsequently became quite heated.
I suspect that what's happening internally (at Microsoft) is that someone's leveraging your work towards their next promotion packet. They went to their manager with "hey I've got this great idea" and followed it up with your code a few weeks later. Of course, this only works if they claim they were "inspired" by Spegel to "write their own code".
> I suspect that what's happening internally (at Microsoft) is that someone's leveraging your work towards their next promotion packet.
It just so happens that the Microsoft engineer who originally changed the license in GitHub went from Senior to Principal engineer at Microsoft in the past two months (according to LinkedIn). So you probably aren't far off.
There is definitely a type of person who cheats, lies, throws people/teams under the bus, breaks the rules, and cuts corners to get ahead. The ones who are able to not get caught are rewarded.
This is not only a software phenomenon; it shows up in almost all aspects of life.
I wonder if there is any system in place that would make this backfire rapidly if it could be proved on some level. Unfortunately, the world needs examples and consequences before anything changes. If this worked for this particular engineer, others will follow and attempt the same. It will become the norm at big corps.
Causing a legal shitstorm is most likely not a sustainable way to get ahead at big corps.
If this is what happened, I suspect Microsoft will drop this person even quicker than a hot potato, and even quicker than if they had told them to rewrite it from scratch and the person had just taken a few shortcuts too many (which would be my guess as to what actually happened).
If they wanted to fork it, they could - just keep the attribution and be done with it. The fact that they tried to rewrite it suggests that someone wanted it to be legally not a copy.
The commit histories for the LICENSE files in the two repositories are rather interesting. The original author placed a single copyright notice in that file. Microsoft, on the other hand, published it with their own copyright notice and an Apache 2.0 license in place of the original copyright notice and MIT license. They also put "Copyright Microsoft" and "License: Apache 2.0" headers on all files. They then changed the Apache 2.0 license to MIT, but left their copyright notice in place of the original copyright notice in LICENSE.
Unless they forked a very early version that did not even have the LICENSE file, such that they never removed the original notice, this looks like copyright infringement to me. That said, I am not a lawyer.
What does "chore" mean in this context? Is the license just leftover from some MS open source template? If so there is perhaps some leeway, and the author maybe just didn't realize he needed to use the original MIT license file including the notices and not just a template one grabbed from the internet.
Any other explanation for such a "relicensing" would be extremely worrisome.
"chore" just means the type of change; as opposed to a fix, a feature, refactoring, there are some things that you have to do in the repo that can be called "chores".
Right. It derives from the idea that programmers are supposed to find "solving interesting problems" pleasant. On the other hand, boring, repetitive tasks are called "chores".
Some organizations strongly encourage marking all commits as one of a list of categories such as "feature/fix/chore/...". The tags are then bound to lose all meaning (literal or figurative) very soon.
Unless there was some "conspiracy" to violate the license (my original comment was an attempt at playfully hinting at that possibility, though I don't find it very likely), I'm sure the person who wrote that commit message thought about it for less than three seconds.
That was my initial guess as well. I am glad that the author chose to take the high ground instead of naming and shaming the people behind this egregious act.
It might just be a decision to own the code since it probably ends up in production: e.g. run CodeQL and other tools to scan it, have controlled releases, and limit access to the repo. They might have had some other things to change and did not want to go through the original repo, with unpredictable timelines from the repo owner. A fork is a logical step for a company.
> So I mean, you probably don't want to have any leaks or weak stitches in your uterus transplant...
With this sort of surgery, they wouldn't be cutting into the uterus (womb) itself when extracting it from the donor, but instead will cut around it to remove it, along with some very essential plumbing. The receiving mum will also be on industrial-strength immune suppressants anyway.
Where you DO have to worry about leaks and weak stitches is with said plumbing (uterine arteries and veins) -- they have to support virtual firehoses of blood through the duration of pregnancy, and their damage is one reason why a delivery can go south very, very quickly. Obstetric medicine is definitely a high-risk sport, which is why their malpractice insurance rates are head and shoulders above any other medical specialty. But I digress...
The title in HN ("(Any) 8-hour time-restricted-eating window effective for weight loss") is heavily editorialized from that of the NIH blurb ("Timeframe of 8-hour restricted eating irrelevant to weight loss"), but actually better reflects the findings of the actual paper ([1], unfortunately paywalled). They found that people who fasted for 16 straight hours a day lost (a little bit) more weight over 12 weeks than those who followed a Mediterranean diet. However, the weight loss didn't represent a loss of visceral fat (around the abdominal organs, fat which is more likely to be associated with diabetes and cardiovascular disease) and so the essential finding was that the time-restricted fasting made no difference.
Fat distribution, including subcutaneous vs visceral, has very clear racial/ethnic genetic associations, not to mention sex. East Asian and especially South Asian groups skew much more toward visceral fat, while European and especially African groups skew toward subcutaneous fat. Beyond calories in/calories out, generalized advice in this context might not be as helpful on an individual basis as with other health matters. In the context of diet & weight things are already complicated, but at least in this area we know why and can more easily predict how one person's body is likely to respond vs another. (Though, it might just come down to some ethnic groups having to put in a lot more effort--e.g. much greater reduction in overall weight--than others for the same reduction in visceral fat.)
Visceral fat has long-term memory, and it is also last in line to go. So the diet mentioned in the study may not have started reducing visceral fat at all…
And I forgot: you have to exercise (HIIT); a calorie deficit is not enough.
Forget about Ozempic and other drugs; they are good for people with diabetes. And you have to use them for the rest of your life, otherwise there is a yo-yo effect.
> However, the weight loss didn't represent a loss of visceral fat (around the abdominal organs, fat which is more likely to be associated with diabetes and cardiovascular disease) and so the essential finding was that the time-restricted fasting made no difference.
You're making a bit of a leap with "made no difference."
It's well-known that the body "holds on to" visceral fat in many cases, i.e. in order to reduce visceral fat, we first have to lose all the other excess fat. Which the TRE diet achieved: 5-7 pounds in 12 weeks is no small feat!
> You're making a bit of a leap with "made no difference."
I was paraphrasing the results of the study, which was designed specifically to see if fasting would reduce visceral fat as compared to a non-fasting regimen. If you read the abstract I cited, you'll see that there's not even any mention of overall body weight in the abstract -- that finding is buried in a figure of the paper, and mentioned basically in passing.
As for losing visceral fat versus other fat, that's partially true, but reality is a little bit more complex than that. Two people with the same 20% body fat can have radically different proportions of visceral and subcutaneous (under the skin) fat, and it's the person with more visceral fat who is at risk. This is why you have studies like this one, designed to find ways to target visceral fat.
those read the same to me, to be fair; although the important bit is "fasting for 16 consecutive hours", perhaps that gets to the point more effectively.
I've read that intermittent fasting has more "holistic" value than just losing a little bit more weight, specifically on blood sugar or insulin levels, as well as fat storage.
weight loss for health reasons should probably be coordinated with an expert who can look at your contemporary and historical blood tests. To be safest.
It has a bit to do with the change in diet and lifestyle that accompanies the eating window, but there is definitely something else at play as well.
The human body is an amazing machine and it has all sorts of abilities that we are unaware of. When you starve, your body starts shutting down non-essential things first, starts pulling nutrients from everywhere it can, and limits activity. Starvation has both a physical and mental element to it - both during the process and following it.
Intermittent fasting has been demonstrated to start a regenerative process in the body. It triggers cellular autophagy, which is kind of like running a cellular defrag.
There have been a lot of studies lately that look into the regenerative aspects of deep sleep following a serious injury - I suspect the same system is behind both things.
In response to the stress of not eating as usual, the body reacts. The mind does too. It sucks while you are starting it, but it's nice to know that you can skip a day of eating and be fine. After eating a big dinner and a good night's sleep you should have more energy and feel better for no real reason. I suspect this has to do with how we ate while we were evolving - life was just a cycle of involuntary intermittent fasting.
Unless you do strenuous activity all day - food is energy, and you will be worn out if you do too much. The food you first eat afterwards matters too!
Don't make a donut or highly processed/sweetened food the first thing you eat after fasting - you'll feel like you ran a marathon. Simple carbs and protein - rice and black beans, or oatmeal with seeds - is typically what I do.
Everyone is different tho - whatever works for you! All the best of luck, sorry this is apparently my rant for the day - better topic than normal
I've been following Jason Fung and "intermittent fasting" for six or seven years.
I notice that the specific wording "time-restricted eating" has gained popularity in the past couple of years, possibly because "time-restricted" is less of a red flag to the public than "fasting," which may bring up some emotional baggage.
The reason for renaming is just speculation on my part - what's clear is that the eating protocol is the same, only the wording is different.
when i hear "intermittent fasting" i think once or twice a week. Also it's not a silver bullet, but it does "shrink the stomach" a bit if you are mindful of the volume of food you consume to break the fast. better to, as your sibling said, break with a light meal with all the macronutrients and a decent chunk of micronutrients and vitamins than a lumberjack's breakfast. Or if you break in the evenings, the buffet is probably not ideal.
Maybe when people realize they don't need as much food as they thought - especially grasses (sugar, wheat, corn, specifically), they can "change their relationship with food."
or just ozempic i guess, what do i know.
hey i wanted to ninja and let you know i have no problem with what you or sibling said, at all. I'm just speaking to the topic, not trying to argue. I realize a skim makes it seem like i was disagreeing!
Both are very reasonable features, of course. Here are (some of) the real-world challenges to their implementation:
#1: Requires competence, and/or management that isn't too focused on velocity and features to listen to their engineers' warnings about exactly the sort of problem being discussed here.
#2: Many firmware updates explicitly and specifically want to strip away features that the hardware shipped with (by introducing DRM, paywalls, etc.), so see the comment about management above.