If you look at the scatter graphs, it's not terribly impressive. Lots of overlap, and most of the results show no difference. The one result that does show a large difference still has a lot of overlap in its scatter graph, and it looks to be driven by a few outliers that pull down the mean.
It's kind of an extraordinary claim, which requires extraordinary evidence. This isn't that.
What interests me more is why HN users keep posting this study, and voting it to the top of the home page.
Almost everyone on HN is interested in biology, and especially biohacking, but doesn't actually realise they have none of the expertise needed to judge claims coming from that field; the level of discourse on these topics is sometimes worse than what I spot on Reddit.
Pish Posh! My understanding of C pointers makes me one of the smartest people in the world! Why else would I have cleared the hurdle at $COMPANY during a tremendous bull market!
Clearly I can learn any discipline by mucking about for a few afternoons. What is a billion years of natural selection to my towering intellect?
To be lovingly snarky: HN in aggregate is incredibly skeptical of academic research, claiming reproducibility crises everywhere, misalignment of incentives, brutal working environments whose results can't be trusted, only in mice, etc.
Unless it either confirms their existing beliefs, or is so technical that they don't understand it. Then the research must be ok :)
I concur; the graphs didn't show much difference to me (but I was on my phone, so I couldn't be certain :-) The study seems impressive from the title, but reading it, not so much. The Alzheimer's patients in the study are older, more male, and less educated than the controls, so it seems quite likely that they eat worse, which would explain the different microbiota. That could mess up the mice in multiple ways. (Seriously, the mice that receive the Alzheimer's microbiota could simply have a stomach ache and do slightly worse on the tests.)
As for why there is so much interest on HN, I've noticed that Alzheimer's is one of the topics that is overly popular on HN, along with category theory, Lisp, and problems with Tesla, to name a few.
Not really weighing in one way or another, but I do not see scatter plots in the paper, other than one PCA plot. And that was PCA, not LDA: PCA never sees the group labels, so there's no strong reason to expect point separation there in the first place. (Though I don't know the diversity metric they're using, so feel free to ignore my commentary.)
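To illustrate the PCA-vs-LDA point, a toy sketch (sklearn's iris data as a stand-in; nothing here is from the paper): LDA is given the labels and maximizes between-group separation, PCA isn't and doesn't, so an overlapping PCA plot says little about whether the groups are actually separable.

    # PCA vs. LDA on labeled toy data (sklearn's iris as a stand-in dataset;
    # nothing here comes from the paper itself).
    # PCA maximizes variance and never sees the group labels; LDA explicitly
    # maximizes separation between the labeled groups.
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X, y = load_iris(return_X_y=True)

    X_pca = PCA(n_components=2).fit_transform(X)                            # unsupervised
    X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)  # supervised

    # X_lda will typically show cleaner class separation than X_pca,
    # precisely because LDA was given y and PCA was not.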
Violin plots always look like they overlap. I think seaborn's split=True should be pushed a bit harder.
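What I mean, roughly, using seaborn's bundled "tips" data as a stand-in for a study's two groups:

    import matplotlib.pyplot as plt
    import seaborn as sns

    tips = sns.load_dataset("tips")

    # split=True draws the two hue levels as half-violins back to back,
    # which makes between-group differences much easier to see than two
    # fully overlapping violins. Requires a hue variable with two levels.
    sns.violinplot(data=tips, x="day", y="total_bill",
                   hue="smoker", split=True)
    plt.show()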
This field is littered with studies like this, measuring hundreds of different outcomes and then presenting the statistical noise as positive findings.
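To make the multiple-comparisons point concrete, a back-of-the-envelope simulation (purely illustrative; the numbers have nothing to do with this particular study): test 200 outcomes where the true effect is always zero, and about 5% will come out "significant" at p < 0.05 anyway.

    # Simulate many null outcomes and count spurious "hits" at p < 0.05.
    # Illustrative only; parameters are made up, not taken from the study.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_outcomes, n_per_group = 200, 25

    false_positives = 0
    for _ in range(n_outcomes):
        a = rng.normal(size=n_per_group)  # group A, no real effect
        b = rng.normal(size=n_per_group)  # group B, same distribution
        _, p = stats.ttest_ind(a, b)
        false_positives += p < 0.05

    print(f"{false_positives} of {n_outcomes} null outcomes were 'significant'")
    # Expect roughly 10 (5% of 200) purely by chance.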