For those not in Academia, it's worth noting that when you get to his level, it's more like being a VP or senior leader. Ultimately, yes you are responsible for the quality of output of your team, but you are absolutely not looking in detail at every paper, more the ideation and focus of the department.
He should be grilling his assistant profs / research fellows to get their act together and raise the bar, but this doesn't show malfeasance.
It doesn't mean we shouldn't work to fix the systemic issue, though. It may not be his "fault", but how do we improve our system as a society so we put ourselves on an upward trajectory, not the seemingly downward one we've been on lately?
I'm glad the GP pointed this out, because nobody talks about it. The "PI" of a lab is a manager, so these issues are like an accountant embezzling money without the manager knowing. Maybe the manager should have put better processes and monitoring in place, but they were trusting their team members to behave properly, which is not unreasonable IMO.
The real issue is that the people who conduct the fraud are usually grad students or postdocs, whose entire future depends on the success of their research project. Fake results are pretty much guaranteed.
Think about it this way: Imagine you have a lab group with 10 PhD students, each with their own hypothesis to investigate (e.g. intervention A will reduce the rates of disease B in mice). What are the odds that all 10 students will prove their hypothesis and generate publishable results? No way it's 10/10 obviously...it's more like 5/10. So what is supposed to happen to those 5 students that were tasked with investigating the bad hypotheses? Our current system implicitly penalizes these students to the extent that their careers in academia are over, and even earning their PhD is not certain. BTW, I know I am being overly general here, and the student will likely have several parallel projects on-going, can pivot to other things, etc, but hopefully my general point is clear.
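The back-of-envelope arithmetic behind that 10-student example can be sketched quickly. The 50% base rate is of course a made-up illustration, not data:

```python
# Hypothetical: each of 10 PhD hypotheses independently has a 50% chance
# of panning out (the "more like 5/10" base rate from the example above).
p_true = 0.5
n_students = 10

# Probability that all 10 students get a publishable positive result:
p_all_ten = p_true ** n_students
print(p_all_ten)  # 0.0009765625, i.e. about 1 in 1000

# Expected number of students stuck with a "failed" hypothesis:
expected_failures = n_students * (1 - p_true)
print(expected_failures)  # 5.0
```

Even with generous assumptions, a lab where every student's hypothesis works out is a near-impossibility, so the system is guaranteed to produce "failed" students.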
It is not unusual, and indeed it is what should be expected, for honors to be accompanied by liabilities.
Ambitious PI's want bigger labs that lead to the recruitment of better students, who then produce more impactful papers, which then support the demand for more funding. It is a positive reinforcement cycle that eventually leads to bigger, better, and more popular labs. Those are the honors.
The liability is that if your name is in the article as a senior co-author, you are just as responsible as the first author for errors or fraudulent research. The senior PI's actual contribution should not matter, their name is there, the publication is used to support their career, they recruited the students or postdocs.
> you are just as responsible as the first author for errors or fraudulent research
I know what you're trying to say, but I think you're making it too black-and-white. There are two nuances I'd like to point out: Firstly the senior author did not actually perpetrate the fraud...this has to mean something when assessing blame I think. Secondly, the senior authors do not really have the ability to filter out fraud, assuming it's done cleverly. What can they do aside from reading the drafts and scrutinizing the data/methods/interpretation? Are they expected to have a team of shadow PhDs doing the same experiments to ensure reproducibility?
No doubt some PIs create an environment that encourages fraud, and that's a problem. But the point I'm trying to make is that if we want to solve the problem of scientific fraud we need to be honest about the source of the problem. In my opinion, it's the fact that a student's entire future is wholly dependent on a good result. The senior author already has a job, probably tenure, and plenty of other projects on the go, so one failed project is not a problem. The cost of failure to the student on the other hand is essentially infinite!
> What can they do aside from reading the drafts and scrutinizing the data/methods/interpretation?
You would be surprised how few of the big labs' PIs even do that. And since a big lab, say in biology, can send out 40–50 papers a year, there is no time for the PI to think deeply about hypotheses, methods, or data collection. But having a big lab is a decision, as I wrote in my previous comment: honors/grants and liabilities.
> In my opinion, it's the fact that a student's entire future is wholly dependent on a good result.
That's very true, but there is also a thing called personal responsibility. Any non-violent "fraud", any "criminal", has some reasonable motivation behind their actions. But committing fraud is not an inevitability, and weakening punishment out of sympathy for those motivations effectively punishes the people who, loosely speaking, behave properly.
Years ago, when I was doing academic research, I asked a colleague of mine if they would change some of their research results if the fraud (a) was never discovered and had no general consequences, (b) led to a publication in Science, Nature, Cell, etc. that would semi-guarantee a tenure-track position, and with that, "bread on the table" for the family, the kids, the aging parents.
They said they would never do that, but was it true for them? Would it be true for me?
Since the question is legitimate, strong punishment is needed to reduce the occurrence of fraud in research.
And since there are dozens of good applicants for every available tenure-track position, it is natural that a good result will make the difference between having a professional life in academia or not. But is it not the same, with the kind of "good result" depending on the field, for all those fields in which there are many more participants than "winners"? An immediate parallel can be made with doping in sports.
Your point is clear and extremely important. Edison supposedly once said, "I didn't fail. I just discovered 99 ways not to build a light bulb." Or something like that. Among those 99 failures were 20 super meaningful discoveries. In those failures the world's understanding of material science advanced in ways that affected a million later research projects.
A PhD candidate who can prove a hypothesis wrong should often have their work valued as much as one who proved the opposite.
But consider something like the invention of Paxos. If you leave out one small piece, you fail. All that time and effort seems wasted. You haven't proved anything true or false. You've just failed. But if you've documented your failure sufficiently, somebody might come behind you and fix that one little piece you got wrong.
One of the problems with our current system is that three years or ten years of research never gets published or properly documented for posterity because it didn't succeed. Even failures should be written up and packaged for the next grant to extend the exploration. There needs to be some reward for doing that packaging. Maybe we can call it a PhaD (almost PhD). Do you award a PhD to those who take up their own or somebody else's PhaD and complete it successfully?
I had a bit of a eureka on this subject this afternoon: when looking at scientific fraud and who to blame, we (as a society) tend to focus on who stands to gain if the fraud is successful, but instead we should look at who stands to lose the most if the fraud is caught.
Let me explain via a 2x2 matrix (which I highly doubt will render properly, but here goes):
Actor     | Fraud is successful | Fraud is caught
----------|---------------------|----------------
Professor | Scenario A          | Scenario B
Student   | Scenario C          | Scenario D
Scenario A: If a fraud is successful, the senior author gets a small benefit in the form of a slight raise, incremental increase in success rate of next grant, maybe an award, some endorphins from the praise. Very minor actually.
Scenario B: If the fraud is caught the senior author's career could be in shambles, like resigning from their tenured position, losing investors in spin-offs, humiliation, etc. They have a lot to lose.
Scenario C: In the event of a successful fraud, the student stands to gain a lot in the form of job prospects, future income, and generally accomplishing their life's ambitions. There is a huge payoff for the student in this scenario.
Scenario D: If the fraud is caught, the student's career in academia is over and they have wasted 3-4 years of their life, but that is the same outcome they would face by honestly reporting a bad result. The student has nothing to lose!
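The asymmetry can be made concrete with a toy expected-value sketch. All the numbers here are made up purely for illustration; only the relative magnitudes matter:

```python
def expected_value(gain_if_successful, loss_if_caught, p_caught):
    """Expected payoff of committing fraud, relative to not committing it."""
    return (1 - p_caught) * gain_if_successful - p_caught * loss_if_caught

P_CAUGHT = 0.1  # hypothetical detection rate

# Hypothetical payoffs in arbitrary "career units" (illustrative only):
# the professor gains little (Scenario A) and loses a lot (Scenario B);
# the student gains a lot (Scenario C) and loses almost nothing extra
# (Scenario D), since a failed project already ends their academic career.
professor = expected_value(gain_if_successful=1, loss_if_caught=50, p_caught=P_CAUGHT)
student = expected_value(gain_if_successful=50, loss_if_caught=1, p_caught=P_CAUGHT)

print(professor)  # -4.1: fraud is a losing bet for the PI
print(student)    # 44.9: fraud is a "rational" bet for the student
```

With these made-up payoffs, fraud is a clearly losing bet for the PI and a clearly winning one for the student, which is the whole point of the matrix.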
Worth remembering that Linus Pauling went off the deep end on Vitamin C. Not that he faked stuff, but that even very smart people get sucked into weird science sometimes.
So leaping from this to "Semenza is emperor's-new-clothes" is pushing too far. If he's moved into a space where marginal behaviour around research is normalised, that's really bad, but it doesn't have to mean his Nobel wasn't warranted; that depends on other questions. Questions which might still need to be asked, but this isn't the smoking gun. It's more a reason why asking those questions might become more important.
Einstein's discovery of relativity is something everybody takes for granted now, but think about how exceptionally ridiculous the notion is that light travels at the same speed from all perspectives; or how the universe will bend over backwards, including having people 'advance through time' at different rates or literally changing the physical distance between things, in order to maintain this. It's like some sort of simulation filled with spaghetti code, yet it's our real universe and has been as 'confirmed' as anything can be in science.
But... imagine it wasn't. If you had a 'logical and normal' understanding of the universe, and heard of somebody researching something with these sort of implications, then a normal person would probably think they had simply lost their mind. Calling it just "weird" would be a dramatic understatement.
We always have, and probably always will have, an arrogance of the present. It's explained by a tautology - what we think is right, is what we think is right. Or otherwise, we wouldn't think in such a way. Yet the entirety of human history shows that sooner or later, much if not most of what we think we know to be true, will be proven to be simply wrong. Sometimes it will be in the 'needs refinement' bucket, but much of it will also just end up in the completely and fundamentally wrong bucket. And the scientists willing to indulge 'weird science' are those who will take us there, because you're not just cleanly stepping from one branch to another A->B->C, but just completely scrapping the tree and starting with an entirely new root node.
Einstein did not discover that the speed of light was fixed. This was a consequence of Maxwell's research on electromagnetism in the late 19th C.
It is entirely reasonable for any physicist (indeed, Maxwell himself could've done it with enough time), to work through the consequences of a non-relativistic equation for the speed of light.
This approach is routine in physics, and there's nothing in it that would appear crazy or unusual. Physicists do not simply pluck crazy ideas from their imagination and, from nothing, deduce possible consequences. This is a myth, and insofar as people do that, they're cranks.
The imagination of a physicist is heavily guided by the totality of physics that has been developed. Einstein did not simply say, "What if the speed of light was fixed?"
He said, "Given we have good reasons to suppose it is, what are the consequences for relativity?"
> Einstein did not discover that the speed of light was fixed. This was a consequence of Maxwell's research
Maxwell certainly did discover that the speed of light was fixed.
But he did not discover that it was fixed for all observers. It was presumed to be fixed relative to some reference medium that permeated the universe.
There were 40 years of experiments after Maxwell figured out the speed of light where people tried to show how the aether worked and how the Earth moved through it, before Einstein "discovered" the idea that the speed of light was fixed relative to all observers rather than relative to the universe.
Which is deeply weird, if you live in a universe where nothing like that has ever been discovered before.
> There were 40 years of experiments after Maxwell figured out the speed of light where people tried to show how the aether worked and how the earth moved through it, before Einstein “discovered” the idea that the speed of light was fixed relative to all observers rather than relative to the universe.
I'd add a "not quite" to you as well! The Michelson-Morley experiment is fabulously interesting. There's a famous quote from Michelson from 1894:
"...It seems probable that most of the grand underlying principles [of physics] have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice. It is here that the science of measurement shows its importance — where quantitative work is more to be desired than qualitative work. An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals."
Note that quote is from 7 years after his experiment. Michelson did not believe his own result! The idea that what he had discovered was correct was so utterly weird and difficult to understand that the possibility of it being true simply did not occur to him, or to almost anybody else for that matter! He actually thought that his failure to measure a difference in the speed of light was down to instrumentation or precision, and he repeatedly pursued various efforts to correct his "mistake", which was not a mistake at all!
Basically he showed that (from the perspective of the time) 2+2=5. Naturally he rejected this without even really considering it, while Einstein not only accepted that it might be true, but then spent years working out an explanation for why! Had Einstein been wrong, it would have looked like the odd behavior of a man battling with his sanity, even more so given that much of his key research came while he was working as a patent inspector, no university of the time being willing to take him on as staff!
All you're saying is that it's Maxwell plus Michelson–Morley (who showed there was no aether drift) that we should thank, more than Maxwell alone. That's true.
That is even more so my point: Einstein did not just say, "What if crazy-unprecedented-idea?"
Add more names if you like; that's only further evidence of my view.
Again, he didn't _discover_ it. He just _assumed it_ and worked out the consequences if it were true, which turned out to be _very weird_. Then it was confirmed through experiment that he accurately predicted the consequences of it.
No, experiments showing a fixed speed of light came first. Ex: 1887 https://en.wikipedia.org/wiki/Michelson–Morley_experiment It's a critical point people miss. Rather than coming up with his ideas in a vacuum, Albert Einstein was trying to explain results which seemed really counterintuitive.
He made many other predictions which were tested afterwards. People also repeated the speed-of-light tests in new ways, but the results were consistent with the earlier experiments.
- "Einstein did not discover that the speed of light was fixed. This was a consequence of Maxwell's research on electromagnetism in the late 19th C."
It's not enough, unless you explicitly insist that Maxwell's equations are invariant under coordinate transforms. This step is so obvious from the modern perspective that it's easy to overlook (the grandparent comment is insightful!) But pre-modern physicists thought it was the opposite which was intuitively obvious: that the constant "c" in Maxwell's equations implied the existence of a fixed frame in which alone those equations were valid, and that Maxwell's equations should "obviously" transform in some different way in which c is additive, like the speed of sound.
Sure; the Lorentz coordinate transformation falls directly out of Maxwell's equations, if, and only if, you add that particular coordinate-invariance condition.
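For reference, a minimal sketch of why c carries no observer in it: in vacuum (SI units), taking the curl of Faraday's law and substituting the Ampère–Maxwell law yields a wave equation whose propagation speed is built from two constants of nature:

```latex
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},
\qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\quad\Longrightarrow\quad
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2},
\qquad
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}
```

Nothing in μ₀ or ε₀ refers to the velocity of any source or observer, which is exactly why pre-relativistic physicists assumed the equations held only in the aether's rest frame.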
The speed of light in Maxwell's equations is not relative to the observer. That is itself basically the insight and founding principle of special relativity. The rest is just working out how to revise relativity to account for this invariant velocity, and that is fairly trivial at the special relativity level.
More novel, from Einstein, was GR, but many insights there could be found "in the air" amongst professional physicists; in particular, you could see GR as following from a formal elaboration of Mach's principle (itself, in many ways, just a reply to a thought experiment Newton had made centuries earlier).
The point is that "non-crank" physics doesn't have the crank-like "moments of genius" that people assume. It isn't just plucking ideas from the air. Those ideas are heavily embedded in thousands of years of reflection.
But special relativity? Yes, Henri Poincaré had already come up with a mathematically equivalent theory to special relativity. Einstein's is a much simpler theory by all means, but the original commenter talking about it like it came out of nowhere and people must have thought Einstein was crazy or had lost his mind... that's just a gross misunderstanding of history.
If anything, Einstein's theory of special relativity gained acceptance rather quickly because it basically converged to where physics was already heading.
> We always have, and probably always will have, an arrogance of the present. It's explained by a tautology - what we think is right, is what we think is right.
I think it's culture + semantics/semiotics. I see it occurring most frequently in the form "What is true, is what we believe (~'consensus') is true" (the word belief is usually not explicit though).
In practice, the word "is" has multiple meanings, one of them being "[It is my opinion that] X is true", and pointing out the presence of the implicit opinion part is culturally unacceptable, depending on the scenario. It's a lack of cultural distinction between the universe and reality; they are typically conflated in this era, in our geographical region. Personally, I think LLMs offer some compelling insight into the process underlying this phenomenon, and I bet this can be proven out.
I agree it's the scientists who take us to weird places that advance the needle (and something that's discouraged in academia, but that's a different conversation).
However, there's a big difference between proposing something exceptionally ridiculous and committing fraud that leads to retraction.
Couldn't it be that what makes (some) Nobel-prize-worthy discoveries possible is an openness with respect to "crazy" ideas, of which some small fraction actually turn out to be right, and thus lead to Nobel prizes?
So, I think these Nobel prize winners just continue as before: thinking about and exploring "crazy" ideas that might have a huge impact if they turned out to be right. It's just that the next time, as is by far the most common outcome, the crazy ideas won't be right.
I went deep down the Pauling Vitamin C rabbit hole once and got to the point of noticing that many studies not seeing effects are not actually giving people gram-megadoses, but mg-homeopathic ones. This story might not be as closed as some podcasters and other influencers pretend. Maybe it's not double nobel laureate Pauling who was so wrong that he has pretty much become a quack in popular knowledge, but the fields of nutrition and perhaps medical science that are shoddy.
In fact nutrition and medical science are quite well known to be some of the worst offenders when it comes to bad methods and scientific misconduct, particularly in the past few decades (as also shown by OPs link).
Also, like apparently many of those perpetuating the story, I got my initial opinion about the Vitamin C topic being quackery from Wikipedia, but I know better now than to generally trust it for medical topics, since it's quite well known that marketing departments of the pharmaceutical industry have a lot of time on their hands to write articles that benefit them. I personally burned myself around 2014 with a "safe & effective" local medical product promoted there in scientific-sounding terms, with all criticism erased or "debunked", and I have permanent eye damage now (there was a class action lawsuit a few years later).
I would say these people generally like to follow wherever curiosity leads them, without giving much thought to peer opinion, which is why they are the ones winning prizes for revolutionary discoveries. They are freer to do so once the prize is in, and this may lead them in any direction.
A milligram dose is not usually considered homeopathic. A 6C homeopathic dilution, which is on the less potent end of homeopathic medicines, is 1:10^12.
Now, a gram vs. a milligram is a 1:10^3 dilution, i.e. exactly 3X on the decimal scale, or between 1C and 2C on the centesimal scale, so it can be described on a homeopathic scale. But then again, full strength (0X or 0C) can also be put on that scale, so I don't think this is the interpretation you mean.
FWIW, "Mega-dose vitamin C in treatment of the common cold: a randomised controlled trial" at https://onlinelibrary.wiley.com/doi/abs/10.5694/j.1326-5377.... uses 1g and 3g doses and refers to the 30mg dose as "placebo". It's apparently hard to get the taste right with truly homeopathic doses.
> many studies not seeing effects are not actually giving people gram-megadoses
Pauling is seen as a quack when it comes to vitamin C because he claimed it could help treat all sort of things; for colds, for cancer, for AIDS treatment, for asthma, for mononucleosis, and for far, far more, as he uncritically lists every single positive connection to vitamin C in his 1987 book "How to live longer and feel better". https://archive.org/search?query=%22How+to+Live+Longer+and+F...
That makes it hard to know what studies you refer to, and certainly there are studies which did use gram-megadoses and failed to replicate or find support for Pauling's findings, like:
Anderson TW et al, in "Vitamin C and the common cold: a double-blind trial", Canadian Medical Association Journal (1972) used 1-gram megadoses, with 4-gram megadoses at the onset of colds. "It was found that in terms of the average number of colds and days of sickness per subject the vitamin group experienced less illness than the placebo group, but the differences were smaller than have been claimed and were statistically not significant". https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1940935/pdf/can...
Pauling's original study, with a smaller population and shorter time frame, suggested the benefits were far more significant, so should have been visible in that Canadian study, and that's only one of several >= 1 gram studies.
> Maybe it’s not double nobel laureate Pauling who was so wrong that he has pretty much become a quack in popular knowledge, but the fields of nutrition and perhaps medical science that are shoddy.
Do you think double-Nobel-Prize-winner Pauling was right that megadoses (10g) of vitamin C could treat cancer?
Oh, but wait - Pauling then said vitamin C only worked on cancer patients who had not already received chemotherapy.
Nope, still not the case: "High-Dose Vitamin C versus Placebo in the Treatment of Patients with Advanced Cancer Who Have Had No Prior Chemotherapy — A Randomized Double-Blind Comparison", https://www.nejm.org/doi/full/10.1056/nejm198501173120301 with 10 g daily.
When should other people stop listening to Pauling's claims?
So Pauling and Cameron did their own study to support the claim ... which was criticized for the lack of blinding and poor selection of controls. (ibid).
> it’s quite well known that marketing departments of the pharmaceutical industry have a lot of time on their hands to write those articles that benefit them
It's quite well known that people in the pharmaceutical industry, and their friends and family, also get colds, cancer, and more. Do you really think they are hiding a cure from their co-workers, friends, and family?
It's also well-known that the supplement industry makes billions of dollars, among other things, from actual homeopathy, and this money gives them lots of time to write those articles that benefit them.
> Pauling is seen as a quack when it comes to vitamin C
Pauling has been portrayed as a nobel-laureate-gone-quack / example of "Nobel disease" in pop science media many times. Often by gullible people who are not actually scientists but more science(tm) promoters like podcasters. They don't usually limit it to the Vitamin C advocacy, but seem to like telling a story of a highly intelligent person "gone quack".
> "Mega-dose vitamin C in treatment of the common cold: a randomised controlled trial"
I don't have access to this but will check later.
> Anderson TW et al, in "Vitamin C and the common cold: a double-blind trial",
I've seen this before and quickly checked again (don't remember everything). They describe and show in Tables II and III a 30% marked-as-significant reduction in confinement to house, so apparently the severity of relevant cold symptoms is indeed strongly decreased.
They say themselves that Pauling based his claims on studies showing 45% and 60% respectively (which you have not linked to for some reason). Even 30% is still well over the significance-and-usefulness threshold in my eyes at least, particularly if it comes from a study quite open about intending to "debunk" the perceived quackery. I would figure that the real number is somewhere in-between the advocates and these "debunkers".
(Btw, in the discussion they made a weird 70s-boomer claim that consuming four ounces of vegetable and fruit juice per day is sufficient to prevent Vitamin C deficiency, and that 30mg Vitamin C per day is the basic requirement. Also, that they could not (or did not try to) really eradicate the confound of other health measures like other supplements taken is one of the typical problems with nutritional studies.)
About the cancer claims (which I agree should be treated with utmost suspicion, just like any cancer treatment): While I don't have strong stakes in the game to either support or "debunk" it, less than 75 days does seem to be on the short side for a serious disease for me. I wonder how this compares to other cancer medication studies with more profit in the game.
> It's also well-known that the supplement industry makes billions of dollars, among other things, from actual homeopathy, and this money gives them lots of time to write those articles that benefit them.
The supplements industry spends much more of that time and money on fake reviews on Amazon and other social media marketing. Seems to be much more effective for their consumer base than a long-form wikipedia article. Since the profits of the pharmaceutical industry for newly developed patented products are much higher, I guess they have some more money on their hands to hire a few "scientists" to write convincing-sounding long-form articles for pay.
> It's quite well known that people in the pharmaceutical industry, and their friends and family, also get colds, cancer, and more. Do you really think they are hiding a cure from their co-workers, friends, and family?
People in the pharmaceutical industry's marketing departments are hired and paid to write supportive articles. I bet they are also made to believe that they are doing the right thing(tm).
I first noticed suspiciously professionally written "debunk" Wikipedia articles during the Séralini affair - whatever you want to think about it; the time frame in which highly polished professional articles popped up was remarkable. Even otherwise pro-industry centrist-conservative media here got suspicious and wrote about potentially industry-written wikipedia articles.
If you need any further convincing who writes WP when money is in the game, check out the article about the "well-recognized musician" Justin Bieber.
I'm not sure why this overall topic triggers people so much. A bit more on topic again: science has had very obvious issues with information overload for a long time, and unfortunately this allows bad players and COIs to exploit the system (recommended reading, also relevant to the OP's link: "Science Fictions" by Stuart Ritchie). These developments are out in the open for everyone willing to see them, particularly in recent years. This needs fixing, seriously; I hope you see it and agree, and think along for solutions (which will most likely be technological).
(The "homeopathic dosage" wasn't meant literally.)
> Pauling has been portrayed as a nobel-laureate-gone-quack
Oh, certainly. I wanted to emphasize that the quackery was only in regards to his views on vitamin C.
> in pop science media many times
To be clear, it's also told in the scientific literature by people with chemistry and medical training.
> so apparently the severity of relevant cold symptoms is indeed strongly decreased
My previous comment wasn't meant to be conclusive about the experiments that have been carried out, but to point out how experiments using >= 1 gram doses have been done.
I did this because you criticized studies which used sub-gram doses. I have no problems with that viewpoint, but since >= 1 gram studies exist, you surely need to address their conclusions.
I pointed to the Canadian paper because it was the earliest one I could find to investigate Pauling's claim.
> so I would figure that the real number is somewhere in-between the advocates and these "debunkers".
That's not how statistics works!
Yes, that paper shows that a couple of the findings were statistically significant ... but also remember the XKCD 882 on "green jelly beans linked to acne" https://www.explainxkcd.com/wiki/index.php/882:_Significant - if you test enough random data, you'll find statistical correlations due to random happenstance.
You need to run the experiment again and see if the signal is still present, otherwise you're chasing statistical noise.
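The multiple-comparisons point is easy to see numerically. A minimal sketch (a hypothetical simulation, not data from any of the papers): under the null hypothesis p-values are uniform on [0, 1], so a study reporting 20 comparisons at alpha = 0.05 has roughly a 64% chance of at least one spurious "significant" finding:

```python
import random

random.seed(0)
ALPHA = 0.05
N_TESTS = 20        # e.g. 20 jelly-bean colors, or 20 outcome measures
N_TRIALS = 100_000  # simulated "papers", each reporting N_TESTS comparisons

# Under the null hypothesis a p-value is uniform on [0, 1], so a single
# test comes up "significant" with probability ALPHA by chance alone.
papers_with_a_hit = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(N_TRIALS)
)
rate = papers_with_a_hit / N_TRIALS
print(rate)  # close to 1 - 0.95**20, about 0.64
```

So a lone significant subgroup result, like "confinement to house", is weak evidence until it replicates.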
Which the Canadians did the next year. See "Vitamin C and the common cold: a double-blind trial" at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1947567/pdf/can... where figure 1 shows no statistical difference in "days indoors" for 0.25, 1.0, and 2.0 grams taken prophylactically (or 4g and 8g taken therapeutically), contra Pauling's suggestion "on theoretical grounds that the beneficial effects of regular vitamin C supplementation should be proportional to the size of the daily dose."
So that's another >= 1 gram study, from the same people. From what I can tell from other summaries on the topic, there were dozens of these >= 1 gram studies by the 1990s or so.
Surely these are more relevant to your rabbit-hole exploration than the sub-milligram studies you complained about, yes?
> They say themselves that Pauling based his claims on studies showing 45% and 60% respectively (which you have not linked to for some reason).
I did not link to them because I wanted to understand your objections to the attempts at replication using >=1 gram of vitamin C/day, given that I know they exist, as I showed by demonstration. Both you and I know those publications exist.
Here's what I think is a copy of the original 1961 publication by Ritzel, https://www.mv.helsinki.fi/home/hemila/CC/Ritzel_1961_ch.pdf , "Kritische Beurteilung des Vitamins C als Prophylacticum und Therapeuticum der Erkältungskrankheiten" ("A critical assessment of vitamin C as a prophylactic and therapeutic for the common cold"). I lack the German to read it, but Tabelle 1 doesn't seem to match Tabelle 2. That is, Tabelle 1 says the number of Krankheitstage ("sick days") is 31 for the vitamin C group and 80 for the placebo group, but Tabelle 2, with the individual breakdown, says there were 42 Krankheitstage for the vitamin C group and 119 for the placebo group.
"He reported a reduction of 61 percent in the number of days of illness from upper respiratory infections and a reduction of 65 percent in the incidence of individual symptoms in the vitamin C group as compared with the placebo group."
That 61% is computed from Tabelle 1, "Anzahl Krankheitstage", at 31 days vs. 80 days for the placebo group: (80-31)/80 = 0.6125. But oddly, Tabelle 2's columns for "Anzahl Krankheitstage" add up to 42 and 119, respectively, so I think Tabelle 1 swapped the values for Krankheitstage and Einzelsymptome?! (Both are ~60% reductions, so this doesn't affect the conclusion.)
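To sanity-check the arithmetic, here's a scratch calculation using the table totals as quoted above (variable names are mine, not from the paper):

```python
# Scratch check of the reductions implied by Ritzel's two tables
# (totals as quoted above; variable names are mine).
tab1_vitc, tab1_placebo = 31, 80     # Tabelle 1, "Anzahl Krankheitstage"
tab2_vitc, tab2_placebo = 42, 119    # Tabelle 2 column sums

tab1_reduction = (tab1_placebo - tab1_vitc) / tab1_placebo  # 0.6125, i.e. the "61 percent"
tab2_reduction = (tab2_placebo - tab2_vitc) / tab2_placebo  # ~0.647, close to the "65 percent"

print(f"Tabelle 1: {tab1_reduction:.1%}, Tabelle 2: {tab2_reduction:.1%}")
```

That Tabelle 2's sums land near 65% — the figure quoted for individual symptoms — is consistent with the two values in Tabelle 1 having been swapped.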
Also, note the short time - the vitamin C was only delivered for two 5-day ski camps, and those with cold symptoms on the first day were excluded from the study. If 60% were replicable, and not a statistical fluke or flaw in the methodology, it should be easy to see in other studies.
Note also that earlier, on page 44, Pauling says that 200mg given to students at a girls' school in Ireland (they look like teenagers) reduced the severity and duration of colds ("Duration of the symptoms in catarrhal colds was reduced from 14 days to 8 days in the children receiving ascorbic acid."), so at this point he believes there is supporting evidence that 200mg (or perhaps 500mg scaled for adult body weight?) is enough to show a noticeable effect.
"Nothing can therefore be concluded about the relationship of ascorbic acid metabolism and the appearance of cold symptoms in the different experimental groups from the results of this trial. ... by modern standards of clinical trial methodology [Tyrrell's trials] could not be classified as a well conducted clinical trial on the relationship between development of the clinical features of the common cold and the administration of supplementary vitamin C ... Dr. Pauling did not provide this critical evidence necessary for support of his hypothesis about the relationship between the administration of supplementary vitamin C and reduction of the symptoms of the common cold" - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1795409/pdf/brm...
> About the cancer claims (which I agree should be treated with utmost suspicion, as for any medicine):
Does his "double nobel laureate" affect your interpretation of his views on vitamin C and colds more than your interpretation of his views on vitamin C and cancer?
Anyway, if Pauling had stopped with vitamin C and the common cold, I don't think he would have been seen as a quack. Eccentric, sure, but it was his doubling down to claim it also treats cancer, and his tripling down to promote it as a treatment for a host of other diseases, that made him a Nobel-laureate-gone-quack.
> People in the pharmaceutical industry's marketing departments
Okay, think back to the 1970s when Pauling promoted vitamin C for the treatment of the common cold.
What treatments did people use for the common cold? Aspirin was generic. Acetaminophen (Tylenol) was generic. Ibuprofen was under patent, but competed against the first two. I think pseudoephedrine was generic? ("first characterized in 1889" says Wikipedia).
I don't see much in the way of patent profits there ... and I still don't know.
Who produced and sold vitamin C? (Hint: a paper I mentioned earlier ends with "We are grateful to Dr. J. Y. Gareau of Hoffmann-La Roche Ltd. for supplying the vitamins and placebo tablets.")
So, what did the pharma industry have to lose by promoting 1g consumption of vitamin C every day, for prophylactic use against the common cold?
Why would pharma profits prevent the military from using an effective cold treatment?
Why would pharma profits prevent the Soviets from using an effective cold treatment?
Why would pharma profits prevent the national school systems in Europe from providing vitamin C megadoses to their students?
> I first noticed suspiciously professionally written "debunk" Wikipedia articles during the Séralini affair
I have no idea what that means. My initial understanding of Pauling's megadose idea predates the existence of Wikipedia. Séralini seems to be a post-2010 thing? How does that affect any of the earlier published experimental results showing that megadoses don't have the clear effect that Pauling claimed, and that megadoses come with some side effects?
> I'm not sure why this overall topic triggers so much.
Because there's oodles of published research over the last 50 years showing Pauling's ideas don't work the way Pauling said they would? Do you really want to claim "pharma conspiracy" for something so easily tested that groups around the world evaluated the possibility? Something that people like you could test for yourselves?
Who stands to profit from promoting Pauling's megadose hypothesis if it is actually false? And who promotes that idea the most?
Eugenics is "merely" deeply politically incorrect/blasphemous/unethical, but not a wrong crackpot theory per se. Obviously what works for plants and animals would work for humans too.
"Eugenics" is not simply a synonym for the selective breeding of humans, but for a certain type of selective breeding - one that depends very much on who gets to decide what "good" means in the Greek prefix "eu-".
For example, people do not use the term "eugenics" to describe the unintentional selective breeding of humans who maintain the childhood ability to produce lactase and thus can digest milk as an adult.
The term is also tainted by a long history of proponents and advocates wanting to apply the principles of selective breeding to non-inheritable traits, like "criminals", due to a naive understanding of genetics (eg, https://en.wikipedia.org/wiki/Eugenics#Objections_to_scienti...).
The phrase "what works for plants and animals" seems to be hiding a lot under the covers, like the number of gene lines killed off because they weren't profitable enough for the people doing the breeding. Even in domesticated plants there is an ongoing debate about decline and extinction of heirloom/landrace varieties, and consequential loss of genetic diversity, in the face of "what works" for industrialized agriculture in short-term/decade time-scales.
The "new eugenics" movement emphasizes how choice should be left to the parents, as an alternative to the history of the state deciding these matters. However, if we really are looking at "what works for plants and animals", we can see that optimizing for an individual doesn't always result in what benefits the group.
For example, you can breed an aggressive chicken, more willing to attack other chickens and take their food. The aggressive chicken will be bigger and heavier than the others, but if they are all that aggressive then they end up fighting each other, to the overall detriment of all.
Some of these options will result in a Prisoner's Dilemma, where if some parents choose trait X for their children then those children benefit (as defined by both the parents and children), but if all parents choose X then things become worse (again, as defined by both the parents and children).
Which leads us back to how politics and ethics are inherently connected to the "eu-"/"good" in the deliberate selective breeding of humans, and not simply "merely" connected.
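The Prisoner's Dilemma structure above can be sketched with a toy payoff model (all numbers made up purely for illustration, echoing the aggressive-chicken example):

```python
# Toy model of the dilemma above: choosing an "aggressive" trait X helps a
# child when few others have it, but mutual conflict among X-carriers hurts
# everyone as X becomes common. All coefficients are invented for illustration.
def payoff(chooses_x: bool, fraction_with_x: float) -> float:
    base = 1.0
    if chooses_x:
        # Advantage over non-X peers, shrinking as X becomes common...
        base += 0.5 * (1.0 - fraction_with_x)
    # ...while conflict among X-carriers drags everyone down.
    base -= 0.8 * fraction_with_x
    return base

# One family defecting to X in an X-free population does better...
assert payoff(True, 0.0) > payoff(False, 0.0)
# ...but a population where everyone has X is worse off than one where no one does.
assert payoff(True, 1.0) < payoff(False, 0.0)
```

Choosing X is the dominant strategy at any prevalence, yet the all-X outcome is worse for everyone: exactly the dilemma shape.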
In many ways though, I think it's a strength that such a prize cannot be revoked, because it emphasizes the zeitgeist of a time. I think it can help provide not only some humility to us in a different era, but also perhaps help us learn more from the past.
As an interesting factoid, lobotomy was practiced at all levels of society at the time. Even JFK's sister was secretly lobotomized [1]. It was covered up for years as it went about as well as one would expect a lobotomy to go.
What a story, I never knew about this. Again and again these public families with public images doing the most toxic things behind closed doors. Thanks for sharing this.
Only half joking, but have you ever had the issue on a Mac where you copy something and it randomly fails to copy, so when you paste, it pastes the previous copy?
Or if you have 20 files in a folder, named IMG0887.jpg, IMG0897.jpg, etc., and you need to open them one after the other with an open-file dialog, it's easy to pick the wrong one. A scientist isn't necessarily IT smart, particularly in medicine. Not saying that's what happened though.
Hence I'm hammering home the importance of good data handling and a clearly identified file structure to the (chemistry) grad students working in the lab with me.
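One cheap form of the bookkeeping meant here is a checksum manifest: record a hash of every data file so accidental duplicates or mixed-up files (the IMG0887.jpg vs. IMG0897.jpg scenario) are caught early. A minimal sketch, assuming a flat data directory; the layout and function names are hypothetical, not anyone's actual lab workflow:

```python
# Minimal sketch: build a checksum manifest for a data directory so that
# byte-identical files (e.g. the same blot image saved under two names)
# are flagged instead of silently reused.
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(data_dir: Path) -> dict[str, str]:
    """Map each filename to its checksum, warning about duplicate contents."""
    manifest: dict[str, str] = {}
    seen: dict[str, str] = {}
    for f in sorted(data_dir.glob("*")):
        if f.is_file():
            h = checksum(f)
            if h in seen:
                print(f"WARNING: {f.name} is byte-identical to {seen[h]}")
            seen[h] = f.name
            manifest[f.name] = h
    return manifest
```

Committing the manifest alongside the data also gives a paper trail: if a figure's source file later changes, the hash no longer matches.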
No it's not just you. There was an Apple ticket with thousands of 'me too', but they said it does not exist and deleted the whole thing about 10 years ago.
So my assumption is the figures must be vital, data-filled figures. If they're just diagrammatic ones (fig 1 type) that are similar among articles from the same group on the same line of work, then it's probably plagiarism by the letter of the law but not the spirit of the law.
Publish-or-perish and other academic pressures are hardly unique to for-profit or even non-profit universities; even government institutions have similar incentive models for tenure and other advancement. It seems pretty universal?
Do you know of a study or citation showing academic research quality/quantity problems occurring at a higher rate in for-profit or non-profit private institutions?
He works at Johns Hopkins, which is a private university that works closely with a health system called Johns Hopkins and the Johns Hopkins hospital (the medical school). When you say "for profit", be aware that they definitely take money from the government and patients and use it to develop new intellectual property that generates large amounts of money for them (although I guess the "profit" from that doesn't benefit the shareholders).
These "non-profits" have multi-billion dollar budgets and are highly "profitable" depending on how you like to define it. https://www.nytimes.com/2020/02/20/opinion/nonprofit-hospita...
In these cases, the non-profits are given tax incentives to provide a certain amount of community benefit.
For what it's worth, I sort of read that as "universities incentivized to maximize profit or income", not literally formally for-profit universities. Sort of a biting remark about typical university motivations, not literal.
Even if the poster didn't intend it that way, I think it could easily be read that way.
This is true but a critical difference is that STEM is a lot more falsifiable.
Postmodern deconstruction lends itself to a million interpretations, but biological testing either works or doesn't, and inconclusive generally means it doesn't.
Keeping track of retractions is good, but I'm on the fence as to whether the community should be making it a shameful act (like committing a crime), or a positive one (like apologising, and admitting mistakes).
We want scientists to have incentives to not deliberately commit fraud, but also feel that they can rectify mistakes.
Having retraction be shameful might do the former, but will harm the latter.
I don't have an answer off the top of my head, this is just my observation on this.
Honorable retractions, as a result of honest errors, happen all the time in science and, while unfortunate, are not a huge deal.
Much different are retractions based on research fraud. Retraction Watch provides a valuable service in tracking all types of retractions, and in warning the community about unethical or predatory journals.
Retraction Watch provides information. They do an excellent job in following up on incidents and supplying the complete picture. It’s always clear from their reports whether an author was shady or simply made a mistake—and this is usually clear from the author’s behavior, as well.
Of course neither they nor anyone else can be held responsible for unjustified assumptions of the mob, except the members of the mob.
There are a lot of "bad science" problems, and people tend to get them mixed up.
Here's a list of all the things I can think of, roughly ordered by my opinion of seriousness:
1. Fraudulent data in real publications in real journals, even sometimes the most prestigious journals like Science, Nature, Cell. (cases in article are this situation.) Worst of all is when it has a real impact on policy, or leads to bad investment in a new technology that can't work.
2. P-hacking, questionable research practices, etc. without fraud, by "legitimate" academics. This happens everywhere but seems to be a bigger problem in social sciences especially branches of psychology. Real-world impact has been less severe but does cause problems and wasted money sometimes. The worst impact is probably that it causes some young researchers to waste their careers on dead-end lines of research.
3. Really badly done studies to legitimize dubious supplements. A subcase of 2. I actually rate these as less severe, because the average consumer of dubious supplements is not reading these pointless studies, and supplements are basically unregulated, so I don't think getting rid of these studies would make much difference.
4. Completely fake garbage papers in paper mill journals. Not just fake data but outright total nonsense. These are ignored by most scientists, so less severe, but are a concern for the people who fund scientists, and they can cause real trouble if they end up in a meta-analysis.
I only write this list because so much of the discussion of fraud and bad research seems to conflate all of them together. But they're all distinct.
I had mentally accepted p-hacking and paper mill garbage, but seeing Nobel-prize-winning labs publish photoshopped blots in top journals has shaken my faith in our scientific institutions far more.
5. HARKing: "hypothesizing after the results are known" is also a problem: producing even "legitimate" papers by discarding the original hypothesis after negative findings and coming up with a new one that fits the data. It's similar to p-hacking and also skews science into "promising" dead ends.
> but seeing Nobel-prize-winning labs publish photoshopped blots
And they were caught because they edited poorly, in ways that could be detected automatically or by human eyes. Think about well-versed image manipulators or generative AI...
I think this erosion of trust is the worst part; AI can produce undetectable manipulations, and that will probably come.
"A Nobel prize-winning researcher whose publications have come under scrutiny has retracted his 10th paper for issues with the data and images.
Gregg Semenza, a professor of genetic medicine and director of the vascular program at Johns Hopkins' Institute for Cell Engineering in Baltimore, shared the 2019 Nobel prize in physiology or medicine for "discoveries of how cells sense and adapt to oxygen availability.""