Regarding "literally everyone does it" (fishing for results, removing negative ones, and so on): it's very true, and much more prevalent than what people think. Even at top-level universities with big shots. Especially at top-level universities with big shots. I know someone, upon entering a lab to do electronic microscopy, who found out that the image data her lab had written multiple papers on very high-impact journals about (we're talking Nature/Science here) could have easily been generated by dead cells instead of living ones, i.e. they had been staring at artifacts for years to draw conclusions on protein content, mechanisms, and so on. Of course, upon learning this, her boss quickly set upon contacting the journals in question to issue an apology and retract the relevant papers, thanked her for noticing the mistake, and everything went back in order. Just kidding, he told her to shut up if she wanted to keep her job.
Interesting fact: the majority of the impact factor of big-name journals (Nature, Science, Cell and co.) comes from a few select high-impact papers. Most of the rest goes relatively uncited. The reason is that most of these papers are not astounding but simply make it through the editorial system because the big shots in charge of the labs know the editors. Not that these papers are devoid of merit (though I know my fair share of Nat/Sci/Cell papers that happen to be 80-100% bunk), but they wouldn't have made it into such highly selective journals otherwise. These journals also happen to have an unusually high retraction rate. I have friends working in the aforementioned big-shot labs who are jaded to the point that they just dismiss most things coming out of Science.
And that's just my area and that of people I know. Things go very awry in medicine or anything related to cancer where the pressure is so high, the field so competitive, the scientific questions so complicated, the experiments so hard to reproduce and the incentives to publish so exacerbated (note that medical journals generally have the highest impact factors). And I can't even begin to imagine what goes on in psychology.
When I was in grad school I was doing graph theoretic analysis of human metabolic networks. I had some really interesting and promising preliminary results about the structure of various metabolic pathways and had been invited to present my unpublished work in a multi-departmental lecture. I was making some new visualizations for my talk and spotted something odd, which led me to a bug in my source code that had reversed the direction of several steps in many of the pathways. I was mortified . . . I ran some preliminary tests on the corrected code and saw that many of my results were completely in error. I wouldn't know for sure until I could get several days of cluster time, but I had proof that my last year of work was completely bogus.
I stayed up all night confirming the new code and the extent of my invalidated results and brought this to my advisor the morning before the lecture. He told me to shut up and present it anyways. Being a young idealist I argued with him about the nature of science and the search for truth . . . I refused to go along with it and present something I knew was false. He made one of the research scientists present on their topic at the last second so our lab wouldn't "lose face" and I was pretty much in the dog house forever after.
Papers, not really, since I was out on the edge at the time and I haven't caught up with the field since then. Nothing I did ever made it into publication, but it was pretty interesting. Mostly I was focusing on the redundant connectivity of various metabolic pathways in healthy human cell lines and comparing them to the reduced pathways in cancer cells, where mutations and large-scale chromosomal loss restricted the ability to process and regulate several key compounds. The general idea was to look for pathways that were nearly always preserved even when cancer cells were extremely evolved/mutated from their original lines, and map those to drugs that would target the enzymes mediating those reactions. Basically you'd try to knock out a weak link in the cancer metabolic network for which a healthy cell had significant redundancies.
A major conceptual difficulty was the somewhat nebulous definition of a metabolic pathway, beyond what reactions were drawn together in a textbook diagram. Also somewhat thorny was the concept of building a network with any sort of transitive property with a notion of mass conservation built in. I wrote about 30k lines of terrible Perl code exploring this stuff, if I had independent backing I would definitely dust it off and finish the project but it's pretty far in the rearview mirror at this point.
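To make the weak-link idea concrete, here's a tiny sketch of the core computation. The real thing was that pile of Perl; this is Python with networkx, the pathway graphs and metabolite names are made up for illustration, and it ignores the mass-conservation subtleties mentioned above:

    # Toy version of the "weak link" search: find reactions (edges) the cancer
    # network cannot route around but the healthy network can.
    import networkx as nx

    def essential_reactions(g, source, target):
        """Edges whose removal disconnects source from target, i.e. steps
        with no redundant route around them."""
        essential = set()
        for edge in list(g.edges()):
            h = g.copy()
            h.remove_edge(*edge)
            if not nx.has_path(h, source, target):
                essential.add(edge)
        return essential

    # Hypothetical directed pathways: nodes are metabolites, edges are
    # enzyme-mediated reactions. The healthy network keeps a redundant
    # route (A -> X -> C) that the mutated cancer line has lost.
    healthy = nx.DiGraph([("A", "B"), ("B", "C"), ("A", "X"), ("X", "C")])
    cancer = nx.DiGraph([("A", "B"), ("B", "C")])

    # Candidate drug targets: knock out either enzyme and the cancer cell
    # loses the pathway, while the healthy cell can still use A -> X -> C.
    drug_targets = (essential_reactions(cancer, "A", "C")
                    - essential_reactions(healthy, "A", "C"))
    print(drug_targets)  # {('A', 'B'), ('B', 'C')}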
I had a similar experience. I worked in a neuroimaging lab during grad school. I found a mistake in our code that basically meant we had been overestimating the p-values from MRI data. It potentially invalidated many of the previous papers (and there were a lot; this was a "paper mill").
Being naive, I just assumed they'd be retracting them, but my labmates explained there was no way our PI would let that happen. It never even made it up to the PI, so he never knew. A year later (due to this as well as other issues), I quit the PhD and joined a company doing something "real".
Coming from neuroscience also, I can really relate to this. I'm just glad that people like Neuroskeptic are casting light on the problem. I was really surprised at the degree to which neuroscience is proof by brain graph with lit up blobs :/.
But the real question is: how else would you do it?
To me, having journals/conferences with things like double-blind peer review in place is probably the only way to do this. How else would you filter out all the noise?
We don't have this in industry and it's a bit sad. I mean we do, in the form of some blogs, conferences, etc., but it in no way compares to the system academia has. Have you ever tried publishing at an academic conference compared to an "industry conference"? All you need for most industry conferences is an abstract, an idea, not even evidence of any sort; actually you need reputation, but that's a different issue.
There are flaws in every system for sure, and academia has its hands full of them, especially since it has to deal with quite important things like medicine.
But I find it funny that people complain about academia, and yet there's no other system that comes close to producing empirical truths.
You're right, of course. I mostly complain because it feels cathartic to rant, but I am well aware that it's probably the same in the private sector, the government, etc. That's just human structures I guess.
As for designing a better system that's more robust to these flaws, it is a hard problem. There are structural issues when trying to apply traditional scientific virtues to the real world (how do you reasonably ensure reproducibility with a ridiculously expensive experiment, or one spanning decades, or one involving cohorts of tens of thousands of people? How do you ensure peer review is honest and comprehensive when there are exactly three labs specialized in a subject, and they're all competitors? Or even collaborators?), even if you assumed that editorial bias (toward big names, against negative results, toward trendy subjects, etc.), dishonesty and nepotism didn't exist. Plus there's a wide system of power-wielding stakeholders with huge incentives to maintain the status quo: established big shots, publishers (cough cough Elsevier cough cough), and so on.
So yeah, things are broken but we're all trudging along trying to enjoy what we do and make do with what we have. Who knows, things may even get fixed one day.
While I have a similar disillusionment with academia, I think people sometimes misinterpret the difficulty of reproducibility, especially in a field they are not familiar with. It seems to me it's imprudent and a bit disrespectful to jump the gun when a certain piece of research fails to be reproduced, especially when it comes to biology, because the experimental procedure can be incredibly complex and prone to all kinds of errors. I know of cases where, even within a group, only one person can successfully perform a certain protocol with a good enough success rate, and it takes a lot of learning to get to their level. Without expertise in the field and knowledge of how the reproduction was done, it's really hard to examine all the data and judge the integrity of the science behind it.
To me the much bigger problem is how so much of the research is warped by what gets funding, fast.
A wrong result or no result is a result too. With the help of those who tried to reproduce it and failed, it's easier to find the gaps in knowledge and fill them. Otherwise this knowledge is lost.
The issue I see with this is that it implies anyone who isn't doing work high-profile enough to get independent labs to try to reproduce it isn't doing science.
In my view it should be a judgment call whether a particular result needs to be replicated. I'm aware of the so-called replication crisis. But adding more arbitrary rules will simply invite people to figure out how to game those rules, if they were gaming the original rules (deliberately or unwittingly).
Yes, this seems like a much less wasteful, and possibly more fruitful approach. Rather than running identical experiments, one can perform experiments that test different but overlapping aspects of the underlying theory. I suspect this kind of "replication" is done all the time in physics, which has led to a robust knowledge base in spite of less than ideal rigor within any individual experiment.
The more the better. You'd really need a statistically significant number of labs repeating it to be sure. If the underlying study is scientifically sound then a single repeat is good, two repeats is better. 20 repeats is even better. Good luck getting 1 or 2, unless the study is an important work that introduces a useful new technique or is the basis for additional studies. Also good luck knowing whether 2 labs failed for every 1 published. Science is hard. Reporting needs to be improved and there should be grants available to fund repeat studies.
It's easy to keep a secret when you're the only one who knows the truth. It's also easy with two, because you immediately know whom to blame if the secret is revealed, so you can keep up the pressure. But the situation changes with three, because you cannot know who revealed the secret, so it's much easier to crack.
Imagine I report today that I'm able to determine the temperature of the physical vacuum (e.g. using a double-slit experiment), and then able to drop it, i.e. cool the vacuum below absolute zero. Will you trust me? Of course not. So when will you trust me?
If two independent labs can reproduce the result it is extremely unlikely to be the result of false positives. The difference between having two versus three independent verifications is minuscule. A standard has to be set and 2 is a good enough standard.
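A back-of-the-envelope way to see the diminishing returns, assuming the attempts are independent and (purely for illustration) that each has a 5% chance of producing a false positive:

    # Probability that every one of k independent positive findings is a
    # false positive, with an illustrative 5% false-positive rate per study.
    alpha = 0.05
    for k in (1, 2, 3):
        print(k, round(alpha ** k, 6))
    # 1 0.05
    # 2 0.0025     <- already small with one independent replication
    # 3 0.000125   <- another one buys only a tiny absolute improvement

The decay is exponential, so the absolute gain from each additional verification shrinks fast; the hard part in practice is correlated errors (shared reagents, shared code, shared assumptions), which this toy calculation ignores.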
Plus, the pay is lousy.