
I have a hard time believing that the issue is a dilution in the “quality” of scientists, but I would agree that ever-increasing competition for funds and jobs has produced some perverse incentives.

The consequences for publishing something that’s wrong but not obviously indefensible are often pretty low. On average, it probably just languishes, uncited, in a dusty corner of PubMed. It might even pick up a few citations (“but see XYZ et al. 2019”) that help the stupid metrics used to evaluate scientists.

The consequences of working slowly, or not publishing at all, are a lot worse. You get scooped by competitors who cut corners, and there's not a lot of recognition for "we found pretty much what they did, but did it right." Your apparent unproductivity gets called out in grant reviews and when job hunting. The increasing pace and career-stage limits (no more than X years in grad school, Y as a postdoc, Z to qualify for this funding) make it hard to build up a reputation as a slow-but-careful scientist.

These are not insoluble problems, but they need top-down changes from the folks who “made it” under the current system....



The replication crisis that's plaguing much of the social sciences, but especially psychology, did not cherry-pick studies. It started with an effort to replicate studies drawn only from high-impact, well-regarded psychology journals. [1] It found that 64% of the studies could not be replicated, which leads to a curious outcome: if you assumed the literal and exact opposite of what you read in psychology (e.g., what is said to be statistically significant is not), you would tend to be substantially more accurately informed than those who believe the 'science.' [1]
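
To make the arithmetic behind that claim explicit (a rough sketch; it crudely treats "replicated" as a stand-in for "true", which is itself a simplification):

    # Rough sketch of the "believe the opposite" arithmetic; assumes (crudely)
    # that a finding is true exactly when it replicates.
    replication_rate = 0.36                      # ~36% of studies replicated [1]
    believe_everything = replication_rate        # right ~36% of the time
    believe_the_opposite = 1 - replication_rate  # right ~64% of the time
    print(believe_everything, believe_the_opposite)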

But more to our discussion, two of the journals from which studies were chosen were Psychological Science (impact factor 6.128) and the Journal of Personality and Social Psychology (impact factor 5.733). The replication success rates for those journals were 38% and 23%, respectively. I'm certain you know, but a journal's impact factor is the average number of citations received in a year by the articles it published over the preceding two years. A high impact factor is generally anything above about 2. These are among the crème de la crème of psychology, and they're worthless.
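
For a concrete sense of how that number is computed (the figures below are made up for illustration, not the actual counts for either journal):

    # Hypothetical counts, only to illustrate the two-year impact factor formula.
    citations_2019_to_2017_2018_papers = 3000
    papers_published_2017_2018 = 500
    impact_factor_2019 = citations_2019_to_2017_2018_papers / papers_published_2017_2018
    print(impact_factor_2019)  # 6.0, i.e., roughly Psychological Science territory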

As you mention PubMed, preclinical research is also a field with an absolutely abysmal replication rate. And once again these are not cherry-picked. In an internal replication effort, Amgen, one of the world's largest biotech companies, working alongside researchers from MD Anderson, one of the world's premier cancer hospitals, was only able to replicate 11% of landmark hematology and oncology papers. [2] Needless to say, those papers, and their now-unsupported conclusions, were acted upon in some cases.

-----

All that said, I do completely agree with you that the current publish-or-perish system is playing into this, but your characterization of the current state of bad science is inaccurate. Bad science is becoming ubiquitous.

However, I'm not as optimistic that there is any clean solution. There are currently about 400 players in the NBA. If you increased that to 4,000, what would you expect to happen to the mean quality and to the lowest common denominator? Suddenly somebody who would normally not even make it into the NBA is a first-round pick. And science is a skill like any other that relies on outliers to drive it forward.

We now have a system that's mostly just shoveling people through it and outputting 'scientists' for commercial gain. The output of this system is, in my opinion, fundamentally harming the entire enterprise of science and education. And this is a downward spiral, because these individuals of overall lower quality are now working as the mentors and advisers for the next generation of scientists, actively 'educating' the current generation of doe-eyed students. This is something that will get worse, not better, over time.

[1] - https://en.wikipedia.org/wiki/Replication_crisis#Psychology_...

[2] - https://www.taconic.com/taconic-insights/quality/replication...

[3] - http://graphics8.nytimes.com/packages/pdf/education/harvarde...


Speaking of replication, my personal experience in a very narrow field, audio DSP, which is easy to test: 9 papers were impossible to implement, mostly due to missing key details; 6 more gave results that only held for specific test signals (total failure on real material); 3 overstated performance by over 12 dB on real samples; and 8 were really good and detailed. Two had the test code they actually used available, one of them in printed form. None of the ones with code were any good. :D

(IEEE database around 2005 in noise reduction, echo cancellation and speaker separation or detection.)
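
If it helps to see what "overstated by 12 dB" refers to, here's a minimal sketch of the kind of check involved: measure the SNR improvement a method actually delivers on real material and compare it to the paper's claimed figure. The denoise placeholder and the synthetic signal below are just stand-ins, not anything from those papers.

    # Minimal sketch: measure the SNR improvement (in dB) of a denoiser.
    import numpy as np

    def snr_db(reference, estimate):
        # SNR of `estimate` relative to `reference`, in dB
        noise = estimate - reference
        return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 16000, endpoint=False)
    clean = np.sin(2 * np.pi * 440 * t)                 # stand-in for a real recording
    noisy = clean + 0.3 * rng.standard_normal(t.size)   # add broadband noise

    def denoise(x):
        # placeholder for a paper's algorithm; here just a 5-tap moving average
        return np.convolve(x, np.ones(5) / 5, mode="same")

    gain = snr_db(clean, denoise(noisy)) - snr_db(clean, noisy)
    print(f"measured SNR improvement: {gain:.1f} dB")  # compare against the claimed figure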





