
You'd expect that, yes, but weirdly, you'd be wrong. Journal impact factor and likelihood of replication don't seem to correlate:

https://fantasticanachronism.com/2020/09/11/whats-wrong-with...

Studies that don't replicate are cited at the same rate as those that do:

https://fantasticanachronism.com/2020/09/11/whats-wrong-with...

This year I've read a lot of epidemiology papers, and sometimes their peer reviews. There's something deeply wrong with peer review in this field: reviewers often write long, detailed reviews that completely ignore glaring problems in the papers, problems that jump out at 'lay' readers on a first read-through. My guess is that there's a very complicated set of unwritten rules about which sorts of problems are and are not legitimate to raise in a peer review, and objections that are a little too fundamental, of the form "this entire paper should be rejected out of hand", don't get made when a paper has 25 authors at prestigious universities, even if the methodology or conclusions are absurd.


Ok, but I think your examples are specific to the social sciences, where the methods used aren't close enough to the scientific method to be reliable. Hence you're likely to see systemic biases in junior and senior authors alike, as the field may never converge on "truth".

But in fields where the methods are closer to science (e.g. physics, chemistry, neuroscience), I would expect the field as a whole to be converging on the truth, and therefore senior authors to be more tuned in than junior authors to the best current estimate of the truth, or how to get to it.


Yeah, but do you have a formal list of the academic fields that are labeled as scientific by the media and government but which are not actually scientific? The term "social science" doesn't cover it, as epidemiology is proving: not many would call it a social science, but the problems there are identical or frankly even worse. And what of climatology, another field where people build complex models on relatively small datasets and can't run even small-scale experiments? Is that also a social science? Clearly not.

Even in microbiology there are a huge number of papers that don't replicate.

To me it looks like the problems are general. They aren't restricted to a small set of social sciences.

