
> I think we need to foster a culture of honesty and rigor. Of good science. Which is decidedly different from fostering a culture of "repetition" for its own sake.

That is not sufficient. Honesty and rigor are of course necessary for good science, but they are not sufficient.

Even with honesty and rigor you WILL still get false positives. Statistics measures how likely that is, but statistical methods cannot tell you whether any particular result happens to be a false positive. For many studies a confidence level of 95% is considered good enough to publish, but do the math: an honest researcher who publishes 20 such studies has quite possibly published one false result! If a journal contains 20 such studies, statistically one of them is false. That is why replication is important.
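
To make that concrete, here is the back-of-the-envelope version (a sketch in Python; it assumes the worst case where none of the 20 effects being tested are real, so every significant result is a false positive):

    alpha = 0.05        # significance threshold, i.e. "95% confidence"
    n_studies = 20

    # expected number of false positives if no effect is real
    expected_false = alpha * n_studies             # 1.0

    # chance that at least one study comes up "significant" anyway
    p_at_least_one = 1 - (1 - alpha) ** n_studies  # ~0.64

    print(expected_false, p_at_least_one)

So under that assumption the expected count is exactly one false positive, and there is roughly a 64% chance of at least one.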

It gets worse, though: the unexpected is published more often. If an unexpected result is true it means a major change to our current theories, and that is important to publish. However, our theories have been created and refined over many years, so it is somewhat unlikely (though not impossible) that they are wrong. To put it a different way: if I carefully drop big and small rocks off the Leaning Tower of Pisa and measure that the large rock "falls faster", that is more likely to be published than a finding that they fell at the same speed. Most of us would be suspicious of that result, yet examining my "methods and data" would not reveal any mistake I made. Most science is in areas where wrong results are not so obvious.
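
That intuition is just Bayes' theorem. Here is a rough sketch of the arithmetic (the 10% prior and 80% power are invented numbers, purely for illustration):

    prior = 0.10   # assumed fraction of surprising hypotheses that are true
    power = 0.80   # assumed chance a real effect reaches significance
    alpha = 0.05   # false-positive rate when there is no real effect

    # P(significant result), counting both true and false positives
    p_sig = prior * power + (1 - prior) * alpha

    # P(effect is real | significant result)
    ppv = prior * power / p_sig    # ~0.64

    print(ppv)

Under those assumptions a statistically significant surprising result is only about 64% likely to be real, and with a 1% prior the same arithmetic gives about 14%. The more surprising the claim, the more a single significant result deserves discounting.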

> It's impossible to re-verify every single paper you read

True, but somebody needs to re-verify every paper. It need not be you personally, but someone must do it; meta-analysis only works if people keep re-testing the same questions. Note that you don't need to run the exact same experiment: verifying results with a different experimental design is probably more useful than repeating the original exactly, because it may by chance remove a design factor that we don't even know we need to account for yet.

> And I'm pretty sure literally no scientist takes a paper's own description of its results at face value without reading through methods and looking at (at least) a summary of the data.

I hope not, but even if the methods and data check out, it doesn't follow that the result is correct. Maybe an instrument wasn't calibrated correctly and nobody noticed. Maybe there are other random factors that nobody knows to account for today.

All of the above assumes that good science is even possible. In medical fields you may have only a few case studies: several different doctors each saw a patient with some rare disease, tried some treatment, and got some result. There is no control group, no blinding, and a sample size of one per report. But it is a rare disease, so you cannot do better.


