
> we need to foster a culture that a) accepts repetition

Do we?

I don't think we do. I think we need to foster a culture of honesty and rigor. Of good science. Which is decidedly different from fostering a culture of "repetition" for its own sake.

Paying for the cost of mountains upon mountains of lab techs and materials that it would require to replicate every study published in a major journal just isn't a good use of ever-dwindling science dollars. Replicate where it's not far off the critical path. Replicate where the study is going to have a profound effect on the direction of research in several labs. But don't just replicate because "science!"

In fact, one could argue that the increased strain on funding sources introduced by the huge cost of reproducing a bunch of stuff would increase the cut-throat culture of science and thereby decrease the scientist's natural proclivity toward honesty.

> and b) does not accept something for a fact just because it's in a journal

Again, it's entirely unclear what you mean here.

It's impossible to re-verify every single paper you read (I've read three since breakfast). That would be like re-writing every single line of code of every dependency you pull into a project.

And I'm pretty sure literally no scientist takes a paper's own description of its results at face value without reading through methods and looking at (at least) a summary of the data.

Taking papers at face value is really only a problem in science reporting and at (very) sub-par institutions/venues.

I don't care about the latter, and neither should you.

WRT the former, science reporters often grossly misunderstand the paper anyways. All the good reproducible science in the world is of zero help if reporters are going to bastardize the results beyond recognition...



> I don't think we do. I think we need to foster a culture of honesty and rigor. Of good science. Which is decidedly different from fostering a culture of "repetition" for its own sake.

No one is proposing repetition for its own sake. The point of repetition is to create rigor, and you can't do rigorous science without repetition.

> Paying for the cost of mountains upon mountains of lab techs and materials that it would require to replicate every study published in a major journal just isn't a good use of ever-dwindling science dollars. Replicate where it's not far off the critical path. Replicate where the study is going to have a profound effect on the direction of research in several labs. But don't just replicate because "science!"

I could see a valid argument for only doing science that will be worth replicating, because if you don't bother to replicate you aren't really proving anything.


> I could see a valid argument for only doing science that will be worth replicating, because if you don't bother to replicate you aren't really proving anything.

Exactly. A lot of the science I've done should not be replicated. If someone told me they wanted to replicate it, I would urge them not to. Not because I have something to hide. But because some other lab did something strictly superior that should be replicated instead. Or because the experiment asked the wrong questions. Or because the experiment itself could be pretty easily re-designed to avoid some pretty major threats.

The problem is that hindsight really is 20/20. It's kind of impossible to ONLY do good science. So it's important to have the facility to recognize when science (including your own) isn't good -- or is good but not as good as something else -- and is therefore not worth replicating.

I guess the two key insights are:

1. Not all science is worth replicating (either because it's too expensive or for some other reason).

2. Replication doesn't necessarily reduce doubt (particularly in the case of poorly designed experiments, or when the experiment asks the wrong questions).


This is a really good post which contributes to the conversation. Why make it on a throwaway account? We need more of this here!


>I don't think we do. I think we need to foster a culture of honesty and rigor.

Foster all you want; an honor system doesn't protect you from incompetent and dishonest people publishing junk for funding or self-promotion. A culture of repetition promotes cross-checks that make up for the flaws in human nature.


Trust but confirm.

The entire point of science, and this is not hyperbole, is that results are reproducible. If the experiment is not reproducible, one must take the results on faith. There is no such thing as faith-based science.

In order to build a shared body of knowledge based on scientific facts, then, results must be repeated. It is how different people can talk about the same thing, without fearing an asymmetry of knowledge and understanding about the axioms on which their discussion of the world rests. Otherwise it is faith or religion or narrative, something other than science.


> The entire point of science, and this is not hyperbole, is that results are reproducible.

No, it's not. The point of science -- its end -- is to understand the natural world. Or to cure diseases. Or, more cynically, to learn how to build big bombs and more manipulative adverts.

Reproducible results are the means, not the end.

I know that seems like hair splitting, but it's important. Epistemological purity can do just as much harm as good, because even the most pure science is usually motivated more by "understand the natural world" or "improving our understanding of some relevant mathematical abstraction" than by epistemological purity itself.

To be quite honest about it, this sort of epistemological purity that insists on reproducibility as a good in itself feels a lot like some sort of legalistic religion.

> If the experiment is not reproducible, one must take the results on faith. There is no such thing as faith-based science.

I don't think I (or anyone here) is arguing against this. Or against reproducing important experiments.

I'm wholly supportive of reproducing results when it makes sense. But I'm also wary, in a resource-constrained environment, of preferring reproducing results over producing good science in the first place.

To be concrete about it, I'll always prefer a single (set of) instance(s) of a well-designed and expertly executed experiment over 10 reproductions of a crappy experiment. In the former case I at least know what I don't know. In the latter case, the data from the experiment -- no matter how many times it's reproduced -- might be impossible to interpret in anything approaching a useful way.

Put simply, a lot of science isn't worth the effort of reproducing. Either because it's crap science, or because the cost of reproducing is too high and the documentation/oversight of the protocol is sufficiently rigorous.

The point of science isn't to adhere perfectly to the legalistic tradition of a Baconian religion. The point of science is to learn things.


The only way we "understand" the natural world is by making verifiable predictions. If those predictions can't be consistently verified, then we don't understand the relevant phenomenon at all, and we haven't learned anything.

> To be concrete about it, I'll always prefer a single (set of) instance(s) of a well-designed and expertly executed experiment over 10 reproductions of a crappy experiment.

I'd take 2-3 repetitions of a moderately well-designed and moderately executed experiment over either. Even the most well-designed and executed experimental protocols can produce spurious results, due to the stochastic nature of the universe.
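
To put rough numbers on why repetition helps (a back-of-the-envelope sketch in Python, assuming idealized independent replications and a 5% chance per run that a spurious effect shows up):

    # Chance that a spurious result survives k independent replications,
    # assuming each run has a 5% false-positive rate (idealized).
    alpha = 0.05
    for k in (1, 2, 3):
        print(f"{k} run(s): {alpha ** k:.6f}")
    # 1 run(s): 0.050000
    # 2 run(s): 0.002500
    # 3 run(s): 0.000125

Under those (admittedly generous) assumptions, even a second repetition shrinks the odds of a fluke by a factor of 20.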


I think the issue here is that scientists want to get funding, have prestige, and perhaps learn about the world in the process. The public wants to be cured of diseases, see new technologies, and learn about the world too.

There is a disconnect between the motivation and capability of scientists in the current funding system and what the public wants. So an easy solution is that if the public wants reproducible science, they need to pay for it. I'm sure some scientists who couldn't make it into Harvard or Caltech (i.e., me) and thus can't do cutting-edge science would be happy to take the dollars, have a living, and just reproduce the work of others. But you can't simply declare to scientists that they should do X while not enabling them to.


Science is a tool. Tools are means, not ends. We use science to gather facts we can agree on. But science isn't the "truth". Science isn't the facts. It's a process. That produces facts. If and only if they are reproducible. Otherwise it is faith, religion, or a narrative.

What's more, the scientific process is used discretely. One fact at a time. Understanding of our world, its meaning -- these things are cumulative, over the entire context of our experience, and utilize things like feelings, and faith, and religion, and narrative, to create.


> Taking papers at face value is really only a problem in science reporting and at (very) sub-par institutions/venues.

> WRT the former, science reporters often grossly misunderstand the paper anyways. All the good reproducible science in the world is of zero help if reporters are going to bastardize the results beyond recognition...

Science is funded by the public, and done for the public. Good science reporting is very important to ensure that science continues to get funded. Too often scientific papers are written in a way that makes them incomprehensible to anyone outside of the field, whether that is through pressure to use the fewest possible words or through use of technical jargon.


There's also the option that papers are written the way they are so that they still remain papers and not books. A five page paper on someone's findings is much easier to read than a 20-30 page paper where field specific knowledge is redefined and explained instead of referenced.



I know I made the original "dwindling funding" claim, but it's actually a red herring. I should have said something like "increasingly competitive nature of grants". The argument relevant to this article is NOT about whether there's enough funding for science. Rather, the argument is about how difficult it is to get funding. I think it's fairly uncontroversial (and correct) to say that getting good science funded has become considerably harder over the years. I'm not sure anyone has done the work of trying to quantify that; maybe someone else can help find data for that.

But to address your question anyways:

1. USFG scientific funding institutions are only one source of science funding. There are many others. If you look across the federal government, there's a downward trend: http://www.aaas.org/sites/default/files/DefNon%3B.jpg

One must also take into account non-federal-government sources, which in many cases have substantially decreased their investments in R&D since 2008.

2. As a percentage of GDP, there's been a steady decline: http://www.aaas.org/sites/default/files/RDGDP%3B.jpg

3. From an impact-on-culture perspective -- which is the relevant one in my comment -- I think (2) is more interesting than your data, and also more interesting than (1). The question should be "how difficult is it to fund good science", not "how much are we spending in absolute or relative terms". This is, of course, very difficult to quantify. But looking at percentage of GDP is at least better than looking at absolute dollars.


It's not repetition for its own sake. It's for the sake of good science.

I find that many who are against repetition have certain views that are helped by soft science.


I suppose I should remind you, then, that the up-thread comment we're responding to was written in the context of a physics experiment.

And I'm about as far from "soft sciences" as you can get.


Oh, I was not commenting on what you do. I was commenting that many who hold that view of non-repetition have leanings that are favored by soft sciences. They don't feel the "need" for repetition even in hard science. It is a personal bias thing that influences their need. It can happen when emotion overcomes hard logic and science.


> many who hold that view of non-repetition have leanings that are favored by soft sciences. They don't feel the "need" for repetition even in hard science.

Really? You're suggesting that psychologists (to arbitrarily pick a softer science) would deny that physicists (to arbitrarily pick a harder science) should reproduce their studies where possible?

That seems remarkable to me, perhaps I've missed these discussions. Can you provide evidence that this is a pervasive movement in some sciences, rather than the opinion of a few?

"Many" is a trigger weasel word, of course, and needs backing up.

My interpretation -- perhaps incorrect -- is that you feel the softer sciences are wilfully undermining the quality of harder sciences. I very much doubt this is the case. Some philosophers of science and some softer science key influencers may introduce difficult and challenging questions about the appropriateness and usefulness of some research methodologies (as are people in this thread) but I doubt they'd make the blanket assertion you're suggesting.


You're clearly reading far too much into the parent. Do you have an emotional attachment to "soft" sciences that is causing you to be defensive in the face of no attack at all?


Project much? Looks like you're the one getting all heated and angry at someone defending the social sciences and are trying to attack them. It's better not to bring emotion into discussions of science.


Agreed, only important studies are worth replicating; the problem is that no major journal will publish a replication. Journals in general push for novelty rather than quality here.

b) can be a problem with meta-analyses and reviews. When gathering data "from the literature", not all the data gathered is of the same quality/certainty, which can have a compounding effect. Or when someone from a mathematical or computational field tries to create a model using data reported in the literature. It is often difficult when working in an interdisciplinary environment to assess the quality of everything you read, especially if you're not familiar with all the experimental methods.

Also, off topic, but I wonder why you chose a throwaway account to weigh in on this. I hope it's not a "science politics" reason.


Results can be irreproducible even if nobody was dishonest.

Reproduction is a way of bolstering rigor.


Code-wise, wouldn't the analog be writing a new set of unit tests for each dependency you pull?

It'd probably make sense to do that, actually, so you can verify that the dependency actually fits your use case as time goes on.
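
A minimal sketch of what that could look like in Python, using the standard library's statistics module purely as a stand-in for any third-party dependency: a small "contract" suite that pins down the behaviors your project relies on, re-run on every upgrade.

    import statistics
    import unittest

    class TestDependencyContract(unittest.TestCase):
        """Pin down the behaviors of a dependency that our code relies on."""

        def test_mean_of_ints(self):
            # We rely on mean() returning a float, not truncating to int.
            self.assertEqual(statistics.mean([1, 2, 3, 4]), 2.5)

        def test_mean_rejects_empty_input(self):
            # We rely on empty input failing loudly, not returning 0.
            with self.assertRaises(statistics.StatisticsError):
                statistics.mean([])

    if __name__ == "__main__":
        unittest.main()

That's the testing analog of replication: you don't re-derive the dependency's internals, you independently confirm the results you actually depend on.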


> I think we need to foster a culture of honesty and rigor. Of good science. Which is decidedly different from fostering a culture of "repetition" for its own sake.

That is not sufficient. Honesty and rigor are of course required for good science, but they are not sufficient.

Even with honesty and rigor you WILL still get false positives. Statistics is used to measure how likely this is, but statistical methods do nothing to tell you whether any particular case happens to be a false positive. For many studies a confidence of 95% is considered good enough to publish, but if you do the math, that means an honest researcher who publishes 20 such studies has probably published one false result! If there are 20 such studies in a journal, statistically one of them is likely false. Thus replication is important.
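
A quick sketch of that arithmetic (assuming, for simplicity, independent studies of true-null hypotheses, each with a 5% false-positive rate):

    # Expected false positives among 20 published studies, each with a
    # 5% false-positive rate (idealized: independent, all nulls true).
    alpha, n = 0.05, 20
    expected = n * alpha                   # 1.0
    p_at_least_one = 1 - (1 - alpha) ** n  # ~0.64
    print(f"expected false positives: {expected}")
    print(f"P(at least one):          {p_at_least_one:.2f}")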

It gets worse, though: the unexpected is published more often -- if it is true, it means a major change to our current theories, and this is important to publish. However, our theories have been created and refined over many years; it is somewhat unlikely they are wrong (but not impossible). Or to put it a different way: if I carefully drop big and small rocks off the leaning tower of Pisa and measure that the large rock "falls faster", that is more likely to be published than if I found they fell at the same speed. I think most of us would be suspicious of that result, but examining my "methods and data" will not show any mistake I made. Most science is in areas where wrong results are not so obvious.

> It's impossible to re-verify every single paper you read

True, but somebody needs to re-verify every paper. It need not be you personally, but someone needs to. Meta-analysis only works if people re-verify every paper. Note that you don't need to do the exact experiment; verifying results with a different experiment design is probably more useful than repeating the exact same experiment: it might by chance remove a design factor that we don't even know we need to account for yet.

> And I'm pretty sure literally no scientist takes a paper's own description of its results at face value without reading through methods and looking at (at least) a summary of the data.

I hope not, but even if they check out, it doesn't follow that things are correct. Maybe something wasn't calibrated correctly and that wasn't noticed. Maybe there are other random factors nobody knows to account for today.

The above all assumes that good science is possible. In medical fields you may only have a few case studies: several different doctors saw different people each with some rare disease, tried some treatment and got some result. There is no control, no blinding, and a sample size of 1. But it is a rare disease so you cannot do better.



