Hacker News

It certainly is counterproductive. We should be incentivising replication, not novelty. If you started giving out grants aimed at replicating results, we would speed up scientific progress tenfold.



Replicating studies seems like the perfect training exercise for graduate students. I wish it were a mandatory part of training at most schools.


The problem is... if you can't replicate the experiment, how do you know if it's the graduate student's fault or the experiment's fault?


I always thought that if I ran my own lab, I would have rotation projects be to replicate a recent result from the lab. Then, a lot of the infrastructure and support would still be there, so it would be pretty clear if the fault lay with the experiment. Plus, it would reinforce the notion among trainees that the point of science is to be replicated, and that the hard part of doing something novel is figuring out what to do, not actually doing the work.


> how do you know if it's the graduate student's fault or the experiment's fault?

Usually by

- checking the original publications to make sure you are repeating the experiment correctly

- checking (and monitoring) the experimental setup to make sure it is doing what you think it is doing and you aren't introducing errors

- checking the data analysis

- running "sanity check" experiments with the same setup to make sure it has no obvious flaws

- comparing with recent replication experiments by other researchers

- showing that the experiment and results are repeatable by multiple people in different organizations with their own lab setups

- consulting with the original authors (who may be helpful or unhelpful) if they are available

- comparing against other data sets

- comparing against results from analytical modeling or simulation

- looking for alternate explanations of the anomalous results and checking for whether they might be occurring

etc.

As others have noted, the same methods apply to sorting out conflicting experimental results regardless of who conducts the experiments.


Publish methods and results in a database. Every result will be a draw from a distribution. Today only exciting ones get published, but it would be better to see the full distribution.
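The "draw from a distribution" point can be illustrated with a toy simulation (all numbers here are made up for the sketch): if only results above some arbitrary "excitement" threshold get published, the published mean overstates the true effect, while the full database of results would recover it.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: the true effect is 0.2, and each study measures it
# with noise. A study is "exciting" (and gets published) only if its
# measured effect clears an arbitrary threshold.
TRUE_EFFECT = 0.2
NOISE_SD = 0.5
THRESHOLD = 0.5

all_results = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(10_000)]
published = [r for r in all_results if r > THRESHOLD]

# The full distribution centers near the true effect; the published
# subset is biased upward by construction.
print(f"mean of all results:       {statistics.mean(all_results):.2f}")
print(f"mean of published results: {statistics.mean(published):.2f}")
```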


If it has been replicated many times then you know whose fault it is.


The more people fail, the more likely it is that the fault is on the experimenters' side.


You could ask the same question of someone with a PhD.


This is a very good point; we are all still fallible humans, no matter what degree we hold.


Isn't this common, as a warm-up for grad students? It's just hard to publish.


Not in my experience, unfortunately.


I don't think that is the answer either. We would just get a lot of replications of boring results. And replications of those replications.


That's good. Replication is valuable for science. Boring is fine. Thinking that boring results aren't useful is exactly how we ended up with the current problems in research. You just need to make sure that you don't give scientists an incentive to redo studies that have been replicated so much that additional replication is useless.


"just"? That's exactly the problem I was referring to.


And he's pointing out that your problem is not so big as to allow a casual dismissal of the solution. You've reduced the solution to absurdity by creating a scenario that is at least on face easily remediable.


> that is at least on face easily remediable

But it's not easily remediable. And that is the point.

Just imagine how that would work in practice. Somebody does an original experiment that gives exciting results. Let's say they get 1000 Science Points for that. Now somebody replicates that experiment. How many points should they get? Should you get as many points for replicating a 1000-point experiment as for replicating a 500-point experiment? Why would you do original experiments at all anymore? Isn't it easier just to replicate original experiments where all of the hard work has already been done?

Awarding points for replication is a nonsense idea.


Well, you wouldn't get 1000 science points for it before it's replicated by others. Once it's replicated enough times by credible people, the points are awarded, and the process is now considered "done".

If someone replicates it after this process and finds different results, it's "new research" again and needs to be replicated again.
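The scheme described above could be sketched like this (a toy model; the threshold, point values, and decay rate are all made-up parameters): the original authors earn nothing until the result has been independently replicated enough times, and each successive replication is worth less than the last, so there is no incentive to keep replicating a "done" result.

```python
REQUIRED_REPLICATIONS = 3   # hypothetical threshold for "done"
ORIGINAL_POINTS = 1000      # hypothetical award for the original result
DECAY = 0.5                 # each replication is worth half the previous one

def replication_points(n: int) -> float:
    """Points for the n-th independent replication (1-indexed)."""
    if n > REQUIRED_REPLICATIONS:
        return 0            # result is "done"; further replication earns nothing
    return ORIGINAL_POINTS * DECAY ** n

def original_points(replications: int) -> int:
    """The original authors are paid only once the result is confirmed."""
    return ORIGINAL_POINTS if replications >= REQUIRED_REPLICATIONS else 0

print([replication_points(n) for n in range(1, 5)])  # → [500.0, 250.0, 125.0, 0]
print(original_points(2), original_points(3))        # → 0 1000
```

The decaying reward is one way to answer the "how many points for a replication?" question without making replication more attractive than original work.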


I agree, it's kind of like people handing in their "finished" feature with "just the tests" missing. It's not finished. It maybe doesn't even work. Simple as that.

I think we shouldn't even accept papers that haven't been replicated twice by independent teams.


Doesn't really matter WHEN you get the points for your original experiment. My argument stays the same.

Honest researchers doing original research to the best of their abilities: that's what produces good results and drives progress. The rest is just bureaucracy. Required replication will be just another bureaucratic add-on to catch dishonest researchers, and researchers who value prestige more than the truth.


Giving science points to anyone for every replication they do is one idea. Another way to encourage replication without detracting from original research might be to encourage doing one replication. Maybe something like getting master's students to do them as their thesis or final project.


That's fair. I wanted to point out that the answer is more complex than "don't incentivise replication".


Why do you assume that the reward for replication needs to be a constant?


?

I don't.



