Hacker News | danieltillett's comments

When someone’s future salary is tied to acquiring the $new/$shiny skill, you are going to get this sort of rational behaviour from the engineers. Smart people respond to incentives.


Something I have always found insane is all the regulations and paperwork hurdles around killing mice in labs. Want to kill 10 mice humanely in a lab? Then you need to fill out a 60-page report, send it off to an ethics committee for their opinion, and then wait 3 months. Want to kill a few hundred million mice in the wild with a poison that will result in a slow and painful death - go for it.


Eh, as a PhD student in genetics, the stuff we do to mice is abominable. Researchers give them incurable, insufferable cancers to study complex gene interactions. In my first lab, there were even signs on the mouse cages instructing the veterinarians not to administer eye drops even though the mice's eyes were bulging out of their heads. These mice are sometimes set up for a lifetime of suffering.


I think you should get out and see what farmers are doing to the wild mice.


It is not. One side is winning the long war and it is not the USA.


I used to be a non-technical manager “managing” a team of developers. I later (through circumstances) became a developer. Looking back at the games my team played with me, I can see why many managers are cynical about the estimates they are given and attempt to negotiate.


I always overestimated because it was the only way that I could ever devote any time at all to technical debt reduction.


I'd say you're including necessary maintenance in your estimates, not overestimating!


Not claiming it applies to you, but a lot of managers don't understand the difference between an estimate and a deadline. It comes back in all kinds of weird ways.


As a manager, I have found that what every report wants is to be genuinely listened to by their manager. If you make the effort to really listen to what your report has to say, you don’t need to worry about praise or other feedback.

The downside is that real listening is incredibly time consuming - I struggle to get it under 60 minutes per person per day (not all at once, of course) without compromising its effectiveness.


Yes they are likely to be mirror twins [0].

0. https://en.wikipedia.org/wiki/Twin#Semi-identical_twins


Well look at that, I stand corrected. Or, correct to begin with, depending on how you look at it :). I misunderstood another definition of mirror image twins that I had read, but I see now that it does describe pretty accurately the differences between my nephews. Fascinating!


I assume you know about mirror twins [0]. Do you and your sibling happen to have opposite dominant hands?

0. https://en.wikipedia.org/wiki/Twin#Semi-identical_twins


An interesting question. No, we both have right dominant hands. Interesting article too, thanks for that one.


It certainly is counterproductive. We should be incentivising replication, not novelty. If you started giving out grants aimed at replicating results, we would speed up scientific progress tenfold.


Replicating studies seems like the perfect training exercise for graduate students. I wish it were a mandatory part of training at most schools.


The problem is... if you can't replicate the experiment, how do you know if it's the graduate student's fault or the experiment's fault?


I always thought that if I ran my own lab, I would have rotation projects be to replicate a recent result from the lab. Then, a lot of the infrastructure and support would still be there, so it would be pretty clear if the fault lay with the experiment. Plus, it would reinforce the notion among trainees that the point of science is to be replicated, and that the hard part of doing something novel is figuring out what to do, not actually doing the work.


> how do you know if it's the graduate student's fault or the experiment's fault?

Usually by

- checking the original publications to make sure you are repeating the experiment correctly

- checking (and monitoring) the experimental setup to make sure it is doing what you think it is doing and you aren't introducing errors

- checking the data analysis

- running "sanity check" experiments with the same setup to make sure it has no obvious flaws

- comparing with recent replication experiments by other researchers

- showing that the experiment and results are repeatable by multiple people in different organizations with their own lab setups

- consulting with the original authors (who may be helpful or unhelpful) if they are available

- comparing against other data sets

- comparing against results from analytical modeling or simulation

- looking for alternate explanations of the anomalous results and checking for whether they might be occurring

etc.

As others have noted, the same methods apply to sorting out conflicting experimental results regardless of who conducts the experiments.


Publish methods and results in a database. Every result will be a draw from a distribution. Today only the exciting ones get published, but it would be better to see the full distribution.
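To make that concrete, here is a minimal sketch in Python (the effect size, noise level, and publication threshold are all made-up illustrative numbers, not taken from any real study) showing how the published subset overstates an effect relative to the full distribution:

    import random
    import statistics

    # Hypothetical simulation: many labs measure the same small true effect
    # with noisy instruments; only the "exciting" estimates get published.
    random.seed(42)

    TRUE_EFFECT = 0.1        # assumed true effect size
    NOISE_SD = 0.5           # assumed measurement noise
    N_STUDIES = 1000
    PUBLISH_THRESHOLD = 0.8  # only large observed effects make it into journals

    all_results = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]
    published = [r for r in all_results if r > PUBLISH_THRESHOLD]

    print(f"True effect:              {TRUE_EFFECT:.2f}")
    print(f"Mean of all results:      {statistics.mean(all_results):.2f}")
    print(f"Mean of published subset: {statistics.mean(published):.2f}")

In this toy setup the mean of all results sits close to the true effect, while the published subset alone overstates it severalfold - which is exactly the distortion a full results database would expose.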


If it has been replicated many times then you know whose fault it is.


The more people fail, the more likely it is that the fault is on the experimenters' side.


You could ask the same question of someone with a PhD.


This is a very good point; we are all still fallible humans no matter what degree we hold.


Isn't this common, as a warm-up for grad students? It's just hard to publish.


Not in my experience, unfortunately.


I don't think that is the answer either. We would just get a lot of replications of boring results. And replications of those replications.


That's good. Replication is valuable for science. Boring is fine. Thinking that boring results aren't useful is exactly how we ended up with the current problems in research. You just need to make sure that you don't give scientists an incentive to redo studies that have been replicated so much that additional replication is useless.


"just"? That's exactly the problem I was referring to.


And he's pointing out that your problem is not really so big as to allow a casual dismissal of the solution. You've reduced the solution to absurdity by creating a scenario that is, at least on its face, easily remediable.


> that is, at least on its face, easily remediable

But it's not easily remediable. And that is the point.

Just imagine how that would work in practice. Somebody does an original experiment that gives exciting results. Let's say they get 1000 Science Points for that. Now somebody replicates that experiment. How many points should they get? Should you get as many points for replicating a 1000-point experiment as for replicating a 500-point experiment? Why would you do original experiments at all anymore? Isn't it easier just to replicate original experiments where all of the hard work has already been done?

Awarding points for replication is a nonsense idea.


Well, you wouldn't get 1000 science points for it before it's replicated by others. Once it's replicated enough times by credible people, the points are awarded, and the process is now considered "done".

If someone replicates it after this process and finds different results, it's "new research" again and needs to be replicated again.


I agree, it's kind of like people handing in their "finished" feature with "just the tests" missing. It's not finished. It maybe doesn't even work. Simple as that.

I think we shouldn't even accept papers that haven't been replicated twice by independent teams.


Doesn't really matter WHEN you get the points for your original experiment. My argument stays the same.

Honest researchers doing original research to the best of their abilities. That's how you get good results, and what drives progress. The rest is just bureaucracy. The need for replication will be just another bureaucratic add-on to catch dishonest researchers, and researchers who value prestige more than the truth.


Giving science points to anyone for every replication they do is one idea. Another way to encourage replication without detracting from original research might be to encourage doing one replication. Maybe something like getting master's students to do one as their thesis or final project.


That's fair. I wanted to point out that the answer is more complex than "don't incentivise replication".


Why do you assume that the reward for replication needs to be a constant?
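For illustration (a hypothetical scheme, not something proposed in the thread), the reward could decay geometrically with each independent replication, so early replications are well rewarded while piling onto an already settled result earns almost nothing:

    # Hypothetical, illustrative numbers only: the nth independent replication
    # of a result is worth a geometrically decaying fraction of the original
    # award, so the incentive to keep replicating a settled result vanishes.

    ORIGINAL_AWARD = 1000   # "Science Points" for the original result
    DECAY = 0.5             # each further replication is worth half the previous

    def replication_reward(n: int) -> float:
        """Reward for the nth independent replication (n >= 1)."""
        return ORIGINAL_AWARD * DECAY ** n

    for n in range(1, 6):
        print(f"Replication #{n}: {replication_reward(n):.0f} points")

Under a schedule like this the first replication earns 500 points, the second 250, and so on, so replicating is never as lucrative as producing the original result and the incentive to re-replicate a settled finding quickly disappears.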


?

I don't.


That is not what meritocracy is - it is rule by the cognitively gifted.

The argument for meritocracy is that meritocratic societies (and by extension businesses) kick the butts of all others. It might not be fair, but it works better than all the alternatives.


Fire them and then use their salary to give everyone else a raise.

