
That’s definitely the perception, though I’m not sure how true it actually is. A battle-tested analysis pipeline or experimental control suite is a huge competitive advantage for a lab.

The catch is that it needs to both evolve and stay solid at the same time: it's hard to predict what you're going to want in three years, or to find the time to clean up code from the last three years, especially since many of those folks will have moved on.



It is an advantage, but not a crucial one for grants, papers, etc.


I agree that it doesn’t matter a whit for any single grant or paper, but one paper (or grant) rarely makes a career.

My claim is that people systematically underestimate the value of good code to a research program. Good infrastructure lets the lab focus on the scientific questions, rather than the logistics of moving and processing the data, which in turn allows them to publish more, better, and faster. This is true for a lot of things: some labs have fantastically good imaging pipelines, or have worked out how to rapidly train animals for certain behaviors, or can reliably do an assay that often fails in others’ hands, and derive a huge benefit from it. Some are so good that I wouldn’t even consider competing with them in their niche. My argument is that good code can also have returns like that.

As a personal example, my first paper at McGill took about three years to finish. The next took about a year and a half (and just came out). We're on track to submit at least one, and maybe as many as three, papers this year. Some of this is due to practice, but a lot of it is due to the fact that we built reusable components instead of "the script that gives the numbers."
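
To make that concrete, here's a minimal sketch of the idea (hypothetical names and example, not our actual pipeline): instead of a one-off script that prints the numbers for one figure, you wrap the analysis in a small, parameterized function that can be tested once and reused across projects.

    # Hypothetical "reusable component": an event-aligned firing-rate
    # function, rather than a script hard-coded to one dataset/figure.
    import numpy as np

    def peristimulus_rate(spike_times, event_times,
                          window=(-0.5, 1.0), bin_size=0.05):
        """Mean firing rate around events, in spikes/s.

        spike_times, event_times: 1-D arrays of times in seconds.
        Returns (bin_centers, mean_rate).
        """
        # Shared bin edges across all events.
        edges = np.arange(window[0], window[1] + bin_size, bin_size)
        counts = np.zeros(len(edges) - 1)
        # Accumulate spike counts aligned to each event time.
        for t in event_times:
            counts += np.histogram(spike_times - t, bins=edges)[0]
        # Normalize by number of events and bin width to get spikes/s.
        rate = counts / (len(event_times) * bin_size)
        centers = edges[:-1] + bin_size / 2
        return centers, rate

The first version costs a bit more than the throwaway script, but every later paper that needs an event-aligned rate gets it for free, through the same tested code path.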



