
Life is too short to spend time reading an essay that could have been condensed to:

Nothing solid. Work in progress.




For the general public yes. But for researchers, it's super useful to know.

I spent 2 years of my Ph.D. pursuing a dead-end research direction. If only someone had told me, "yeah, we tried that and it didn't really work."

This is why socializing at conferences is useful. Researchers will admit over a beer to stuff they tried that didn't work out, and why.

Because academia never publishes negative results, you'll never find that out by reading papers.


That’s the tragedy of all research - very few negative results get published. For all I know, most of the things I’m working on at the moment have already been tried and discarded.


So, I used to think this, but seeing a lot of success and failure in machine learning projects has taught me that negative results can be quite difficult to interpret. You typically don't know whether it's because the underlying idea is flawed, something went wrong in the experimental setup, some bug went undetected in the code base, or we simply explored the wrong corner of the hyperparameter space. It can also be the case that the idea is correct, but the effect is minuscule compared to other effects.

Now, when a pile of groups try something and it doesn't work out for anyone, things start getting interesting... But of course it takes pretty open discussion to know when that's happening.
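
To illustrate that last failure mode (a real but tiny effect drowned out by other sources of variance): here is a hypothetical sketch, with entirely made-up numbers, comparing a "new idea" against a baseline across several random seeds. If the mean gap is much smaller than seed-to-seed noise, a single negative run tells you almost nothing.

  # Hypothetical sketch: is a "negative result" a real effect or just seed noise?
  import random
  import statistics

  def run_experiment(method: str, seed: int) -> float:
      """Stand-in for a real training run; returns a validation score."""
      rng = random.Random(seed)
      base = 0.72
      # Assume (for illustration) the new idea helps by 0.3%, dwarfed by ~2% seed noise.
      effect = 0.003 if method == "new_idea" else 0.0
      return base + effect + rng.gauss(0, 0.02)

  seeds = range(10)
  baseline = [run_experiment("baseline", s) for s in seeds]
  new_idea = [run_experiment("new_idea", s) for s in seeds]

  gap = statistics.mean(new_idea) - statistics.mean(baseline)
  noise = statistics.stdev([n - b for b, n in zip(baseline, new_idea)])
  print(f"mean gap: {gap:+.4f}, per-seed noise: {noise:.4f}")
  # If |gap| << noise, "it didn't work" may really mean "we can't see it yet".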


There are, broadly speaking, two types of negative results. The more numerous one is when the researcher tried a thing but does not understand why it did not work. There are so many knobs that, from a purely combinatorial perspective, these results are not very valuable - they carry next to no information unless someone else takes the effort to understand why the thing failed. But there is another group of results, encountered quite often, where the researcher either knows why the thing did not work, or at least has a solid, plausible hypothesis. I wish this latter group were more socially acceptable to publish and more valued by the community. It could still be that the person is "holding it wrong" even in this case, but it would be useful all the same. The era when one could cheaply try something out because the training run completed overnight seems to be gone for good.


Yeah, I think that's where the experience of how failure happens in machine learning projects is helpful. I've seen wrong explanations for root causes drag on for months and sometimes years, as we tried different attacks that just weren't working. ML experiments are often quite cheap compared to things like clinical trials or field studies... So we can indulge in running another experiment to confirm or deny our hypothesis quite easily.

Humans are extremely good at inventing explanations for things. When the world doesn't do what we expect, we then have a choice of whether to believe we had the wrong explanation or just got the experimental details wrong... And epistemic hubris is a hell of a drug.


> Because academia never publishes negative results, you'll never find that out by reading papers.

And someone should be prosecuted for it.


Ah, compelled speech. So First Amendment. /s


The person you're replying to is not American.


On this issue, I feel sorry for them.


Thank you for your sympathy. Not being an American is one of my greatest sorrows.

Btw, I wasn’t entirely sincere in my comment above either. It was just a way to express strong dissatisfaction with the state of affairs.


99% of proper good science is "Nothing solid. Work in progress." That doesn't make it any less valuable. I prefer this honest and clear communication a hundred-fold over what comes out of universities' PR departments.


transparency is good


So is brevity



