Not reliably sharing source code is a big problem in health modelling too; it's an issue across academia. That said, the last batch of CS papers I read did all publish their code, at least the ones where I cared to look. Perhaps this problem is fading with time.
Yeah, places like DeepMind or OpenAI often don't, partly because their papers are extended press releases rather than precise descriptions of how to recreate their results. OK, that's fine: they aren't academia and they're paying for their own work, so any paper they choose to write at all is a pure bonus over what's expected of them. Government-funded research is a different matter, of course.
I've spent a lot of time thinking about what could be done better and how. The problem is that the overwhelming number of problems trace back to a few root causes that are basically intractable in the current social environment. Take bogus citations, or the use of obsolete data. It's completely standard in epidemiology to publish papers using values for IFR or other key variables that are eight months old when far more up-to-date data is available. Or claims whose cited paper doesn't support the claim being made, or even contradicts it: I never see this in CS papers, but I've seen it regularly whilst reading epi papers. Or papers where the key claim in the abstract is just fraudulent, like the Flaxman paper's claims about the efficacy of lockdowns, which assumed its own conclusion in the construction of the model and relied on assigning Sweden a ludicrously large country-specific fudge factor (4000x). And the fact that this was done wasn't mentioned anywhere in the paper or the supplementary materials; you had to read the code to find it (at least the code was open that time!).
You can tut and say that shouldn't have happened, but of course there will always be people tempted to dress up their chosen conclusion in the clothes of science. The real question is what mechanisms exist to detect and prevent a fall in standards. In science the only such mechanisms are peer review and journals, which are hardly effective: everyone is part of the same system with the same incentives, and there are no penalties for incorrect work, so bad papers get published in Nature and Science all the time, especially when they align with the prevailing ideology of those institutions. University administrators are responsible in theory, or maybe granting bodies, but they have the same problem: none of them has any stake in output quality. Ultimately, to fix this you need to tie rewards in academia to the correctness of results, but academia isn't culturally anywhere near ready to even consider that. Academic freedom implies the freedom to be wrong your entire career, of course.