Hacker News new | past | comments | ask | show | jobs | submit login

This is an anonymous appeal to authority, but FWIW I am a biologist and I am not convinced of any origin story. We lack evidence in every direction. Although our prior is that it's zoonotic, other experts I know acknowledge the possibility that we are looking at a lab leak. We simply don't have enough information to say.

However, there is very strong pressure to reassure the public that it's not lab-based, which would look bad for scientists and our generally pro-social efforts to help the world. I think this is misguided, because people are not stupid. They can also observe the profound lack of evidence for zoonotic transfer, which frankly makes these assertions look like a coverup. It would be helpful if the scientific community were more honest and scientific about this issue. We should not jump to conclusions without evidence.




Yeah. Unfortunately public health researchers/biologists have very little credibility at this point, so appeals to their authority won't work. In particular, arguments of the form "we need more evidence" look weak given how many enormous leaps of faith the supposedly scientific establishment has been willing to make on the basis of no evidence at all for the past year, e.g. masks, lockdowns, and asymptomatic infection (apparently now contradicted by a large study covering >90% of the entire population of Wuhan!); now vaccines are being advertised as totally safe even though trial protocols are being abandoned left and right.

Public health research is really worrying, frankly. The quality gap between the epidemiology papers I've read in the last 12 months and computer science papers is shocking. Epidemiology routinely publishes papers that contain major errors that the authors clearly knew about and chose to ignore or cover up. The Flaxman paper is an example of how absurdly fraudulent it can get, yet we're told constantly that the "experts" have reached a "consensus" that isn't to be questioned.

At this point I'm pretty much ready to believe it came from a lab simply because scientists are claiming it didn't. My prior for public health researchers being honest has dropped through the floor.


I just indicated that I have spent much of my life thinking about these things. You don't have to believe that this gives me any insight, but like I said it might help you understand that I'm not just parroting some theory I've read, I'm working on it directly.

I understand your concerns about public health research. Have you thought about how it could be done better? Curious what you think.

Computer science papers have their own quality problems. A shocking percentage of them don't include source code even when they describe experimental results. Here you have a field that could realize an ancient dream of perfectly describing their work, and yet they fail to do it. This is an obvious way that it could improve. What's the equivalent for epi?


Not reliably sharing source code is a big problem in health modelling too, it's an issue across academia. The last set of CS papers I read did all publish their code though, at least for the ones where I cared to look. Perhaps this problem is fading away with time.

Yeah, places like DeepMind or OpenAI don't, partly because their papers are extended press releases rather than precise descriptions of how to recreate their results. OK, that's fine, they aren't academia so they're paying for their own work. If they choose to write a paper at all it's a pure bonus over what's basically expected of them. For government funded research it's different of course.

I've spent a lot of time thinking about what could be done better and how. The problem is there's this overwhelming number of problems that trace back to a few root causes that are basically intractable in the current social environment. Take bogus citations, or the use of obsolete data. It's completely standard in epidemiology to write papers that use values for IFR or other key variables that are 8 months old when far more up-to-date data is available. Or what about claims with citations in which the cited paper doesn't support the claim being made, or even contradicts it? I never see this in CS papers; I've seen it regularly whilst reading epi papers.

Or papers where the key claim in the abstract is just fraudulent, like the Flaxman claims about the efficacy of lockdowns, which assumed their own conclusion in the construction of the model and relied on assigning Sweden a ludicrously huge country-specific fudge factor (4000x). Right, and the fact that this was done wasn't mentioned anywhere in the paper or supplementary materials ... you had to read the code to find it (at least the code was open that time!).
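To make the fudge-factor point concrete, here's a deliberately oversimplified toy model. This is not the actual Flaxman et al. hierarchical model, and every number below is invented for illustration; the point is just the mechanism, i.e. how a free per-country multiplier can absorb a counterexample country without disturbing the headline lockdown estimate:

```python
# Toy illustration (NOT the real Flaxman et al. model; all numbers invented):
# a free per-country factor can soak up a counterexample.
# Schematic reproduction number: R_t = R0 * intervention_effect * country_factor

R0 = 3.0  # hypothetical baseline reproduction number

def fitted_rt(lockdown_active: bool, lockdown_multiplier: float,
              country_factor: float) -> float:
    """Toy fit: lockdown scales R down; country_factor is a free knob."""
    effect = lockdown_multiplier if lockdown_active else 1.0
    return R0 * effect * country_factor

# Suppose the fit concludes lockdowns cut transmission by 80%:
lockdown_multiplier = 0.2

# Lockdown countries: observed R fell to ~0.6, attributed to the lockdown.
r_with_lockdown = fitted_rt(True, lockdown_multiplier, country_factor=1.0)

# Sweden (no lockdown) saw a similar fall. Without a country factor the toy
# model would predict R = 3.0 and be falsified; a free factor of 0.2 absorbs
# the discrepancy instead, leaving the lockdown estimate untouched.
r_sweden = fitted_rt(False, lockdown_multiplier, country_factor=0.2)

print(round(r_with_lockdown, 2), round(r_sweden, 2))
```

With enough free per-country parameters, the same observed outcome gets attributed to the intervention in one country and to the fudge factor in another, which is why burying such a factor in the code rather than the paper matters.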

You can tut and say well, that shouldn't have happened, but of course there will be people who are tempted to dress up their chosen conclusion in the clothes of science. The question is really what mechanisms are responsible for detecting and preventing a fall in standards. But in science the only such mechanisms are peer review and journals, which are hardly effective. Everyone is a part of the same system with the same incentives and there are no penalties for incorrect work, so bad papers are getting published in Nature and Science all the time, especially when aligned with the prevailing ideology of those institutions. University administrators are responsible in theory, or maybe granting bodies, but same problem: none of them have any stake in output quality. Ultimately, to fix these problems you need to tie the rewards in academia to the correctness of results, but academia isn't culturally anywhere near ready to even think about that. Academic freedom implies the freedom to be wrong your entire career, of course.



