What you refer to is not Science, it's pseudoscience. Pretty much any crap gets published these days, review standards are abysmal, and there is hardly any replication.
Only through replication can you actually claim there is Science.
The end result is the same. When a big TV channel says "Eating fat is definitely bad because this study says so," nobody will check the actual study and see that it was done on 3 mice over 10 days under completely unrealistic conditions. It's still science, it's just misinterpreted or extrapolated.
I see it every day on HN: people _believing_ we'll all migrate to Mars and terraform it by the end of the century, which means climate change is a non-issue. Or that we'll get fully autonomous cars by 2020 because Elon "Science" Musk said so, and people _want_ to _believe_ in it even though absolutely nothing supports it. It really isn't much more than astrological prediction at that point.
People pick whatever "science" supports their make-believe world and go with it.
Not really true. Tesla is at the forefront of applying machine learning in real-world settings. It's definitely not unrelated to science, in my opinion. If autonomous driving in 2020 is on the timetable, Karpathy (Head of AI) is probably confident that it is possible. Musk is very aggressive on timelines (though he always wraps them in "it is probable that," which to him is cautious but which translates to "almost certainly" in newspapers), but from what I know, he has delivered on most promises (which he sees more as projections of previous trends), albeit a little late.
There's probably some real science happening behind the scenes, experimenting with ML algorithms (Karpathy is certainly up to it). But the challenge of getting it to work sufficiently well in the real world isn't science, it's engineering. Meeting schedules isn't science, it's management. And so on. "Believing in science" has nothing to do with believing Musk will or will not succeed, and I think you would find that the vast majority of people who think he will succeed (me included, if you don't mind "late") don't attribute it to "because science."
I don't see the line between science and engineering as clearly as you do, apparently. Is CERN a science or an engineering project? Drug design? Genuinely curious what qualifies as science. Judging by some published articles, it's not the quality. Applied vs. fundamental also seems like a difficult line to actually draw.
Edit: Especially in ML, a large part of the research is done in companies.
Science is about discovering things via experiments and observations about the world, engineering is about making things that work. There is a tiny bit of overlap.
CERN is a gigantic engineering project used to do a bit of science. Experimenting with different concrete mixes to find one with a set of qualities is science used to let you do some engineering. OpenAI's Dota bots are the sort of thing that might fall in the overlap of both discovering things and making things that work.
Maybe more to the point, "believing in science" means "believing that those experiments and observations reveal true facts", which has nothing to do with whether or not we believe Musk will succeed at his self driving car ambitions.
>What you refer to is not Science, it's pseudoscience. Pretty much any crap gets published these days, review standards are abysmal, and there is hardly any replication.
That's like the argument that the USSR was not real communism, etc.
At some point, science is as science does.
There's no holier, better-checked domain of practice. It is what it is: sometimes there's replication, but more often than not there isn't.
One factor you cannot ignore is the exponential increase in the number of 'scientists.'
In times past, for better and for worse, college was generally reserved for an extremely small, mostly over-performing section of society. And of those, a tiny minority would then go on to pursue the post-grad education that would culminate in becoming a scientist. In today's society college has become high school 2.0, and to some degree post-graduate education is going down the same path. For instance, today more than 1 in 8 people have some sort of postgraduate degree. [1] Sourcing that because it just sounds absurd. In other words, today more people have a postgraduate education than the total that went to university in the 70s.
This has resulted in an exponential increase in the amount of stuff getting published, as well as a simultaneous and comparably sharp decrease in the overall quality of what's getting published. So I would actually tend to agree with you: this cynical view is generally pretty accurate for the state of what passes as science today, but it was not always this way. 'Science' as a whole is in many ways reflective of the mean, and in the public mind even of the lowest common denominator. And both of those have undoubtedly fallen far below what they were in times past.
I mostly agree, but the GP is also correct. There is in fact a "holier than thou" science, and it is that which follows the scientific method: reproduced, empirical, fundamental science. Most garbage published in journals today does not meet that criterion, and economists, psychologists, and even sociologists call themselves scientists when they cannot possibly follow the scientific method in nearly every part of what they study.
I have a hard time believing that the issue is a dilution in the “quality” of scientists, but I would agree that ever-increasing competition for funds and jobs has produced some perverse incentives.
The consequences for publishing something that’s wrong but not obviously indefensible are often pretty low. On average, it probably just languishes, uncited, in a dusty corner of PubMed. It might even pick up a few citations (“but see XYZ et al. 2019”) that help the stupid metrics used to evaluate scientists.
The consequences of working slowly, or not publishing at all, are a lot worse. You get scooped by competitors who cut corners, and there's not a lot of recognition for “we found pretty much what they did, but did it right.” Your apparent unproductivity gets called out in grant reviews and when job hunting. The increasing pace and career-stage limits (no more than X years in grad school, Y as a postdoc, Z to qualify for this funding) make it hard to build up a reputation as a slow-but-careful scientist.
These are not insoluble problems, but they need top-down changes from the folks who “made it” under the current system...
The replication crisis that's plaguing much of the social sciences, and especially psychology, did not cherry-pick studies. It started with an effort to replicate studies only from high-impact, well-regarded journals in psychology. [1] It found that 64% of the studies could not be replicated, leading to the curious outcome that if you assumed the literal and exact opposite of what you read in psychology (e.g., that what is said to be statistically significant is not), you would tend to be substantially more accurately informed than those who believe the 'science.' [1]
But more to our discussion, two of the journals from which studies were chosen were Psychological Science (impact factor 6.128) and the Journal of Personality and Social Psychology (impact factor 5.733). The replication success rates for those journals were 38% and 23%, respectively. I'm certain you know, but impact factor is the yearly average number of citations for each article published in a journal; a high impact factor is generally anything above about 2. These are among the crème de la crème of psychology, and they're worthless.
As you mention PubMed, preclinical research is also a field with an absolutely abysmal replication rate. And once again these are not cherry-picked. In an internal replication study, Amgen, one of the world's largest biotech companies, alongside researchers from MD Anderson, one of the world's premier cancer hospitals, was only able to replicate 11% of landmark hematology and oncology papers. [2] Needless to say, those papers, and their now unsupported conclusions, were acted upon in some cases.
-----
All that said, I do completely agree with you that the current publish-or-perish system is playing into this, but your characterization of the current state of bad science is inaccurate: bad science is becoming ubiquitous. However, I'm not as optimistic that there is any clean solution. There are currently about 400 players in the NBA. If you increased that to 4,000, what would you expect to happen to the mean quality and the lowest common denominator? Suddenly somebody who would normally not even make it into the NBA is a first-round pick. And science is a skill like any other that relies on outliers to drive it forward. We now have a system that's mostly just shoveling people through it and outputting 'scientists' for commercial gain. The output of this system is, in my opinion, fundamentally harming the entire system of science and education. And this is a downward spiral, because these individuals of overall lower quality now work as the mentors and advisers for the next generation of scientists, actively 'educating' the current generation of doe-eyed students. This is something that will get worse, not better, over time.
Speaking of replication, my personal experience in the very narrow field of audio DSP, which is easy to test: 9 papers were impossible to implement, mostly due to missing key details; 6 more had results that only held for specific test signals (total failure on real data); and 3 overstated performance by over 12 dB on real samples. 8 were really good and detailed. Two had the actual test code available (one in printed form). None of the ones with code were any good. :D
(IEEE database around 2005 in noise reduction, echo cancellation and speaker separation or detection.)
No, science is always science. Just because the media portrays certain things as "scientific truth" (or, for that matter, scientifically unsure) doesn't make it so.
Indeed, even if scientists claim something bogus, that doesn't make it science.
So...actually, yeah, it's a lot like the argument that the USSR wasn't real communism, any more than the Democratic People's Republic of Korea is democratic. People claim it to be X, other people take that claim as gospel and use it to paint X as terrible, despite the fact that the people making the claim are full of shit.
The argument that the USSR wasn't really communism isn't a semantic argument of "yeah it was a marxist utopia" but rather one of whether we support governments claiming to be communist. We don't have an example of successful communism, while we do have examples of successful science.
I have literally never seen that argument, but have frequently seen the argument that communism (and/or socialism) is Bad because the USSR was communist, and it was Bad.
Not that I'm saying you haven't encountered the reverse; I'm quite willing to believe that people who run in other circles (or make other claims) encounter different arguments. But yeah, I see the "we shouldn't want communism/socialism, it killed millions of people under Stalin" argument all the damn time.
>Depending on the definition of X, you CAN say that something is not X
That is a good strategy only if you already have a sample of the thing to derive a definition from.
To create a good definition you should examine reality and see how the thing actually behaves first. Only then, once you have a reality-based definition, can you judge other specimens and use the definition to say whether they are X or not.
Otherwise, you just impose idealistic, non-empirical standards upon reality based on an arbitrary definition (arbitrary because it's not based on observation).
The land and the people existed (as a land and as a people) and gave Scotland its name (and the definition its content), not the other way around. It wasn't someone making up the word first and others then checking whether the people in Scotland fit it.
>Only if you define 'science' as 'the thing that people labeled scientists do', can you arrive at your conclusion. I would define scientists as 'people practicing the scientific method'.
In real life, people call themselves scientists, and are called scientists by others, if they have studied for and are employed in such positions, whether or not they "practice the scientific method" and, even more so, whether or not they practice it properly.
So defining scientists as 'people practicing the scientific method' (and, e.g., excluding people with Ph.D.s who practice it badly or chase grants to the detriment of science) is rather the canonical 'no true Scotsman' fallacy.
In that sense, no scientist could ever falsify data, make up a theory and cook their research to support it, or prove something that a company paid them a grant to prove, because "by definition" such a person wouldn't be a scientist.
The concept of science, which is the empirical study of reality, does not change. There are many concepts that can share the same word - is a Scotsman someone born in Scotland, one who moved there, one who shares Scotland's culture and ideals? There should be different labels for each of these concepts but there aren't.
The importance, relevance, trust, and reality of science may change, but the underlying concept does not. Nevermind all the other forces trying to co-opt 'science' for their own purposes.
How many papers and articles describe a purely empirical inquiry into reality and accurately report all shortcomings and sources of error? 10%? 1%? It matters that our trust in "science" may continue to degrade, but none of that changes the underlying concept/ideal.