It is my experience that listening to experts is normally a good idea, but the idea of a scientific journal publishing research saying that we should all listen to scientists and not hold anti-consensus views has more than a little potential for bias on the part of the researchers. Moreover, would it not be odd if those who held anti-consensus views were not overconfident? People of average or lower-than-average confidence would defer to the consensus. It's almost like saying "people who disagree with others more are more disagreeable".
It's saying that people who hold anti-(scientific)consensus views generally believe themselves to be better educated than they actually are. That's a far cry from merely being disagreeable. It doesn't seem to say anything about actually knowledgeable scientists holding anti-consensus views, except insofar as to reiterate the thesis.
Idk, anecdotally I don't think I'm right about my heterodox nutrition views because I'm more educated; I just observe that the consensus view of eating 12 servings of grain per day is wrong, because I feel worse following that advice than eating what I naturally prefer.
Education must come into play. Your own case illustrates this, as there is no such consensus view.
And even granting the possibility that you are exaggerating for effect, it's hard to find a field of knowledge with more contention than dietary nutrition, so one has to be even more diligent about acquiring expertise and not fooling oneself.
Education matters for the consensus (for arbitrary reasons) but in my own life, my prior experiences with food inform my next meal. Education doesn’t come into play at all.
Let me pose this another way: why would you expect me to change the amount of bread I eat because of another year of education? I already had 13 years of teachers telling me it should be the foundation of my diet and it didn't stick. Why would the next year of education be any different?
You seem to be conflating "education", in the general sense, with education in a specific subfield.
I wouldn't expect another year of English literature, calculus, and nuclear physics courses to teach me anything about biology, civics, or textile manufacturing that I could apply to my day-to-day life, either—but a year of education in nutrition science, the practical workings of government, and traditional spinning and weaving techniques? That makes a huge difference.
The modern consensus (or perhaps official, since consensus may be different) is that you should be eating more vegetables than grains.
So you appear to agree with the consensus of experts that grains should not be the basis of your diet. This has been the US government's recommendation for over a decade (https://en.m.wikipedia.org/wiki/MyPlate).
Sure, but "my personal experiences match up with the expert consensus from the past decade" (and really longer; the 2005 food pyramid called for slightly more grains than veggies, not double) isn't a particularly strong argument for disagreeing with the expert consensus.
If what you're saying is "I disagree with expert consensus from the 80s", well then sure, but so does the expert consensus! That's how the scientific process works (slowly).
Which is ironic considering what the discussion is about: you conducted your own scientific experiment on yourself, disproved the consensus-accepted rule, and came to new conclusions.
To achieve that, you needed to have enough confidence in yourself and your ability to measure and feel stuff (which some people think is bad, apparently) versus some consensus which purports to know better.
Courageous new science regularly blasts old knowledge and education to bits, and that's a good thing.
Maybe I’m one of the disagreeable people this article is referring to, but what is a scientific “consensus” exactly? A consensus in your country? A consensus in your region of the world?
Fact is, we live in a postmodernist society. There is no such thing as consensus, only whatever particular bubble and social framework your frame of reference is operating from.
This was more or less my response as well. The 'academy as monolith of truth' is so 20th century. And, of course, disagreeing with Newton is very different than disagreeing with Freud. And we certainly aren't surprised that dupes exhibit Dunning-Kruger.
But I worry about the impact of this line of thinking as we see it repeated in mainstream contexts. "Consensus as measure of truthiness" is incredibly problematic.
Newton wrote enough esoteric works to match Freud. However, it is either skipped entirely or discussed as some kind of hobby, because he was promoted into a superstar of science a long time ago, and anything that stuck out from that public image was cut without any regard for what Newton thought. So there's quite a lot to disagree with for a regular person, but the regular person simply has no idea it exists.
> Consuming foods with ingredients derived from GM crops is no riskier than consuming foods modified by conventional plant improvement techniques.
In surveys, everything hinges on how you phrase your question. In this case, I'm compelled to agree: I don't believe there's a health risk from eating GM crops.
I do think GM crops are bad, though, because they are associated with strain patenting and seedless plants; because they contaminate neighbouring organic plots with non-organic DNA; because they are often sold in conjunction with herbicides; and frankly, because I hate the way their lobbyists act on our legislators.
> I do think GM crops are bad, though, because they are associated with strain patenting and seedless plants; because they contaminate neighbouring organic plots with non-organic DNA; because they are often sold in conjunction with herbicides; and frankly, because I hate the way their lobbyists act on our legislators.
Are most of these an intrinsic problem with "GM crops" in general or with "most GM crops as delivered now"?
The average citizen has no control over policy other than blunt instruments — in this case, rejecting purchasing of GM crops entirely, and advocating for others to do the same.
The nuance between “GM crops are safe” and “the politics and commercial incentives around GM crops introduce significant ancillary harm” is necessarily abandoned.
That is another phrasing issue, because it would be reasonable to take the question as a practical one meant to refer to the things that actually exist, but it would also be reasonable to take it as a theoretical one like you're suggesting.
Well, most of the criticisms don't apply to, e.g., golden rice, which could improve food security and nutrition for a billion people, including preventing severe deficiencies that cause blindness.
> associated with strain patenting
strain patenting isn't really a GM-vs-non-GM issue. As to golden rice-- sure, it's patented, but it's also licensed in a way that most of the people cultivating it wouldn't pay any license fees.
> seedless plants
Golden rice isn't seedless.
> because they contaminate neighbouring organic plots with non-organic DNA
Because of how rice is cultivated and the fact that it self-pollinates, the potential for this is extremely limited.
> because they are often sold in conjunction with herbicides
Not the case here.
> frankly, because I hate the way their lobbyists act on our legislators.
There's not much desire to cultivate it here, though obviously governments abroad do need to approve it. The amount of pro-GM lobbying here is limited, though, because there's really just not much direct fiscal interest in approval (licensing it gratis and letting farmers retain seed neuters that).
Of course, there's some, in part from the desire for an opportunity to show GM as wholesome.
I was careful to hedge myself, with phrases like "associated with" and "sold in conjunction with". I know there are exceptions.
I have no problem with golden rice, except if it's grown adjacent to some organic farmer's plot - that would potentially destroy his business. In fact I'd eat golden rice - but I've never seen it on sale.
While the distinction might be really important from the perspective of long-term policy making (or research, investment, etc), it is also practically unimportant from the perspective of people using heuristics to make decisions in the here-and-now (i.e. most people, most of the time).
Sometimes one also hears that "free-range chicken don't taste better" -- but that's not the point!
(In the case of free-range chicken, the problem is that most of the time they aren't, in fact, "free-range" at all, for a variety of reasons. But the point remains.)
Agree. People on both sides seem to misunderstand the issue. Informed GM opponents will talk about the systemic problems caused by the introduction of GM crops into our ecosystems, farming practices, and economies, and GM proponents will misunderstand those complaints and say "no, you don't understand, they're still healthy". It's like people are talking past each other. Sure, some GM opponents are afraid of how the foods affect them, and perhaps misinformed, but that does not mean that all arguments against GM foods are invalid!
I think it's pretty obvious to everyone what the downsides of people being ignorant are. I would be more interested in seeing some work trying to measure or quantify the damage done by people blindly following a mainstream consensus that turns out to be incorrect.
Much of our culture applauds the anti-consensus actor, the lone visionary who goes ahead despite the disapproval of the crowd. And that's understandable. As Spike Milligan put it, "Progress is like an old car. It takes a crank to get things started." Progress depends upon the unreasonable man. What looks like "knowledge over-confidence" turns out to be "justified confidence" in widely unrecognised truths. Further, in entrepreneurial circles, if you're prepared to risk your own money and time on a crazed venture, then that's celebrated.
But in reality you create risks to externalities far beyond your own wallet. Such a culture absolves the reckless engineer of causing great harms because they are "a genius" and we want to ride their coat-tails. Some of this seeped into Silicon Valley culture, where many tech people see the digital world as their own personal laboratory to experiment on the masses like bugs in a petri dish.
This could be contrasted with the socially conscious non-consensus actor, who departs from crowd wisdom at great personal loss and just wants to be left alone to do their own thing. That's a different breed of individualism. It's often rooted in fallibilism rather than over-confidence. Such a person, out of wisdom, knows how stupid they can be themselves, and sees the crowd as no better.
I would say the dangers at the other end of the spectrum, of blind unthinking assent, group-think, and conflict avoidance, are much better documented and understood. Whether that's a negative or positive depends on the context. On a frontier, rugged individualism is an asset, while more measured conformity tends to work in long-established communities. The contrast between the North American frontiersman and the Nordic observer of small-town "Jante Law" is marked.
That would be interesting. I know how it has personally affected me and others. All I can gather is that, over time, the people blindly following mainstream consensus are isolating themselves together and getting more and more radical ideas.
to tease out the nuance, the mechanism of mainstreaming isn't an extension of stupidity (ignorance is certainly a more accurate term), but of limited focus and stress. most people don't spend much time thinking about the wider world and how it should work very deeply, so they follow the herd on most things, because it's a decent heuristic and it has a social benefit.
it's a literal luxury to have enough time and mental space to think for more than a few minutes on any given issue. that's one of the most valuable "legs up" that wealth gives you, not so much the money itself, but the room to think (and unfortunately, often to try to manipulate the world around you in a self-serving/self-centered way--the corrupting of personal integrity that's as old as time).
yah, often those moments are too short for deep thinking anyway, which really requires contiguous, distraction-free hours. the more pertinent issue is that devices tend to themselves fragment potential contiguous blocks of time and condition our brain for shorter attention spans (just like what happened to me right now, as i was in the middle of doing something else, but stopped to respond!).
Similarly, people don't want to evaluate interesting restaurants; they want to jump on the bandwagon known as Yelp or Google reviews. There's something wrong with jumping on the bandwagon, but there's more right than wrong.
it depends on what you're looking for psychologically. chain restaurants are the most popular, but that's because they're "safe". other folks want adventure, to wander off the beaten path. sometimes that brings great reward and other times it doesn't. it does, however, provide the adventurer satisfaction in the adventure itself. similarly, the mainstreamer gets comfort from the safety of their choice, apart from the enjoyment of the food.
note that these aren't mutually exclusive groups of people, but typically phases of our own selves. for example, i rarely want chain food, but every once in a while i do. and sometimes i go and enjoy it because the people i'm with want it and enjoy it.
This is always going to reflect, in large part, who is in power in a society, since the mainstream consensus is so heavily influenced by the top. And in societies where people, by inches or by miles, become unable to speak freely - the top can effectively dictate consensus.
In a society where those in power are not only primarily motivated to achieve a healthier society and nation, but also have the intelligence and wisdom to achieve it, following the mainstream consensus could achieve something quite remarkable.
In a society where those in power are primarily motivated by their own interests, or lack the wisdom or intelligence to positively lead society forward, then of course everybody following the mainstream consensus means you're headed towards something quite awful.
I'm generally pretty biased against police shooting someone with a knife. Until one time I saw a video of one guy with a knife stabbing 5 police officers, with their guns drawn, before any of them fired a shot. Sometimes your intuitive sense of what is possible is actually quite wrong.
This article is not about ignorance per se. It is about views in opposition to well-established consensus, e.g. "rejection of vaccines or opposition to climate change mitigation policies." These views are highly damaging to society and individuals.
Isn’t this tautologically true when consensus views are assumed correct? (then the consensus view will be judged as appropriately confident while the non-consensus views will be judged as inappropriately confident)
It’s also interesting to consider the human psychological factor that the stronger the consensus, the more effort it takes to hold a dissonant view — and plausibly one must project greater confidence in order to hold steadfast. This will only amplify, the more one is hounded with “consensus”.
Consensus views might well be right! But is this study telling us anything new/interesting, beyond regurgitating common sense with a veneer of scientism?
They divide questions into "objective" and "subjective" to solve this. They are testing "objective knowledge" using simple science quizzes with questions that are indeed pretty much factual and not under any dispute. I downloaded the questions to check that. For example: "All radioactivity is man-made: true/false", "Venus is the closest planet to the sun" etc. They call this objective knowledge and it seems mostly sound.
There was only one question I noticed where they appear to be asking for agreement with an (incorrect) consensus. They included in their "objective knowledge":
COVID-19 is transmitted mainly via small respiratory droplets through sneezing, coughing, or when people interact in close proximity. True
But this isn't right (in my opinion). Although they made the question ambiguous by adding the word "mainly", the data seems inconsistent with SARS-CoV-2 mostly spreading this way, and over time opinion shifted towards a much greater emphasis on aerosols that may be created even by people who are apparently asymptomatic. Nor was this unknowable at the time: SARS-1 apparently could spread through drainpipes in apartment blocks, so if SARS-2 is indeed similar to SARS-1 it makes sense it could spread in similar ways.
At any rate there are much bigger problems with the study than how they determine people's levels of scientific knowledge. The whole study is based on the idea that a bunch of highly controversial ideas are in reality "consensus"; e.g. they consider it a consensus that masks are highly effective. But it certainly wasn't just before COVID; the consensus was the exact opposite. What is the point of even caring about a consensus of people who will change that consensus overnight without any new research findings?
> Isn’t this tautologically true when consensus views are assumed correct?
I worried that this study might be circular reasoning in that way. Instead they quizzed knowledge on related fields but not directly connected to the disputed idea.
The people involved bet higher amounts on their ability to score well on objective measures of related sciences, but scored worse.
Isn’t that still circular though? Knowledge of the mainstream research isn’t the same as “knowledge of what’s correct”; the person who holds the non-consensus view may have equal or higher levels of knowledge about the path that got them to non-consensus.
At some level you might reasonably expect what you described to be tautological: the person holding the mainstream view knows more about the things surrounding that view. But do they know the alternatives?
It'd be interesting to compare the knowledge level the person holding the consensus view has about the non-consensus view, and vice versa.
> It'd be interesting to compare the knowledge level the person holding the consensus view has about the non-consensus view, and vice versa.
That's a great question -- harking back to the idea that for the best incarnation of debate one should be able to argue one's opponent's case to their satisfaction, and then make one's own counter.
The unfortunate problem is that it's often far easier to cook up BS than to bust it, so the anticipated effort tilts the scales massively. But in cases where there are specific/highlighted competing alternatives, it would be very interesting to study this.
> Isn’t that still circular though? Knowledge of the mainstream research isn’t the same as “knowledge of what’s correct”; the person who holds the non-consensus view may have equal or higher levels of knowledge about the path that got them to non-consensus.
No, because the things they're asking about are middle school level science stuff.
What the study is describing is a psychological trait known as dogmatism.
You might be familiar with the words "dogma" or "dogmatic" in their religious sense: a set of beliefs that members of a religion are expected to accept as true.
But there is another, broader way that "dogmatic" is used that does not necessarily have to do with religious dogma.
In psychology, dogmatism is defined as exhibiting great certainty about the correctness of one's views and an unwillingness to consider new evidence, or an unwillingness to adjust one's views in light of new evidence.
Last year I read about a fascinating 2020 study that measures dogmatism in a novel way, and I created a little web app that adapts the study's technique to determine your own level of dogmatism:
Going back to the 2022 study that OP posted, one thing that irks me is the way the authors lump all religious belief into one bucket:
> Because several issues that we examine have come into conflict with religious thinking, and because religion can itself be a polarizing factor for attitudes and beliefs, we also test for an attenuation for issues more associated with religiosity.
I'm Catholic, so that makes me religious. But I have friends who are Jews, Muslims, Buddhists, Baha'i, etc., and their belief systems are vastly different than mine. Yet we all have this trait "religiosity" that really tells us very little about what any of us believe. The question the study uses -- "How important is religion in your life?" -- to gauge religiosity, without making any distinctions between vastly different religious beliefs and practices, is bizarre to me.
I can't help but be skeptical that religiosity in the abstract tells us anything meaningful.
Leaving aside whether Catholics, Jews, and Muslims have "vastly different" worldviews or religious beliefs, a question like that still has meaning in the West, where religion is increasingly under attack. I think it's already the case that the practical difference between adhering to a normatively conservative religion versus not is greater and more important than the differences among such religions.
The paper seems to have largely equated "objective scientific knowledge" with liminal consensus. For instance, among their "objective scientific knowledge" questions was one asking whether "The novel coronavirus was unleashed in a laboratory in Wuhan and spread from there." Another was "The numbers of people that have died from COVID-19 are artificially inflated."
The latter was at least removed from the results, but emphasizes the point nonetheless that they found it appropriate to include questions that had nothing to do with science. So you end up with results that affirm that people who disagree with consensus disagree with consensus.
> For instance, among their "objective scientific knowledge" questions was one asking whether "The novel coronavirus was unleashed in a laboratory in Wuhan and spread from there." Another was "The numbers of people that have died from COVID-19 are artificially inflated."
This is an incorrect reading of the paper. Those are statements intended to determine whether a participant has anti-consensus beliefs, but do not occur on the objective measure we're talking about.
Then participants were asked to bet on their ability to pass a middle-school level science test. People with anti-consensus beliefs bet higher on their performance and did worse on the test compared to people without those anti-consensus beliefs.
These are the test questions they were asked:
1. True or false? The center of the earth is very hot. True
2. True or false? The continents have been moving their location for millions of years and will continue to move. True
3. True or false? The oxygen we breathe comes from plants. True
4. True or false? Antibiotics kill viruses as well as bacteria. False
5. True or false? All insects have eight legs. False
6. True or false? All radioactivity is man-made. False
7. True or false? Men and women normally have the same number of chromosomes. True
8. True or false? Lasers work by focusing sound waves. False
9. True or false? Almost all food energy for living organisms comes originally from sunlight. True
10. True or false? Electrons are smaller than atoms. True
11. True or false? All plants and animals have DNA. True
12. True or false? Humans share a majority of their genes with chimpanzees. True
13. True or false? It is the father's genes that decide whether the baby is a boy or a girl. True
14. True or false? Ordinary tomatoes do not have genes, whereas genetically modified tomatoes do. False
15. True or false? Sound moves faster than light. False
16. True or false? The North Pole is a sheet of ice that floats on the Arctic Ocean. True
17. True or false? The ozone layer absorbs most of the sun's UVB radiation, but not UVA radiation. True
18. True or false? Nitrogen makes up most of the earth's atmosphere. True
19. True or false? Antibodies are proteins produced by the immune system. True
20. True or false? Pathology is the study of the human body. False
21. True or false? The skin is the largest organ of the human body. True
22. True or false? Ligaments connect muscles to bones. False
23. True or false? All mutations to a human's or animal's genes are unhealthy. False
24. True or false? Uranium is an element found in nature. True
25. True or false? Radioactive milk can be made safe by boiling it. False
26. True or false? The process of splitting uranium or plutonium atoms to create energy is called nuclear fission. True
27. True or false? Venus is the closest planet to the sun. False
28. True or false? It takes 24 hours for the earth to orbit the sun. False
29. True or false? A "Red Dwarf" is a kind of planet. False
30. True or false? The universe is expanding. True
31. True or false? Earth is the only place in the solar system where helium can be found. False
32. True or false? Gravity is the theory that serves as the foundation for modern biology. False
33. True or false? The earliest humans lived at the same time as the dinosaurs. False
34. True or false? "Survival of the fittest" is a phrase used to describe how natural selection works. True
No, you're the one who's suffered an incorrect reading here. They created different objective knowledge questions for each of the different studies (there were 5) within this paper. I am referencing the COVID "objective knowledge" quiz. "Subjective knowledge" for each study was determined by a single question where participants were asked to rate their knowledge of the various topics on a scale of 1-7.
The questions for this ("objective knowledge" of COVID) begin on page 36 of the supplementary materials. [1] I predictably offered the most extreme example, but there are various other questions that are also more about liminal consensus than "objective fact".
But, perhaps most importantly, the observed effect from this study was extremely small. And with only 34 questions, even a small number of questions that measure anti-consensus views, rather than scientific knowledge, is enough to create a false effect through what's basically an obfuscated tautology: people who hold anti-consensus views, hold anti-consensus views.
I now see the question you mentioned from study 5. IMO study 5 is not the interesting substudy. As you mention, you cherry-picked the extreme example.
> But, perhaps most importantly, the observed effect from this study was extremely small.
The substudies that looked at those set of 34 questions above had an extremely significant result-- both statistical significance and magnitude of effect. I also don't see anything too controversial in that set of 34 questions. Do you?
Yes. It's a complete non sequitur to ask somebody to self-rate their knowledge on e.g. climate change, and then try to objectively measure that knowledge by asking e.g. what the largest organ in the human body is. It doesn't even make any sense. It's like me asking you to self-estimate your knowledge of JavaScript, and then giving you a quiz on C# to measure that objective knowledge.
The study is also filled with various issues. For instance, they removed everybody who completely agreed with the scientific consensus in studies 1-4 (buried on page 44 of the materials and methods supplement), who undoubtedly also rated their objective knowledge highly. They drew participants from outlets like Amazon Mechanical Turk, where people are paid peanuts to carry out menial tasks ($0.85 for this study). That's obviously not going to be a representative sample of society, let alone America. The paper claimed to have preregistered the study, yet didn't provide a link to the preregistration, which completely negates the point of preregistering. And so on.
Something I find quite troubling is how increasingly common studies of this sort are. This study is mostly junk, but it's enticing clickbait junk that will undoubtedly bump up this journal's impact factor as others unquestioningly cite it, with the goal of generating interest even if in lieu of science. What exactly were the peer reviewers/editors doing? It seems that so long as one concludes the right thing in exciting enough fashion, what editorial process there is becomes quite deferent, a la Sokal.
> by asking e.g. what the largest organ in the human body is.
Which is part of why they had subscores for related fields. Obviously we can't quiz about the actual controversy, because we disagree on the facts.
> It doesn't even make any sense.
I think it's interesting that people who disagree with these consensuses have, on average, lower overall literacy in the sciences -- both adjacent sciences and other sciences -- but believe they have above-average literacy and will outperform the average on tests.
Ignoring your attempt at predictably headed ad hominem, you again cannot make your 'conclusion' from the study. They used a non-representative sample, removed the results of those who fully agreed with the consensus, and just generally did everything trying to massage their numbers into the conclusion they wanted to make.
Ironically, the numbers don't even support that if you do use the subscales. In all studies the overall effect from opposition was a fraction of a single point difference. In most studies opposition to the consensus in climate change was also predictive of a higher than average level of field-specific knowledge.
> Ignoring your attempt at predictably headed ad hominem
???? ad hominem? Where? I'm saying it's very difficult to quiz about the actual controversy itself, because then we get very close to the point of disagreement and risk confounding. The research explicitly mentions this aspect of design.
> They used a non-representative sample, removed the results of those who fully agreed with the consensus, and just generally did everything trying to massage their numbers into the conclusion they wanted to make.
I don't love the use of Mechanical Turk for social sciences. It's still an interesting finding. Of course, more and higher quality research should be used to confirm the effect and gain additional nuance.
> In all studies the overall effect from opposition was a fraction of a single point difference.
The overall effect from opposition was a fraction of a single point difference per unit of opposition.
You're definitely correct there on my misreading of the tables. To further clarify, I also decided to see precisely what "points" meant rather than continuing to just skim. They chose to rate true/false answers on a -3 to +3 scale driven by the respondent's certainty. This means one question wrong, out of 34, was able to drive up to a 6 point difference at max certainty. And so, somewhat serendipitously, my point remains, with some 'modification': in no case was opposition able to explain even a single field-specific question missed, at least not at max certainty.
As for the replicability - again, this study arbitrarily removed people fully in line with the consensus. That makes it fairly safe to say that replication is a nonstarter. But I'd also add that another red flag for social science papers is when they collect a large number of variables that end up having no relevance to the published conclusion. Large numbers of variables is a key resource in p-hacking. And this paper was collecting all sorts of data that went completely unused and had nothing whatsoever to do with what they ultimately chose to publish.
> opposition  -0.66479  0.06842  2130.90896  -9.717  <2e-16 **
Looks like each point of opposition was about 2/3rds of one more incorrect true/false question. So the people who were most opposed scored >3 questions worse on a 34 question test, on average.
> again, this study arbitrarily removed people fully in line with the consensus.
Median filtering and trimming saturated measures is common in research like this-- hopefully designed into the original protocol. I do agree it would be nice to see their preregistration.
But, weird things happen at the tails-- truncation effects, etc.
As mentioned, I was more interested in the impact on the subtopic-specific results because of what we discussed. Giving a quiz on American history to judge your knowledge of French history is obviously not a reasonable idea, even if the skill-set overlap would probably give at least some weak correlation. In the field-specific binarized case (which removes the -3 to +3 noise), the impact is 0.09 points. So that translates to a fraction of a question difference in results.
And while I'm aware of studies trimming extreme outliers, regardless of which side they end up on, I'm unaware of any study entirely removing a segment from its sample, from one side only, a segment which is critical to your entire hypothesis, and doing so without any explanation, let alone justification, whatsoever. One thing I'd observe is that the observed effect is small enough that if this culled group had a knowledge-specific score below the mean, then it's likely that their entire conclusion would be invalid.
One other issue we have not discussed is that the methodology not only resulted in a very non-representative sample, but also was indirectly testing something else. The surveys were done on the internet, and all of the general knowledge questions (besides the COVID ones, which were just...) have answers which can be looked up in a matter of seconds.
> I was more interested in the impact on the subtopic specific results because of what we discussed.
I think they're both interesting. Broad overconfidence in performance in basic science combined with low performance is an interesting characteristic of a population. The fact that this is represented in those with contrarian views is interesting.
> So that translates to a fraction of a question difference in results.
The subscale finding just shows that this same phenomenon appears to also extend to the subjects they are contrarian about. And it's about the same magnitude of effect, because there are very few questions on the subscale.
On the big test, a participant does about 2% worse per unit of disagreement. On the subscale, they do about 1.8% worse per unit of disagreement. The effect appears to have the same magnitude and is statistically significant in both cases.
This is an invitation to do studies on specific fields of disagreement with better samples and more questions on the subscale. But this early research casts a wide net across many types of anti-consensus view. Study 5 is a small, possibly flawed step in this direction.
I'd be particularly interested in a 2 variable analysis that attempts to model how one's overall objective performance and level of disagreement models objective performance in the specific field where they disagreed.
> outliers, ...
This is all moving goalposts when we were originally discussing the question set. I've already agreed that sociology and psychology research via Turk is problematic and faces many confounds and selection problems.
Something I recently considered is that many of the questions, even the "good" questions, ultimately test ideology more than knowledge. The obvious example would be something like "Humans share the majority of their genes with chimps." This is of course factually true, but it seems much more likely to test ideology than scientific knowledge, in part because of how it is phrased. Imagine the question were framed as "Scientists claim that humans share the majority of their genes with chimps." Now you are no longer testing agreement with the scientific view, but knowledge of it.
And perhaps a bigger issue is that there were literally zero questions that would do the same "trick" vice-versa for somebody who ideologically agrees with one of the topics, but otherwise lacked much knowledge. Testing this would have been quite easy by simply throwing in questions that sound like they fit the consensus but have an important aspect that makes them false. An example would be "Evolution is a process of improving a species which, over time, may lead one species to become an entirely new one." That is, of course, false - but somebody of low knowledge and high consensus agreement, would likely consider it to be true.
My personal hypothesis would be that extreme views tend to be associated with high confidence and relatively low knowledge. And this could be extremes of agreement or disagreement with a topic. As Bertrand Russell put it, "The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt."
If it's factually true, it's not testing ideology.
> but it seems much more likely to test ideology than scientific knowledge, in part because of how it is phrased
Is there anyone credible who disagrees with the statement here?
I think a bigger issue is that it may be testing not knowledge but careful, critical reading. I can score well on this test, but my flippant quick answers are not very accurate. I am not surprised those that are worse at critical reading might be more likely to have anti-consensus views.
> My personal hypothesis would be that extreme views tend to be associated with high confidence and relatively low knowledge.
That doesn't seem to be the case here; while we don't know the "1" full agreement results, the rest seems to be a pretty dang linear trend line.
Nobody is forced to accept a fact if they do not want to. There's a large gulf between awareness of knowledge, including facts, and agreement with such. This is an especially important nuance in this quiz, where the precise implications/meanings of knowledge vs ideology are a critical part of the study.
Imagine that there was a secular expert on Islam, and he was quizzed on Islamic ideology in a similar fashion, such that his answers were taken to imply belief. Assuming he is being as accurate as he can in his responses, he would end up scoring as completely ignorant of Islam, as he marked all beliefs as false.
I disagree that there's a clear trend here, even without the 1s, because there were clear exceptions, such as climate change, where, even by their metrics, anti-consensus views were predictive of greater knowledge. So you end up having to, at a minimum, limit the stated claim to specific fields.
> Nobody is forced to accept a fact if they do not want to. There's a large gulf between awareness of knowledge, including facts, and agreement with such. This is an especially important nuance in this quiz, where the precise implications/meanings of knowledge vs ideology are a critical part of the study.
OK, well, here they expressed disagreement with stuff that is not just the scientific consensus, but the overwhelming agreement. If you want to phrase it as disagreeing rather than knowledge being lacking-- I think this is unhelpful. I do not think the nuance you're driving at here is worth complicating the questions or instructions.
> because there were clear exceptions such as e.g. climate change
No.
All 7 areas of controversy did worse on the entire test. It was not statistically significant for climate change and evolution. It was highly statistically significant for 4 of the remaining 5.
6 of the 7 did worse on their own subscales (statistically significant for 4 of these 6), with effect sizes of -.83, -.65, -.28, -.82, -.88, -.55; climate change had a non-statistically-significant finding of an effect with slope .03. I do not consider this a counterexample.
You've been really advocating for study power and significance of results... when it favors your argument.
It is absolutely worth making the questions clearer, because this is the entire point of the study. Do people who do not agree with the consensus on something lack knowledge, or do they simply not agree with said knowledge? Conflating the two in your own study does nothing but set yourself up to observe a tautology (those who disagree with a consensus, disagree with a consensus), which a cynic may interpret as not entirely accidental.
And no, I'm not referencing statistical power. If I haven't made myself clear, I think this study and its numbers are both going to fall well into the endless black hole that is the replication crisis, which is especially pronounced in social psychology, a field with an aggregate replication success rate in the 20s. What I am saying is that you can't make your broad claim based on what I suspect are deeply massaged figures.
Sorry, no. Overwhelming agreement along with readily self-observable characteristics is good enough. You can claim that the kidney is actually the largest organ of the human body, but everyone else agreeing is good enough-- especially when we can look at pictures from other people we trust, arrange to check a cadaver ourselves if we really care, etc.
I mean, I guess it's possible Big Skin (tm) has rigged all the measures and faked everything. /s
> What I am saying is that you can't make your broad claim based even what I suspect are deeply massaged figures.
One submeasure on one subpopulation having an outcome that does not support the claim but is also not inconsistent with that claim doesn't invalidate the claim.
You are talking about:
* A non-statistically significant finding
* Showing an opposite slope, but of tiny magnitude compared to the slopes in the other direction
When Galileo made his case for a heliocentric universe, there was no silver bullet he offered. He simply felt it to be more probable than the geocentric model, in spite of the evidence of the time weighing against it. Would you thus claim he lacked knowledge (of the geocentric universe) if he chose to respond false to a question "True/False: Everything in the universe revolves around the Earth"? Measuring agreement is different than measuring knowledge.
As for your hypothesis, what would you propose as a null hypothesis to test it? It seems to me that it would be "Somebody who opposes a scientific consensus will score about the same in a test of general knowledge as somebody who supports it." And the climate change example is consistent with that null hypothesis, which would thus reject your hypothesis.
This kind of research just looks like political maneuvering to attack dissenters. Part of me thinks that researchers should instead work on figuring out how to better convince and educate people. But maybe a lot of really smart people have already investigated that problem and figured out that writing research papers which undermine their opponents is the most effective tactic.
The core problem is that most of the division is rarely about any object-level beliefs, it's almost entirely about tribal politics and power. What a boring game.
You've heard of "blue skies research"; this is what I call "the sky is blue" research: working very hard to show something that is pretty much self-evident. I'd love it if we could learn how to change people's minds. Calling people stupid hasn't worked.
Actually, I think it's important to not just hand-wave but understand the overall psychology of it.
I've known very informed cranks (some of whom are on the edge of being useful contributors to the field by proposing alternative mechanisms and rigorous tests) and poorly informed cranks.
But it is an interesting, if not entirely unexpected, finding that people with anti-consensus views have much lower scores of knowledge in related but not directly connected scientific fields, yet are willing to bet higher amounts on their test scores.
This is a phenomenon any scientist who has been practicing for any length of time is likely to have encountered. I can't even recall all the statements from ignorance (not pejorative--they simply weren't educated on the subject matter) people have made to me about various physics and engineering topics that have been settled for decades in knowledgeable circles.
The people who debunked these theories were quite familiar with these existing models and how to build rigorous tests to differentiate these cases from new ideas.
The study tells us that most detractors are not people like this, but instead people with below-average understanding of the field and related fields-- and who are willing to bet more on their performance of objective tests of those fields than the non-detractors.
Sadly, the spherical "scientists" in a vacuum rigorously and unselfishly digging to get closer to the "truth" are only a fantasy found in school textbooks. What is called "science" has no claim on "truth" or "reality"; it's a specific method of creating virtual models that are many, many degrees less complex than the "real thing", and then treating the real thing as if it were its model to achieve some outcome. In some way, that's how humans adapt to the environment, and change it. Even a commoner can read 18th and 19th century works that spread those stereotypes about "Reason", "seeking the truth", etc., and see how rosy the initial reasoning was. Despite the fact that those sources are long forgotten by the public, the status quo of everyone's education has been formed by them.
Phrenology was not an "error", a "bad page", yada yada on the Glorious Path of Progress. It was The Science. (Well, maybe some "overconfident" scientists disagreed with it for personal reasons, but they were "not representative".) It was evident that all those unfamiliar people from far-away lands, and all those lower-class social parasites, were less developed than the Educated Man armed with Scientific Knowledge, and it was perfectly reasonable to try finding the source of such a pitiful condition inside their skulls.
It shouldn't surprise anyone that multiple generations taught to think like that resulted in people casually talking about races and signs of degradation at dinners or, well, political rallies. They were given that model as truth.
By the way, archaeologists and biologists still use observations and concepts from phrenologists without ever “cancelling” them. Squint your eyes one way, and it's “pseudoscience”, squint them the other way, and it's “science”.
Or the idea that makes any pop-sci nerd twitch and giggle in anticipation: so there is no success in finding the material soul, but maybe we can study what that soul thing does by looking at its interactions with material world, and using scientific method? Let's call that… right, psychology.
Your response is completely orthogonal to what I said. I'm not too interested in hashing out the basics of the philosophy of science for the purpose of pedantry on the internet. That said--
Specific theories and predictions of phrenology were debunked and discarded with evidence. That happens to make almost all of the utility evaporate, but that's not to say there's none left.
Of course, there's a whole lot of predictions and data from alchemy that became the body of knowledge of chemistry, too, even if a lot more evaporated.
The other thing to point out is that the overall scientific method itself has evolved and is still evolving. Debating various kinds of errors and their nature made in the 18th century when even early 19th century practice was sufficient to refute them is not likely to be productive.
> The risk of bias in this research seems rather high. Of the form of "our beliefs are correct and everybody who disagrees is underinformed".
It's not circular reasoning like this. They asked a bunch of middle-school science level questions in related fields.
The interesting bit isn't that people who disagree are misinformed. It's that they assess that they will do better on the test, and bet more that they will do better, than people who don't.
I always find looking at the methodology for surveys like this far more interesting than reading the papers. Two things immediately stand out. First, the study selected participants from places such as Amazon's Mechanical Turk and Prolific Academic, both services which pay participants peanuts to engage in various menial tasks.
But perhaps more interesting are the questions they chose to ask to determine "objective knowledge of scientific facts." Among the questions used to deem this was one which asked whether "The novel coronavirus was unleashed in a laboratory in Wuhan and spread from there." I can't, for the life of me, understand why people's trust in science is on the decline.
This is an extremely smug and tone-deaf take on the deterioration of trust in public health institutions and messaging.
They reference COVID in their abstract, so it's fair game to discuss here: where was the scientific objective-truth-machine when the Noble Lies[0] were being told? It was silent and complicit.
The general public is not as knowledgeable as experts, but they can sense when they're being lied to, or when political bias is distorting the messaging, and they don't like it. When they can't trust the experts, they fall back on their own lack of knowledge, for which they are now being mocked. But lying to and manipulating people means they won't trust you, no matter how much of an expert you provably are.
This study seems to want to make it simply about the Dunning-Kruger[1] effect, which puts most of the blame on the average person, while ignoring why people feel like they can't trust experts in the first place.
What sense is this? Do you have evidence of its existence? An explanation of how it works? Because in my experience the public is severely deficient in critical thinking skills. They don't really have a good sense of truth or falsehood. They make guesses about who's telling the truth based more on presentation, politics, or popularity than on facts or logic.
Right now, there's an anti-intellectual and anti-authoritarian fashion causing many to conclude that anything a professional scientist says is a lie, but it would be wrong to cherry-pick a few examples where they're right as evidence of anything. How do such people "sense when they're being lied to" other than by using heuristics (see above) which are more likely to be gamed by entertainers and politicians than by scientists?
That people (as a whole) are stupid is old news. However, it is not because they fail to behave as the rational robots you seem to describe.
It is totally possible to tell that someone is bullshitting you without having a university degree. Maybe that someone is using salesman language to prove a point (maybe unwittingly, without knowing better). Maybe they are quite obviously demanding a pledge of allegiance to some list of statements without revealing any reasoning behind them (maybe unwittingly, as they never get to the reasoning themselves). Maybe there's a giant gap between what they say (emotional slogans) and what they do (filling in checkboxes or making some numbers line up at any cost). There are plenty of chances for an ordinary person to have such experiences in ordinary life, and to become familiar with them.
Also, I'm a bit puzzled by "anti-intellectual and anti-authoritarian fashion". In my view, anti-intellectualism is a big topic that stretches from making pop-cultural works "accessible" to "shut up and calculate", and is not a recent fashion. Neither is anti-authoritarianism, which might be the central cause here. It's not the science itself of which people are skeptical (quite the contrary: vague "science" has been the basis of cosmological beliefs for a long time; even snake oils are often "scientific"), and not the intellectuals (even though modern ones should at most be called, erm, "columnists"). There is just a doubt that they value telling the (whole) truth more than they value their position, however unremarkable, in societal, professional, and bureaucratic chains of power, along with government workers, journalists, and so on.
You mischaracterize the study. The interesting finding is not that people who hold anti-consensus beliefs have lower scores on objective tests about related fields. It's that they are willing to bet more on their performance on those objective tests despite ultimately scoring worse.
That is, there's a much larger difference between their subjective assessment performance and objective performance on the test metric (of both that field and other fields) than in people who don't hold the anti-consensus belief.
Could it be a kind of survivor bias? Of people who come up with anti-consensus views, you only see the really confident ones; those less confident in their unorthodox views probably don't tell others so much.
There are two axes here, each with two options: Axis 1 = "I understand" or "I don't understand". Axis 2 = "I think I understand" or "I think I don't understand". That leaves 4 options. (Obviously, these can be thought of as two continuous axes as well, and I'll address that later.)
Option 1: "I understand and I think I understand" -- good, you understand, you'll probably come to the same conclusion of the experts who also understand, since you're working with the same data they are.
Option 2: "I understand, but I'm not sure how well" -- good, you understand, see option #1 with the added benefit that you'll look for more evidence to increase your confidence.
Option 3: "I don't understand, and I don't think I do" -- good, so you at least won't form opinions because you know you aren't informed, and maybe even will be motivated to look for evidence to move yourself into categories 1 or 2.
Option 4: "I don't understand, but I think I do" -- i.e. overconfidence. Your conclusions will likely be wrong, since you don't understand the data you're basing those conclusions on, but since you think you do, you will stick to those conclusions in the face of contradictory evidence or expert advice.
So, yes, I would fully expect the result this study found, from a purely logical analysis.
Now, as I said, both confidence and understanding can be thought of as spectra rather than discretized categories. In that case, you get a continuous transition between those resultant options, but generally you still get three discrete classifications in that space: "I act like I know but am wrong", "I refrain from forming an opinion", and "I act like I know and am correct."
So, do we blindly ignore common sense and instinct, handing it over to supposed experts that have been wrong repeatedly about very fundamental concepts like diet and education? So many accepted practices promoted by experts have been and are just wrong. New information is gathered and past certainty becomes present doubt.
Of course, blindly ignoring information because it is outside of your understanding is equally incorrect. There is no easy solution to this issue, but generous skepticism is wise, especially when someone purports to be expert and certain about something.
Is this particularly meaningful? It seems to me that a critical assumption underpinning this study having any meaning is that someone's ability to answer consensus questions correctly is correlated to their ability to answer non-consensus questions correctly. This does not necessarily follow: a large number of the ideas which are now "objective truths" were at one point non-consensus fringe views (e.g. germ theory, heliocentrism, lead paint being bad, climate change, etc.), suggesting that link doesn't exist all that strongly, in my opinion.
I don't know anything about the subject, but I scored pretty high in overconfidence and I totally disagree with this scientifically sound, peer reviewed paper...
All scientific issues are controversial by definition. What is obvious is not called science. Even prehistoric men could throw rocks, and knew where they would fall.
> The consequences of these anti-consensus views are dire, including… death.
Hmm, so this is what makes people die. Interesting.
“Science is the belief in the ignorance of experts.
When someone says, “Science teaches such and such,” he is using the word incorrectly. Science doesn’t teach anything; experience teaches it. If they say to you, “Science has shown such and such,” you might ask, “How does science show it? How did the scientists find out? How? What? Where?”
It should not be “science has shown” but “this experiment, this effect, has shown.” And you have as much right as anyone else, upon hearing about the experiments–but be patient and listen to all the evidence–to judge whether a sensible conclusion has been arrived at.”
The study tested the actual knowledge of people on these subjects. They found that people who disagreed with the scientific consensus knew less about these subjects than people who agreed with the consensus, but overestimated their confidence in their knowledge of the facts. So in other words those who agreed with the consensus were better informed and had a more accurate assessment of their level of knowledge.
Scientists need to stop writing papers like this one until they get their own house in order. From a quick read through, I can see quite a few aspects that come across as misleading to me:
1. It says:
"Although the knowledge gained and shared by the scientific community about [COVID] gradually increased, public health professionals prescribed traditional, time-tested, and general epidemiological measures to try to mitigate its spread ... consensus on how to mitigate viral contagion was well established even at the beginning of the pandemic."
But this is nonsense. Nothing about global lockdowns was traditional, time tested or general. Nor was there any consensus about doing these things, with the WHO having previously strongly stated in 2019 that during a pandemic border closures, contact tracing and quarantines were "not recommended in any circumstances".[1] The pandemic started with public health people saying that closing borders or avoiding people who had arrived from China would be racist. They also opened by stating that there was no evidence masks were useful (in fact, because that's what the available research said at the time), then flipped almost overnight to masks being so useful they must be mandated. No new research came out that triggered this. There were quite a few other inversions on apparently basic topics but there's no need to list them all.
Given that sequence of events it is quite crazy to describe what happened with COVID as traditional and time tested, or to claim that there was a well established consensus right from the start about what to do. No such consensus existed and never did: there were people both inside and outside epidemiology opposing pandemic measures the whole time.
2. They say:
"This is why self-reported understanding decreases after people try to generate mechanistic explanations, and why novices are poorer judges of their talents than experts (33, 34)."
This second claim is key to the whole paper's argument, but there are only two citations for it. Citation 34 is of Dunning-Kruger. The DK study has bizarre and severe flaws that make me wonder how it ever became as famous as it did, for example, one of the supposedly general tasks in which they pitched their tiny handful of psych undergrads against the "experts" was literally "joke expertise", a totally subjective task. I wrote up some of the logic and design issues with it three months ago in a HN thread [2].
3. Although it isn't communicated in their abstract, several topics and most notably climate change are exceptions to their claim:
"individuals most opposed were the least knowledgeable about science and genetics but rated their understanding of the technology the highest in the sample. A similar pattern emerged for gene therapy, although not for climate change denial ..."
Probably not a huge surprise here. The sort of people who write articles disagreeing with climatology are often in my experience scientists, former scientists, engineers, even meteorologists. The sort of people who get upset about GM foods aren't.
4. They rely on Mechanical Turk. The exponential growth of Mechanical Turk usage in the social sciences is troubling. MT isn't designed for doing research and it's easy for people to pretend to be in demographics they aren't, moreover, they are financially incentivized to do so. Academics like using MT because it's more convenient and cheaper than going out and doing large scale legwork, but the results have little validity. In particular attention check failure rates are horrendous on this platform but it doesn't seem to bother anyone. They use two platforms and the second explicitly advertises itself by trashing the reliability of MT based studies [3]. Amusingly, the second platform is literally called "Prolific Academic"!
The abstract itself is already contradictory. How can a "controversial scientific issue" at the same time have "expert consensus"? If there was a consensus, then it would not be controversial. Or perhaps there's often not as much consensus as some people would like to believe.
The point of this paper is to conflate two issues. One is disagreement with a scientific finding itself; the other is opposition based on underlying problems with the adoption of practices that such a consensus might seem to support.
As others have mentioned, GM crops are likely fine to eat; however, it is not trivial to ask whether we should adopt the use of GM-linked pesticides etc. Roundup causes cancer: they paid damages in the court cases they lost. Now it is being replaced by an even less understood herbicide.
It is not trivial to ask whether we should have a high level of scrutiny toward recently generated vaccines and methods. That is not the same as disregard for vaccination.
By conflating these issues in the commonly arrogant (overconfident?) tone of academia, this paper is more of the same Newspeak garbage that has become all too common.