The last sentence of the author's edit is the key insight of this piece.
> the existence of experts using heuristics causes predictable over-updates towards those heuristics.
That's the essence of this piece. If you expect that consulting experts will leave you with a more accurate picture of things than before, you should first be sure that their heuristics are not equivalent to reading a rock with a single message painted on it; otherwise, no matter what, your conclusions will be biased towards that rock. "X is an expert and X says Y is good, so I should have more confidence that Y is good than before" is not a useful conclusion if it came from X looking at a rock that says "Y is good."
The Queen example in particular, but all of the others as well, is a warning that looking only at the accuracy of predictions is not enough to avoid this problem. To make sure those predictions are useful to you, you have to ensure that they actually incorporate new information.
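To make that concrete, here's a toy simulation (my own sketch, not from the post). With a rare event at an assumed 1% base rate, a "rock" that always answers "no robber" beats a genuinely informative but noisy sensor on raw accuracy, yet hearing the rock leaves you exactly at your prior:

```python
import random

random.seed(0)
PRIOR = 0.01   # assumed base rate: 1% of nights there is an actual robber
N = 100_000

events = [random.random() < PRIOR for _ in range(N)]
# A noisy but informative observation: matches the truth 90% of the time.
signals = [e if random.random() < 0.9 else not e for e in events]

rock_preds = [False] * N     # the rock: always "no robber"
sensor_preds = signals       # the sensor: reports its observation

def accuracy(preds):
    return sum(p == e for p, e in zip(preds, events)) / N

def posterior_given_alarm(preds):
    alarms = [e for p, e in zip(preds, events) if p]
    return sum(alarms) / len(alarms)

print(f"rock accuracy:            {accuracy(rock_preds):.3f}")    # ~0.990
print(f"sensor accuracy:          {accuracy(sensor_preds):.3f}")  # ~0.900
print(f"P(robber | sensor alarm): {posterior_given_alarm(sensor_preds):.3f}")
# The sensor's alarm lifts you from a 1% prior to roughly 8%. The rock never
# alarms, so hearing it moves you nowhere: it wins on accuracy while carrying
# zero information.
```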
Which is the rub, right? How can a non-expert reasonably come to a conclusion about whether an expert's prediction is baseless or actually solid/insightful?
One way is to see if they are doing any new, context-specific work to make their assessment.
Did the doctor run any tests, perform any investigation, or just tell you the most likely cause of your symptoms? That is to say, did they perform any expert analysis on you specifically, or simply tell you a statistic for people like you?
Kahneman addresses this in Thinking, Fast and Slow. (Which I highly recommend, by the way.)
He argues that "expert intuition" is only helpful in an environment where intuition can be trained. That is, where there is obvious, immediate, and frequent feedback on actions. All the examples given in the post take place in environments where there is no opportunity for the "experts" to receive feedback on their advice.
I came up with the Goldilocks (meta?-)heuristic[1] for that: Only trust someone to say X is too high if they can also tell you when X would be too low.
A corollary of which would be e.g. "Don't trust a skeptic that says 'X won't Change The World' unless they can tell you which developments would Change The World."
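If you wanted to formalize that, here's a toy sketch (my own, with a made-up `assess` interface, nothing from the post): an assessor whose verdict never varies across the plausible range is behaviorally indistinguishable from a painted rock.

```python
def passes_goldilocks(assess, probes):
    """Trust `assess` to say "X is too high" only if, somewhere in the
    plausible range of X, it is also willing to say "too low".
    An assessor that only ever emits one verdict is a painted rock."""
    verdicts = {assess(x) for x in probes}
    return {"too high", "too low"} <= verdicts

# A skeptic who answers the same way for every input fails the check:
rock_skeptic = lambda x: "too high"
calibrated = lambda x: "too high" if x > 7 else "too low" if x < 3 else "about right"

print(passes_goldilocks(rock_skeptic, range(11)))  # False
print(passes_goldilocks(calibrated, range(11)))    # True
```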
Clearly by listening to some rando on YouTube that is pointing out how the whole system is rigged /s
In seriousness, I think it is actually possible for people to understand enough of the information they need to make a decision, even if they don't understand it at the level of an expert. I apologize for bringing Covid into this, but here was my analysis for understanding the mRNA vaccines:
1. The mRNA vaccines contain a small snippet of mRNA wrapped in a lipid bubble. This mRNA codes for spike proteins that are present on SARS-CoV-2.
2. Your cells take up these lipid bubbles and translate the mRNA into spike proteins, and then your body recognizes those spike proteins as foreign and builds an immune response to them.
There is really nothing in the above (i.e. mRNA translation, the immune response, etc.) that I didn't learn in high school biology. There are certainly a ton of details that an expert is much more aware of. And, in evaluating my risk, there is plenty I don't know, e.g. the probability of my body having a severe negative (a) immune response or (b) other reaction to the spike proteins.
But all that said, even given all of the things I couldn't know because I'm not an expert, the rough details made it clear to me that, in any case, getting vaccinated should certainly be less detrimental than actually getting Covid, which was highly likely. That's why I get frustrated by some of the "trust the science" messages. You don't need to "trust" the science. The basics of the science are understandable by anyone with a high school degree.
Another thing I think is important to understand is that it may make a ton of sense to give very different societal recommendations versus individual recommendations. For example, I think both of the following are easily provably true:
1. Publishing recommendations of "eat less and exercise" is ineffective in combating obesity at the societal level.
2. For an individual, eating less and exercising is the number one way to lose weight.
That is, we know that most people are unable to stick with the recommendations of eating less and exercising more, and we have decades of data to prove it. For an individual, though, if you are able to set up a system to stick with your plan, this is the best way to lose weight.
For sure. It basically shuts down the idea that you can assess your confidence in an expert based on experience! Instead you have to understand the underlying principles, which is no easy task. To me the solution is to delegate the oversight; hire someone else to understand and assess the reliability of the experts. Obviously that has a whole mess of problems too…
> To me the solution is to delegate the oversight; hire someone else to understand and assess the reliability of the experts. Obviously that has a whole mess of problems too…
Mainly, it has the exact same problem you chose it to avoid: you now have to understand and assess the reliability of a putative expert in the domain of understanding and assessing the reliability of experts in your original target domain.
Consult another expert and hope their heuristics are different from the first's, ad infinitum (or you yourself end up creating a meta-heuristic of their heuristics)!
Not really. Get three opinions from professionals giving the default answer and you have three default answers. Because it’s non-competitive it doesn’t even require coordination. Get three home appraisals in 2007 and they’re all coming in at the offer. They don’t have to know each other, they’re just aligned with the same individual incentives.
> Get three opinions from professionals giving the default answer and you have three default answers.
What's the likelihood they've all standardized on the exact same default heuristic?
But even if they did, in at least some of the examples, giving the same default answer would be literally impossible once second opinions are taken into account.
The security guard example is instructive. Scatter a truckload of security guards throughout the entire building. They cannot all occupy the same space at the same time. Consequently, the sound of ostensible wind to one security guard is the sound of a robber breathing to another security guard.
Scatter a truckload of rocks throughout the entire building. Now you have a bunch of goddamned rocks.
I'm no digital signal processing professional but by substituting rocks I'd say we suffered a loss in fidelity.
> What's the likelihood they've all standardized on the exact same default heuristic?
In some cases, really high. Professionals are often under the same constraints, have no reason to diverge, and are even incentivized to converge in their opinions. These are not independent probabilistic events.
To my earlier example, _many_ appraisers adopted the heuristic of “appraisal = offer + irrelevant_random_noise”.
Your security guard example doesn't really apply to professional opinions, which are usually given independently. By hiring multiple security guards, you're forcing them (or at least encouraging them) to spread out. Sure, you'd get a similar effect if you hired ten doctors to spend 20 minutes with you all at the same time; they couldn't all listen to your heart and tell you to take an aspirin. But if you visit them one at a time, they can. So it's more like ten security guards all watching one camera feed from different rooms.
Examples of this problem aren't made up. Citigroup accidentally sent $900 million to creditors, an issue I believe is still in litigation about a year later and has been a huge loss. The payment was approved by three people.
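The gap between independent and correlated reviews is easy to put numbers on. A back-of-the-envelope sketch (my own, assuming each genuinely independent reviewer has some fixed chance p of catching the problem): n independent looks catch it with probability 1 - (1 - p)^n, while n perfectly correlated looks are no better than one.

```python
def p_caught(p_single, n, correlated):
    """Chance that at least one of n reviewers catches the problem."""
    if correlated:
        return p_single                 # all watching the same camera feed
    return 1 - (1 - p_single) ** n      # genuinely independent looks

p = 0.5
for n in (1, 3, 10):
    print(f"n={n}  independent: {p_caught(p, n, False):.3f}  "
          f"correlated: {p_caught(p, n, True):.3f}")
# Three independent reviewers: 0.875. Three correlated ones: still 0.500.
# Three approvals sharing one blind spot are effectively one approval.
```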