
That isn't what Scott is saying.

His default was to listen to those in positions of authority and responsibility. Instead, he wishes he had listened to those who did independent thinking and made their own projections, with verifiable reasoning from the data.

This doesn't mean don't listen to epidemiologists. There were plenty of epidemiologists who were deeply concerned in early February. It means don't pay particular heed to the epidemiologists who were selected to be listened to via a politically motivated process.

And my own editorial: go read the book Superforecasting. The people Scott recommends listening to are exactly the kind of people who do better at short-term forecasts than the majority of recognized experts. And the reason their forecasts are better is exactly the same reason they got it right this time.
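
For anyone who hasn't read it, the book scores forecasters with the Brier score: the mean squared error between the probabilities you stated and what actually happened, where lower is better. A minimal sketch in Python, with made-up forecasts:

    # Brier score: mean squared error between forecast probabilities
    # and binary outcomes (1 = event happened, 0 = it didn't).
    # Lower is better; always saying 50% scores exactly 0.25.
    def brier_score(forecasts, outcomes):
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # A hypothetical forecaster who leaned the right way on three events:
    print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # ~0.047
    # A hedger who said 50% every time:
    print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.25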

Quantitative thinking is something we as a society are still learning to trust. We've come to trust it in baseball. (Go read Moneyball.) We've come to respect it in politics. (Nate Silver established a following for a reason.) But we have yet to internalize as second nature the idea that we should sanity check everything with people who clearly think in a quantitative way. And that we should attempt that kind of thought ourselves.
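
To make "attempt that kind of thought ourselves" concrete: the early-COVID version was a back-of-the-envelope exponential extrapolation. A rough sketch, where the case count and doubling time are purely illustrative numbers, not real data:

    # Back-of-the-envelope sanity check: if cases double every d days,
    # where are we in k weeks? All numbers here are illustrative.
    cases_now = 1000        # hypothetical confirmed cases today
    doubling_days = 5       # hypothetical doubling time
    for weeks in (2, 4, 6):
        projected = cases_now * 2 ** (weeks * 7 / doubling_days)
        print(f"{weeks} weeks: ~{projected:,.0f} cases")
    # 2 weeks: ~6,964 | 4 weeks: ~48,503 | 6 weeks: ~337,794
    # The point isn't precision; it's noticing the order of magnitude.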

In the long run I believe we will. And perhaps reflecting on who got COVID right and who got it wrong will help us get there sooner.



Did people "thinking independently" and "making their own projections" do a better job--on average, not just looking at the outliers--than the more established experts? I don't think we have any empirical data to support that claim.


Probably not, but consider the likely reason: there seems to be no data right now of sufficient quality for making projections. The projections we have been seeing are all based on the same dodgy stats whose problems we read about anew every day. A truly quantitatively minded person would evaluate the data quality and not try to build models in the first place.
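
To illustrate rather than just assert that, here's a quick sensitivity check on how a headline fatality rate swings with one unmeasured assumption: the fraction of infections that actually get confirmed. Every number below is made up:

    # How sensitive is a naive fatality-rate estimate to the assumption
    # about what fraction of infections get confirmed? All numbers fake.
    deaths = 100
    confirmed_cases = 5000
    for ascertainment in (0.05, 0.1, 0.25, 0.5, 1.0):
        true_infections = confirmed_cases / ascertainment
        print(f"ascertainment {ascertainment:>4}: IFR ~ {deaths / true_infections:.2%}")
    # Output ranges from 0.10% to 2.00%: a 20x swing driven by one
    # unmeasured parameter, which is the "dodgy stats" problem in a nutshell.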

I think I agree with btilly. A lot of people are upset right now by the lack of traditional deference to experts, by which they mean academics rather than, say, actuaries working for large reinsurers or, in many cases, even actual working doctors. In contrast, I find it exciting.

What we're witnessing here is a total and complete democratisation of data and analysis. An entire planet's worth of brainpower is focused on this subject right now. Every day those people are using the internet to quickly sift through data, statistics, reporting, and analysis to try to make sense of the chaotic picture with which we're presented. Some are trying to establish the bounds on what we know about the infection; others are looking at how to rapidly scale ventilator manufacture, and so on.

If you've followed the replication crisis closely over the years, like I have, one of the overriding themes is how academic analysis is made brittle by:

1. The subtlety and trickiness of statistics.

2. The vast extent to which it's relied on despite this difficulty.

3. The relative lack of cross-discipline collaboration that could solve the combination of 1+2.

Lots of fields (not restricted to psychology) have been seriously battered by the collapse of whole research areas. I've heard that one reason VCs prefer to invest in software startups over biotech is that the typical biotech startup begins with an academic paper, and around 50% of the time the paper doesn't replicate. Even AI has had replication issues!
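
A toy simulation makes one of the mechanisms vivid. This isn't a model of biotech or any particular field, just the classic multiple-comparisons trap: test twenty noise variables at p < 0.05 and most "studies" will find an effect:

    # Multiple comparisons on pure noise: with 20 independent tests at
    # alpha = 0.05, P(at least one false positive) = 1 - 0.95**20 ~ 64%.
    import random
    import statistics

    def fake_study(n=30):
        """Two groups drawn from the SAME distribution: any 'effect' is noise."""
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        # crude two-sample statistic; |t| > 2 roughly corresponds to p < 0.05
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        return abs(statistics.mean(a) - statistics.mean(b)) / se > 2

    random.seed(0)
    trials = 1000
    hits = sum(any(fake_study() for _ in range(20)) for _ in range(trials))
    print(f"~{hits / trials:.0%} of 'papers' find an effect in pure noise")
    # prints roughly 60-65%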

One of the conclusions that has been drawn is that many academics don't have enough statistical training for what they're attempting, partly because statistics is genuinely extremely hard. In computer science, the maxim that you don't invent your own ciphers (because they'll probably be broken) is well known; these days everyone relies on standard cryptography built by a relatively small community of people. Advanced statistics feels about that level of difficulty to me, based on what I've seen, yet everyone rolls their own stats. Perhaps there aren't enough statisticians to go round, but there's also a cultural issue: for instance, academic modellers rarely work with professional programmers to productionise their models, a practice that's standard in the private sector.
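
For a concrete example of what rolling your own stats gets you, consider optional stopping: peeking at the p-value as data comes in and stopping at the first "significant" result. A toy simulation, using the same simplified two-sample test as above and illustrative thresholds:

    # "Rolling your own stats": optional stopping. Peek at the result
    # after every new pair of observations and stop at the first
    # "significant" one. Nominal alpha is 5%; actual error rate is far higher.
    import random
    import statistics

    def peeking_experiment(max_n=100):
        a, b = [], []
        for _ in range(max_n):
            a.append(random.gauss(0, 1))
            b.append(random.gauss(0, 1))
            if len(a) < 10:
                continue  # need a few samples before the variance means anything
            se = (statistics.variance(a) + statistics.variance(b)) ** 0.5 / len(a) ** 0.5
            if abs(statistics.mean(a) - statistics.mean(b)) / se > 2:
                return True  # "significant" -- stop and publish
        return False

    random.seed(1)
    rate = sum(peeking_experiment() for _ in range(500)) / 500
    print(f"false positive rate with peeking: ~{rate:.0%}")  # well above 5%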

So yeah, btilly has it right. We're seeing a slow, slow, veeerry slow recognition that truly quant thinking is so rare and so valuable that it matters more than the person's background. Finance already went through this process years ago: the clear-out of traditional finance types with traditional finance culture in favour of quants doing mathematical analysis is largely done already. In other areas of the economy it has barely got started. In the UK, Cummings did at least try to hire a superforecaster into Number 10, to his credit, but the media immediately went berserk and engaged in a massive smear campaign. The guy just walked away in disgust. A superforecaster in government sure would be useful right about now :(



