It's an interesting heuristic, but without more exposition it's hard to know whether there's any reason to believe it. Has this technique been assessed across various domains? Does it work better for political facts than for scientific facts? Where does it fail? It's a provocative article and a cool thought, but it reads more like a hypothesis than an established result.

(In particular, I'm referring to the assertion at the end of the article: "Because of the relatively high margin of 10%, there can be high confidence that the correct answer is No.")
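(To make sure I follow the claim, here's my reading with made-up numbers, not the article's: if 60% of respondents answer Yes, but on average the crowd predicted that 70% would answer Yes, then Yes is 10 points less popular than expected. No is then the "surprisingly popular" answer, and that 10-point gap is presumably the margin the article says justifies high confidence.)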



One of the Wikipedia article's sources might have made a better link for this submission.

The MIT summary [1] notes "The researchers first derived their result mathematically, then assessed how it works in practice, through surveys spanning a range of subjects, including U.S. state capitals, general knowledge, medical diagnoses by dermatologists, and art auction estimates." Across all those areas, this technique had error rates about 20% lower than competing techniques, which ranged from simple majority vote to two different kinds of confidence-weighted scoring.

The paper was published in Nature.

1: http://news.mit.edu/2017/algorithm-better-wisdom-crowds-0125


Awesome, thank you. And 'nacc found the link to the Nature article as well: https://news.ycombinator.com/item?id=20547787


One interesting thing about the paper is that it seems that the Wikipedia article incorrectly describes the procedure: respondents were not asked to guess whether the majority would agree with their position. They were asked to guess what per cent of other respondents would agree. I think that's a pretty severe difference in the method.
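If I'm reading the paper right, for a binary question the decision rule then reduces to something like the sketch below (rough Python; the function name and data format are mine, not the paper's):

    # Sketch of the "surprisingly popular" rule for a yes/no question,
    # as I understand it from the paper. Not the authors' code.
    def surprisingly_popular(votes, predicted_yes_pcts):
        """votes: list of bools (True = Yes).
        predicted_yes_pcts: each respondent's estimate (0-100) of the
        percentage of respondents who will answer Yes."""
        actual_yes = 100.0 * sum(votes) / len(votes)
        predicted_yes = sum(predicted_yes_pcts) / len(predicted_yes_pcts)
        # Pick the answer that is more common than the crowd predicted;
        # the size of the gap is the "margin" the article mentions.
        answer = "Yes" if actual_yes > predicted_yes else "No"
        return answer, abs(actual_yes - predicted_yes)

The key point is that the comparison is between the actual vote share and the crowd's average predicted vote share, which you can only compute if people report a percentage rather than a binary guess about the majority.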



