Hacker News

Sigh.

"British semi-retired network engineer educated in mathematics at Cambridge finds mathematical flaw in popular psychology paper and enlists Alan Sokal's help to debunk it."

Sure wish folks would quit sucking on the "amateur vs. the establishment" teat.

That aside, this was a fun read, and I think it's another example of why scientific progress in the future is going to have to be more multidisciplinary than it has been in the past.



Educated in Engineering and CS at Cambridge, courses which employ far less intimidating mathematics than the Maths Tripos for which Cambridge is renowned. I think Brown could reasonably claim to be an amateur mathematician - and certainly an amateur psychologist.


Brown here... I dropped out of Engineering at least in part because the maths was too hard, and in CS I skipped several of the more maths-based classes and concentrated on programming, operating systems, and networks. Basically my maths never progressed beyond a good high-school level, and by 2011 it had certainly taken some steps back. I don't even consider myself to have the standard of an amateur mathematician. (It's only since I got into psychology, and hence regression, that I have learned what an eigenvalue is.)


I think it means he was an amateur at psychology, which does make it appropriate.


Sure, but he didn't find a flaw in the psychology presented in the paper, he found a flaw in the mathematics used in the paper, which he understood better than the paper's author(s).

This isn't to detract from what he did -- I'm glad he caught it and that it appears he's since made a hobby of catching others.

The problem, as other supporting replies to my comment point out, is that the notion that hobbyists working outside of academia can defeat established science has recently become a political tire fire, and this otherwise excellent article is being framed to add fuel to that fire.

It used to be that this was sort of restricted to young Earth creationists and other people that you could sort of roll your eyes at and ignore, but now this attitude has been attacking climate science and energy policy and environmental science, and now we've got an administration that has put people in positions to influence science policy in this country on the basis of politics alone. Now it's not so funny anymore.

The takeaway from the article should be on the increasing importance of a multidisciplinary approach in the sciences -- something that data geeks at HN should enthusiastically support -- and not this "amateur defeats scientists" hogwash.


> Sure wish folks would quit sucking on the "amateur vs. the establishment" teat.

I think it's an awesome narrative. What do you dislike about it?


The problem I have with it is that it helps to legitimize the view that an uneducated person's opinions and intuitions are more valid than expertise. In the current political climate, it's dangerous to continue stoking that narrative. As the parent said, the guy was hardly an amateur, but an antivaxer/global warming denier/etc. will view it as confirmation that establishment science is corrupt.


> it's dangerous to continue stoking that narrative.

What, even in cases where it is right? You are steering dangerously close to saying "We the Elite should close ranks". And that sort of thing is the fuel of the very fire that you are trying to put out.

The fact is as humans we have no option but to trust our own judgement on things: either by studying them directly or by judging which experts to trust. And this article shows you can't assume experts are right just because society puts them in a particular position.

In the article we are told that Nick Brown, an IT guy with high-school level maths, discovered that an important paper by a well-respected psychology professor was BS when it was presented to him in a university psych class.

Why had the profession swallowed it? Apparently because their maths was so weak that they simply accepted what they were told, and (according to The Guardian) the few with qualms didn't feel anyone would be interested in hearing the contrary view.


I don't think it's necessarily that everyone in the whole psychology profession has worse than high-school level math skills. I think it's more just that, at some level, you have to trust that your colleagues have done their work correctly. You focus on the important parts of the paper (i.e. the hypothesis, the method, and the results) because there's not enough time in your life to give full scrutiny to every detail of every paper you read. So sometimes, something slips through the cracks, until it's caught.


At which level in the sciences are you ever supposed to trust that your colleagues have done their work correctly? That sounds more like the opposite of what you are supposed to be doing.


At the level where it takes more than one lifetime's research to rebuild your field from scratch, it's literally impossible to progress the field without taking something on faith.

Well before that, though, it becomes impractical to re-check every single thing. And the longer-established a field is, the more you have to take on faith.


So then you have to go to other people in that field and have them look at it. Sure, you have to take them on faith too, but a little bit less each time unless they all wildly disagree, and they can explain their reasoning to you in layman terms when you canvas their opinion. Now, none of this guarantees success, but it is a long way from trusting others to just do their work properly.


At the level where you cite the work of others.


I'm reminded of the Russian proverb: "Trust, but verify".

A citation generally means that you hold a given paper in a certain level of trust, but that isn't the same as having trusted the people who wrote it to have done their job properly.

The trust in the paper could come from trusting the folk who wrote it to have done their job properly, but it could also come from having actively gone and checked the working.


Similarly, "Trust - but take care in whom."


What do you recommend? Without a criterion for trusting some previous work this would make science impossible.


"uneducated" commits the same fallacy that you are trying to argue against. Part of the current political climate is precisely about the lack of information, arguments, etc. You might call that a concern of education, but the real problem is learning, which is a far broader problem than supervised education.


The problem comes from people conflating the term 'amateur' (does not pursue a paid-for career in the chosen field) with 'ignorant' or 'unskilled'.


The Industrial Revolution came about because The Enlightenment required that every argument was judged on its merits.


There's a lot of pseudoscience out there like flat earthism, homeopathy, dowsing and what not. The belief that "the establishment" is ignorant and narrow-minded and could be proven wrong by a determined individual willing to challenge those beliefs is what enables these beliefs in the first place.

Skepticism and curiosity are healthy, driving forces of science, but they can be taken so far that they lose touch with reality.


In addition to the earlier points, I find that a lot of the reporting on the misuse of statistics in psychology itself misuses statistics. It's a narrative, so "psychological studies are underpowered" becomes "ARE ALL PSYCHOLOGY STUDIES FAKE????"


In fairness, you can find psychologists seriously asking that question too, going back decades.


I guess I meant that, with regard to studies suggesting psychology studies are underpowered (there have been quite a few in recent years), reporting often claims the original studies have been disproved, which most of the time isn't statistically correct.
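To illustrate the statistical point: when a study is underpowered, even a perfectly real effect will frequently fail to reach significance, so a non-significant replication doesn't "disprove" the original finding. Here's a minimal simulation sketch (my own illustrative numbers, a simple two-group comparison with a normal approximation, not anything from a specific paper):

```python
import math
import random

random.seed(0)

def replication_success_rate(effect=0.4, n=20, trials=2000, alpha=0.05):
    """Fraction of simulated studies reaching p < alpha
    even though the effect is genuinely real."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0.0, 1.0) for _ in range(n)]    # control group
        b = [random.gauss(effect, 1.0) for _ in range(n)]  # treatment, true effect
        ma, mb = sum(a) / n, sum(b) / n
        va = sum((x - ma) ** 2 for x in a) / (n - 1)
        vb = sum((x - mb) ** 2 for x in b) / (n - 1)
        # two-sided test of the mean difference, normal approximation
        z = (mb - ma) / math.sqrt(va / n + vb / n)
        p = math.erfc(abs(z) / math.sqrt(2))
        if p < alpha:
            hits += 1
    return hits / trials

print(f"power at n=20 per group:  {replication_success_rate(n=20):.2f}")
print(f"power at n=200 per group: {replication_success_rate(n=200):.2f}")
```

With a true effect of 0.4 standard deviations and 20 subjects per group, only roughly a quarter of studies come out significant; at 200 per group nearly all do. A single "failed" small-sample replication tells you very little either way.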


Not just that.

> Of course, psychologists often use their research findings to speak about how psychological processes function. But, such inferences represent a basic misinterpretation of our analyses.

> Professor and Chair of the Department of Psychology... at the American University of Paris

https://blog.oup.com/2017/08/psychology-silent-crisis/


I agree that some things in academia are overly fenced off from the influence of an educated everyman but I don't think that applies in this scenario. The article is specifically about how the student recognized a problem but also recognized that they had particular shortcomings that made tackling the problem adequately on their own impossible.


>Sure wish folks would quit sucking on the "amateur vs. the establishment" teat.

That's just a counter-balance to the enormous "sucking on the establishment's teat" going on -- especially when the establishment is as vacuous and full of BS as the "soft sciences".


Hey, as if the soft sciences in the broadest sense weren't hard. They are hard to learn because they deal with hard problems -- for example, how to raise a child, or how to teach a whole heterogeneous classroom with minimal effort.

It's certainly not an ideal name.


Soft not as in "easy to learn", but as in "not very solid".


Fluid dynamics? :)



