Hacker News | sTeamTraen's comments

Hi - I am the subject of this article. I don't know how to prove that other than tweeting a link to this post, which I will do right now (renewiltord already posted my Twitter account here https://news.ycombinator.com/item?id=30247543).

Thanks for the kind words to those who posted them. For everyone else, feel free to AMA.

Edit: Here is the link to the tweet that links to this comment: https://twitter.com/sTeamTraen/status/1491041770217811977?s=...


Brown here... I dropped out of Engineering at least in part because the maths was too hard, and in CS I skipped several of the more maths-based classes and concentrated on programming, operating systems, and networks. Basically my maths never progressed beyond a good high school level, and by 2011 it had certainly taken some steps back. I don't even consider myself to have the standard of an amateur mathematician. (It's only since I got into psychology, and hence regression, that I have learned what an eigenvalue is.)


I am the person featured in the Observer article (for "proof", see the account with this username on Twitter). AMA. :-)


What are your thoughts on the "rest" of Fredrickson's research? You seem content to debunk only Losada's math (charitably characterized as a "brain fart"), whereas this article seems more willing to lambast positivity research on the whole as self-serving mumbo-jumbo.


At the time that interview was published I didn't know too much about it. Since then I have published about 5 articles critiquing various other work from the same lab. It mostly seems very weak, but I'm not sure if it's any worse than most of the research in the wider field of social psychology (and peripheral subfields that use similarly underpowered research methods). For example, I have also been involved in looking at the work coming out of the Cornell Food and Brand Lab (GIYF), and much of that seems to be catastrophically bad.



Can you give us a quick run-through of the maths that led to the 2.9013 number?


No, especially five years on. :-)

Seriously, there's no obvious way to do this in a "quick run-through" that is quicker than what we wrote. Look at the calculations in our article on arXiv (and if necessary, look up the 1963 Lorenz article and find the relevant equations). 2.9013 is actually an abbreviation for a recurring decimal representation of a rational number(!), which on its own ought to be a sign that something is wrong.
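If you just want a feel for where a suspiciously exact-looking constant can come from, here is a toy sketch (mine, for illustration only: it uses the standard Lorenz parameters and the textbook Hopf-bifurcation formula, not the specific manipulations in Losada's papers). The point is simply that exact rationals like 470/19 produce recurring decimals, so a constant quoted to four decimal places like 2.9013 is a red flag, not a sign of precision.

```python
from fractions import Fraction

# Standard Lorenz-system parameters, as in Lorenz (1963).
sigma = Fraction(10)
b = Fraction(8, 3)

# Textbook Hopf-bifurcation threshold for the Lorenz system:
#   r_H = sigma * (sigma + b + 3) / (sigma - b - 1)
r_h = sigma * (sigma + b + 3) / (sigma - b - 1)

print(r_h)         # -> 470/19, an exact rational
print(float(r_h))  # -> 24.736842105263158... a recurring decimal
```

Exact fractions drop out of this kind of algebra all the time; quoting them as terminating decimals is where the false precision creeps in.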


Thanks; the closest a quick search threw up was 22987/7923 = 2.9013000126 which confirms the bogosity - I was just interested to know how the bogosity was being justified.
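(For anyone who wants to reproduce that search: Python's fractions module finds the same candidate directly. Illustrative only; this just locates the best small-denominator rational near 2.9013 and says nothing about how Losada actually arrived at the number.)

```python
from fractions import Fraction

# Best rational approximation to 2.9013 with denominator below 10000.
target = Fraction("2.9013")            # exactly 29013/10000
approx = target.limit_denominator(9999)

print(approx)  # -> 22987/7923, the same fraction the search turned up
```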


Concerning the data duplication issue: look here http://steamtraen.blogspot.co.uk/2017/03/some-instances-of-a... (section E). You have two completely different studies with almost identical tables of results.

If this is what it looks like, that ought to be a career-ending problem. It is very hard to imagine an accidental explanation, particularly since the lead author stated, in a post that he later deleted but which is archived here http://web.archive.org/web/20170316133823/http://foodpsychol..., that "a master’s thesis was intentionally expanded upon through a second study which offered more data that affirmed its findings with the same language, more participants and the same results". The same results. To two significant figures. In 17 out of 18 cases. Sure.

And if you still want to separate out data duplication as being in some way a less serious problem: as a minimum, it means one of those two studies is very likely completely wrong.

Concerning "they don't sound like groundbreaking science that's likely to lead to policy changes anywhere": I agree that this work mostly sounds like fluffy BS that you might expect to see in a science fair project. But on the back of his reputation acquired precisely through these studies, the principal author became the leading authority on food policy (especially for school-age children) in the United States.


I'm not claiming the duplication issues aren't issues with his professional conduct. But from a purely scientific point of view, they do not change the results. This article is trying to make the claim that 1) this guy has issues in some of his publications, 2) some of this guy's publications were used for the "Smarter Lunchrooms" program, so 3) the Smarter Lunchrooms program is flawed. The only connection they draw between these three points is that the same guy is involved. They made no attempt to show that the flawed publications were used to design the lunch program, and no attempt to show that the lunch program was flawed as a result.


I am one of these researchers (see https://twitter.com/sTeamTraen/status/913546338842939393 for a crude attempt to establish my authenticity, which doubtless would have zero validity for actual spies). Let me assure you that none of us is in league with the food industry. In fact since Dr. Wansink's work seems to go down pretty well with the food industry (cf. his various consulting gigs), I'm not sure what they would have to gain from us.

