> They just surveyed some college students and drew conclusions by running statistical analyses on the data until they got something that seemed significant.
Is this just cynicism, or is it based on anything? From reading the methods section, it doesn't appear that this is what happened.
> We used a mixed methods approach. First, qualitative data were collected through 41 exploratory, in-depth interviews (women: n=19, 46.3%; men: n=21, 51.2%; prefer not to disclose sex: n=11, 2.4%; mean age 22.51, SD 1.52 years) with university students who had experience playing Super Mario Bros. or Yoshi. Second, quantitative data were collected in a cross-sectional survey…
So interviews with a biased sample (students with experience playing the game) and then a survey.
Also, try adding up those n= numbers: 19 + 21 + 11 = 51, not 41. The abstract can’t even get basic math or proofreading right.
If the body of the paper describes something different from the abstract, that’s another problem.
EDIT: Yes, I know the n=11 was supposed to be n=1. Having a glaring and easily caught error in the abstract is not a good signal for the quality of a paper. This is on the level of an undergraduate paper-writing exercise, not the scientific study people are assuming it to be.
Seems like n=11 should have been n=1. Divide 19, 21, and 1 by 41 and you end up with exactly the percentages written in the abstract. A typo that should have been caught, but surely nothing more than that, and certainly not substantive enough to justify the claim below:
> This paper is very bad. The numbers in the abstract don’t even add up, which any reviewer should have caught.
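To make the arithmetic concrete, here is a minimal check (plain Python; the counts and total come straight from the quoted abstract, and the n=1 substitution is the assumption) showing that the corrected figure both sums to 41 and reproduces the reported percentages:

```python
# Sanity check of the abstract's sample breakdown, assuming n=11 was meant to be n=1.
total = 41
reported = {"women": 19, "men": 21, "prefer not to disclose": 11}
corrected = {**reported, "prefer not to disclose": 1}

print(sum(reported.values()))   # 51 -- does not match the stated total of 41
print(sum(corrected.values()))  # 41 -- matches

for group, n in corrected.items():
    print(f"{group}: {n}/{total} = {100 * n / total:.1f}%")
# women: 19/41 = 46.3%
# men: 21/41 = 51.2%
# prefer not to disclose: 1/41 = 2.4%
```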
> A typo that should have been caught, but surely nothing more than that, and certainly not substantive enough to justify the claim below:
Such an obvious error should have been caught by the authors proofreading their own work, to be honest. Any reviewer should also have caught it when evaluating the sample size.
I find it strange that people are bending over backward to defend this paper despite its obvious flaws and limitations.
It does seem to be cynicism: they're convinced the authors "gave people surveys with a lot of questions and then tried to find correlations in the data", but nothing indicates they did more than the 9 questions (plus one more for sex as a control) the paper includes, and they restricted the sample to Mario/Yoshi players. Ten questions is pretty short.
Do you not see the problem with drawing conclusions from a sample set that pre-selects for Mario/Yoshi players?
How do you think they’re determining that playing Mario/Yoshi prevents burnout if they only surveyed Mario/Yoshi players?
I really don’t understand all of the push to support this paper and to dismiss critiques as cynicism. The paper is not a serious study, or even a well-written paper. Is it a contrarian reflex to reject any observations about a paper that don’t feel positive or agreeable enough?
I've critiqued it plenty in other comments, including that exact issue. However, that doesn't mean they "gave people surveys with a lot of questions" to p-hack; it seems like a study designed (albeit not well designed) to test one specific hypothesis. I see no reason to doubt that they carried out the methods as described in the paper, which were designed to test this very specific thing (they didn't even test "childlike wonder" in general, just self-reported Mario-induced childlike wonder), but their conclusions aren't supported by their data. If they were p-hacking, as you accuse them of, why not include more questions? Why not survey non-Mario players too, so there would be another variable to squeeze a significant result out of a null?
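For what it's worth, here is a minimal simulation sketching why a p-hacking fishing expedition usually involves many measured variables (the numbers are hypothetical and have nothing to do with this study's data): with noise-only data and a single pre-specified outcome, a spurious "significant" correlation appears about 5% of the time, but testing 30 outcomes and keeping the best one produces one most of the time.

```python
# Hedged sketch, not taken from the paper: how the false-positive rate grows
# when many unrelated outcomes are tested against one predictor.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_participants, n_sims = 200, 1000

def any_significant(n_outcomes):
    """Fraction of simulations where at least one noise outcome correlates at p < .05."""
    hits = 0
    for _ in range(n_sims):
        predictor = rng.standard_normal(n_participants)                 # e.g. "hours played"
        outcomes = rng.standard_normal((n_outcomes, n_participants))    # unrelated survey items
        pvals = [pearsonr(predictor, y)[1] for y in outcomes]
        hits += min(pvals) < 0.05
    return hits / n_sims

print("1 outcome: ", any_significant(1))   # ~0.05
print("30 outcomes:", any_significant(30))  # ~0.78, roughly 1 - 0.95**30
```

With only the 9 fixed questions and one stated hypothesis described in the paper, there just isn't much room for that kind of fishing.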