> In the group with lymphoma, 21 percent were tattooed (289 individuals), while 18 percent were tattooed in the control group without a lymphoma diagnosis (735 individuals).

Sure, that’s a meaningful percentage difference, but with the relatively small sample size and different `n` values, I don’t make much of this at the moment.



I suggest you read 'Study Size' under 'Methods'. The study was powered to detect an odds ratio of 1.3 with 80% power, and they rounded way up to about 3000 cases. To me that's not a small sample size, though 1.3 is definitely not the largest OR one could aim to detect. Having different numbers of cases and controls is not a problem for this study design. Pointing that out as a negative makes me think you might not know as much about epi study designs as your comment lets on.
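For anyone curious, here's roughly what that kind of sample-size calculation looks like. This is only a sketch, not the paper's actual calculation: I'm assuming ~18% tattoo prevalence in controls (the figure quoted above), a two-sided alpha of 0.05, and about three controls per case, which may not match the paper's assumptions.

    # Rough sample-size sketch for detecting OR = 1.3 at 80% power
    # in a case-control design. Assumptions are mine, not the paper's.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    p_controls = 0.18                         # assumed tattoo prevalence in controls
    target_or = 1.3                           # odds ratio the study aims to detect
    odds_controls = p_controls / (1 - p_controls)
    odds_cases = target_or * odds_controls
    p_cases = odds_cases / (1 + odds_cases)   # implied tattoo prevalence in cases

    effect = proportion_effectsize(p_cases, p_controls)   # Cohen's h
    n_cases = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.80,
        ratio=3,                              # controls per case
        alternative="two-sided",
    )
    print(f"cases needed: {n_cases:.0f} (controls: {3 * n_cases:.0f})")

The result is very sensitive to the assumed exposure prevalence, so don't expect it to reproduce the paper's figure exactly.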


I looked up this journal, eClinicalMedicine, and it would be considered a pretty high-end medical one (impact factor = 15). However, this finding nonetheless seems like rubbish. The p-value for their central claim is p = .03. When bold claims come with p-values like that, they generally don't replicate. I didn't look into what questionnaire the authors used, but they may very well have tried a bunch of correlations and this is what stuck.
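For a sense of scale (hypothetical numbers; the paper doesn't say how many associations were examined): with k independent tests on null data at alpha = 0.05, the chance of at least one "significant" hit is 1 - 0.95^k.

    # Family-wise false-positive rate for k independent tests at alpha = 0.05
    # (illustrative only, not taken from the paper).
    alpha = 0.05
    for k in (1, 5, 10, 20):
        print(k, round(1 - (1 - alpha) ** k, 2))
    # 20 independent tests already give a ~64% chance of at least one p < .05 by chance.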

It's surprising that this kind of thing can still get published in such high-profile journals. It makes me think the field of medicine is failing to grapple with its replication issues. The social sciences get more heat for bad research, but I can't imagine this sort of finding would fly today in a remotely comparable psychology journal.

However, to be fair to the authors, the individual numbers you point out are the counts of tattooed people within each group; it would be more proper to say the sample sizes were n = 1398 for the lymphoma group and n = 4193 for the controls. There's also nothing really wrong with having unbalanced samples here: the alternative would be throwing out control data. Regardless, the barely significant p-value is the biggest concern. (If you're wondering how to judge a study's robustness, the easiest and generally most effective shortcut is to look at the p-values.)
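If anyone wants to sanity-check the crude (unadjusted) association from the counts quoted above, it's a short calculation. This won't reproduce the paper's p = .03, which comes from an adjusted model, but it shows what the raw 2x2 table looks like:

    # Crude 2x2 table from the numbers quoted upthread:
    # 289 tattooed of 1398 lymphoma cases, 735 tattooed of 4193 controls.
    from scipy.stats import chi2_contingency, fisher_exact

    table = [[289, 1398 - 289],    # cases:    tattooed, not tattooed
             [735, 4193 - 735]]    # controls: tattooed, not tattooed

    odds_ratio, p_fisher = fisher_exact(table)
    chi2, p_chi2, dof, expected = chi2_contingency(table)
    print(f"crude OR = {odds_ratio:.2f}, Fisher p = {p_fisher:.3f}, chi-square p = {p_chi2:.3f}")

The paper's adjusted estimate will differ because of matching and covariate adjustment.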


Those look like sufficient sample sizes to me. How many samples would it take to convince you?


Base rate fallacy.



