1. the parent raised questions in a neutral way. these questions seem essential for validating experimental design. why would peer reviewers present such questions in passive-aggressive ways, and how can we fix this?
2. could you kindly recommend services/consultancies to validate experimental designs? if not, would you be open to consulting and doing what you did here -- suggesting ways to control for key variables? experiments relate to cancer research. contact info in bio.
> 1. the parent raised questions in a neutral way. these questions seem essential for validating experimental design. why would peer reviewers present such questions in passive-aggressive ways, and how can we fix this?
Peer review is obviously a complex and controversial issue, but some key points (at least in life/medical sciences) include:
A. Your reviewers are very often your competition. Reviewers are supposed to be subject matter experts in your area of research, and academic science is a small world. The other subject matter experts are exactly the people competing with you for grants, trainees, and prizes, and racing to finish projects first. (You can typically ask that specific people do/don't review your paper, but it's at the editor's discretion. Some fields are simply too small to take such preferences into account.) You can often identify your (supposedly anonymous) peer reviewers because their critique insists that your paper cite certain specific papers, and they turn out to be the common name on the bylines of those papers.
B. Peer review is uncompensated work by academics, very often done on behalf of for-profit publishers. Hard to be thrilled with that arrangement (though some scientists feel it's a reasonable 'academic duty').
C. The mindset in peer review is often more about gatekeeping the journal hierarchy than about simply ensuring good experimental design. Publishing in top journals is often a career-making achievement. It is incredibly common for reviewers to ask for (often very time-consuming) additional experiments based on their opinions about what is interesting, what a paper in that journal should look like, etc. There is also a bias where reviewers don't want someone else to have an easier time publishing in journal X than they did.
D. Not exactly a critique of peer review, but I think it's important to realize that peer review is not even intended to address one of the major problems in science – irreproducible results and/or scientific fraud. Reviewers have to take all data presented at face value. At best, peer review is a check against poor experimental design, errors in reasoning, and authors making claims stronger than what the data supports.
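To make 'poor experimental design' a bit more concrete, here's a toy sketch (my own made-up numbers and scenario, not anything from a real study): if all the control samples are processed in one batch and all the treated samples in another, an ordinary batch effect can show up as a 'significant' treatment effect, while the same comparison with samples randomized across batches shows nothing. This is exactly the kind of confound a design check is supposed to catch.

```python
# Toy illustration with made-up numbers: a batch effect confounded with
# treatment looks like a real effect; randomizing across batches removes it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30                      # samples per arm (arbitrary)
batch_shift = 0.8           # systematic offset of batch B vs batch A (arbitrary)

# Confounded design: all controls in batch A, all treated samples in batch B.
# There is NO true treatment effect in either scenario.
control = rng.normal(10.0, 1.0, n)                # batch A
treated = rng.normal(10.0, 1.0, n) + batch_shift  # batch B
print("confounded:", stats.ttest_ind(treated, control).pvalue)   # typically "significant"

# Randomized design: both arms are spread across both batches.
control = rng.normal(10.0, 1.0, n) + rng.integers(0, 2, n) * batch_shift
treated = rng.normal(10.0, 1.0, n) + rng.integers(0, 2, n) * batch_shift
print("randomized:", stats.ttest_ind(treated, control).pvalue)   # no systematic difference
```

(Blocking on batch, or modeling batch as a covariate, is the more careful version of the same idea.)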
> 2. could you kindly recommend services/consultancies to validate experimental designs? if not, would you be open to consulting and doing what you did here -- suggesting ways to control for key variables? experiments relate to cancer research. contact info in bio.
As mentioned, CROs are the companies in this space, though I'm not familiar with any that focus specifically on vetting experimental design.
Well, that's the fundamental difference between tech and science. With tech, the 'truth' is entirely instrumental: is the product useful? (Not entirely accurate, since a product can be far less useful than the alternatives and still be a commercial success.)
In science, the goal is sometimes instrumental value, but more often it's inferential insight, where there isn't a simple 'it works or it doesn't' truth value. That's why methodology and review matter: they're what controls for sources of false positives and false negatives, for misconduct, and for unwarranted interpretation of data.
I'd argue that peer review aids breakthrough science overall. When shoddy but splashy research slips through review, years of research effort and funding can get funneled into avenues opened by putative breakthroughs that turn out to have been bullshit all along. That misdirection into dead ends carries an opportunity cost: the real breakthroughs that could have been pursued instead.
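To put a rough number on the false-positive point (again a toy simulation with arbitrary parameters, not tied to any particular study): test 20 endpoints at p < 0.05 with no correction and, even with no real effect anywhere, you'll see at least one 'significant' result in roughly 64% of experiments. A simple Bonferroni adjustment brings that back down to about 5%, which is the kind of thing methods review is supposed to check for.

```python
# Toy simulation, arbitrary parameters: chance of at least one false positive
# across many null endpoints, with and without multiple-testing correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_endpoints, n_sims, alpha = 20, 20, 1000, 0.05

any_hit_uncorrected = 0
any_hit_bonferroni = 0
for _ in range(n_sims):
    # Both arms drawn from the same distribution: every endpoint is a true null.
    pvals = [stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
             for _ in range(n_endpoints)]
    any_hit_uncorrected += min(pvals) < alpha                # raw threshold
    any_hit_bonferroni += min(pvals) < alpha / n_endpoints   # Bonferroni threshold

print("uncorrected, P(>=1 false positive):", any_hit_uncorrected / n_sims)  # ~0.64
print("Bonferroni,  P(>=1 false positive):", any_hit_bonferroni / n_sims)   # ~0.05
```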
> 2. could you kindly recommend services/consultancies to validate experimental designs? if not, would you be open to consulting and doing what you did here -- suggesting ways to control for key variables? experiments relate to cancer research. contact info in bio.
They are called CROs (contract research organizations). Pay them money and they will work on your experiments for you.