Saturday, April 19, 2014

How to review a paper

Or, I should say, how I review a paper.

Peer review is a mess. We all know it, and many of us (myself included) have written about it endlessly. And we’ve also railed against a system in which we do all the work for the benefit of the publishers. But wait: if we are doing all the work, then we should be able to bend this system to our collective will, right? When we complain about bad reviews, remember that we ourselves are the ones giving those terrible reviews. So our goal should be to give good reviews!

How do you do that? Here are some principles I try to think about and follow:

1. Don’t review papers you think will be bad. You can usually tell whether a paper is low quality (and thus unlikely to make the cut) just from the title and abstract. If you think reviewing it would be a waste of your energy, it probably is, so don’t spend your time on it. Somebody else will do it. Or not. It’s not your obligation. Beware also the ego issue.

2. Give the authors the benefit of the doubt. The authors have been thinking about the problem for years; you have been thinking about it for a couple of hours. So if you find a particular point confusing, either try to figure it out and really understand it, or just give the authors the benefit of the doubt that they did the right thing. They may not have, but so what? The alternative is worse, in my opinion. I hate getting the “The authors must check for XYZ if they want to make that claim” comment when in fact we already did exactly that.

3. Stick to the evidence and the claims. I stay away from any of that crap about “impact”. To me, that is an editorial responsibility. I try my best to just evaluate the evidence for the claims that the authors are making. If the authors claim something their evidence cannot support, then I just say that the authors should alter their claim. And I try to say what they can claim, rather than just what they can’t claim. I generally do NOT propose new experiments to support their claim.

4. Do not EVER expand the scope of the paper. It is not your paper, and so it is neither your responsibility nor mandate to dictate the authors’ research priorities. The worst reviews are the ones in which the reviewers ask the authors to do a whole new PhD’s worth of studies. Often, these types of comments include vague prescriptions about novelty and mechanism. To wit, check out this little snippet from another reviewer for a paper I reviewed recently: "However, the major pitfall of current study is not to provide any novel information/mechanism behind the heterogeneity.” What is the author supposed to do with that?

5. Worth repeating: do not propose new experiments. Reviews with a laundry list of experiments are very disheartening, and usually don’t add much to the paper. Remember that experiments are expensive, and sometimes the main author has left the lab, making it very hard to even do the suggested experiments. Again, it is far better to just have the authors alter their claims in line with what their available data shows. I will sometimes explicitly say “The authors do not need to do an experiment” just to make that clear to the authors and the editor.

6. Be specific. If something is unclear, say exactly what it is that was confusing. If some claim is too broad, then give a specific alternative that the authors should consider (I’ll say “the authors could consider and discuss alternative hypothesis XYZ”). This helps the authors know exactly what they need to do to get the paper through. Also, clearly delineate which points are critical and which are minor.

7. Be positive and nice. Every paper represents a lot of someone’s blood, sweat, and tears (usually a young scientist’s), and scathing reviews can be personally devastating. If you have to be negative (which is difficult if you follow the above rubric), then try not to phrase your review in terms like “I don’t know why anybody would ever do this”. Here’s an example of the opening lines from a review we got a while ago: “My opinion is that this manuscript is not very well thought through and of rather low quality. The authors' misconceptions are most obvious in their gross misstatement… [some relatively minor and inconsequential misstatement]”. Ouch. What’s the point of that? That review ended with “If [after doing a bunch of proposed experiments that don’t make sense] they find that [bunch of stuff that is irrelevant] they would begin to address their question.” It’s very belittling to basically imply that after all this work, we are still at square one. Not true, not positive, and not nice.

8. (related) Write as though you were not anonymous. That will help with all the above issues.

One other note: I realize that for editors, even academic editors, the issues of novelty and impact are difficult to gauge and that they feel the need to lean on the reviewers for this information. Fine, I get that. But I will not provide it. Sorry.

Anyway, my main point is that WE run this system. It is within our power to change it for the better. Train your people and yourself to be better reviewers, and maybe we will all even be better people for it.

4 comments:

  1. Very interesting article. However, I'm not sure I understand your point 2: if the authors did address XYZ and ran experiments to disprove it, they should at least mention it (with a "data not shown" if necessary).

    The alternative would be to let papers get out with potential gross mistakes undermining the conclusions. Perhaps the authors did consider XYZ, but as a reader 5 years later, I would have no way to know for sure. And perhaps they completely missed the problem raised by XYZ, and it would be a bad idea to let this paper just get out there without even mentioning it.


    But perhaps I just didn't get what you mean.

    Replies
    1. Ah, sorry, perhaps that wasn’t clear. In point 2, I was just saying that one should give the author the benefit of the doubt. If the author did some experiment to address something, then for sure say that they should mention those experiments in their main text. I was just trying to say that I personally get very annoyed when reviewers make statements like “The authors should check for XYZ” when that’s already in supplementary figure 52, which they have clearly just not looked at. Fine, I don’t expect reviewers to read all of that stuff, but if a reviewer is going to say something, then they should be sure to check carefully that the authors really didn’t already do it. If they aren't going to check, then don’t bring it up.

  2. arjun, this may be a bit dated as i found your blog recently, but this is an important topic, hence i shall comment. i appreciate your time & spirit in writing on an important subject like peer-reviewing. but honestly, do you think most big guys even review papers? except if it came from another big guy's lab and the paper is for nature or science? let's face it: most papers are reviewed by senior grad students/post-docs/junior faculty. therefore, by default, they are very competitive. they are in line for a job/tenure/fresh grant/grant renewal, and hence being "nice" (part of your #7) is the last thing on their mind.

    the main job of a reviewer should be to be objective (to me, being nice is part of being objective), but we are all humans at the end of the day. the first reaction we have when we see a paper from stanford versus one from university of south bangalore is quite different. the real issue is how scientific productivity is measured and how that is currently done. can i get credit for the peer-reviewing of 5 manuscripts that i did last year, which took me many weeks to read, digest and provide constructive criticism to the authors? the current system for assigning scientific credit is not just broken, it sucks. we need to fix that first before talking about peer review. once that's done, the rest will automatically follow. thank you. binay panda

  3. I always find it difficult to write a review because I don't know how to tie my points together. Learned a lot here.
