Monday, May 6, 2019

Wisdom of crowds and open, asynchronous peer review

I am very much in favor of preprints and open review, but something I listened to on Planet Money recently gave me some food for thought, along with a recent poll I tweeted about re-reviewing papers. The episode was about the wisdom of crowds: take a large number of non-expert guesses about, say, the weight of an ox, and the average comes out surprisingly close to the actual value. Pretty cool effect!

But something in the podcast caught my ear. They talked about how, when they asked some kids, you had to watch out: once one kid said, say, 300 pounds (wildly inaccurate), the other kids who heard it would all start saying 300 pounds too. Maybe with some minor variation, but the point is that they were strongly influenced by that initial guess rather than picking something essentially at random. If you have no point of reference, even a bad guess provides one.

Okay, so what does this have to do with peer review? What got me thinking about it was the tweet about re-reviewing a paper you had already seen but for a different journal. I'm like nah, not gonna do it because it's a waste of time, but some people said, well, you are now biased. So… in a world where we openly and asynchronously review papers (preprints, post-pub, whatever), we would have the same problem as the kids guessing the weight of the ox: whoever gives the first opinion could strongly influence all subsequent opinions. With conventional peer review, everyone reviews blind to the others, so reviews can be considered more independent samplings (probably dramatically undersampled, but that's another blog post). But imagine someone comments on a preprint with some purported flaw. That narrative is very likely to color subsequent reviews and discussions. I think we've all seen this coloring: take eLife collaborative peer review, or even grant review. Everyone harmonizes their scores, and it's often not an averaging. One could argue that unlike randos on the internet guessing an ox's weight, peer reviewers are all experts. Maybe, but once we're in the world of experts reviewing what is hopefully a reasonably decent paper, I'm not so sure there's much signal beyond the noise.
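To make the anchoring point a bit more concrete, here's a toy simulation (all the numbers, like the ox's weight, the noise, and how strongly people lean on the first guess, are made up for illustration; none of this is from the Planet Money episode). Independent guesses average out near the truth; guesses anchored on a wildly-off first guess don't.

```python
import random
import statistics

random.seed(0)

TRUE_WEIGHT = 1200   # hypothetical "true" ox weight in pounds (made-up number)
N_GUESSERS = 500

# Independent guesses: each person only has their own noisy estimate.
independent = [random.gauss(TRUE_WEIGHT, 300) for _ in range(N_GUESSERS)]

# Anchored guesses: everyone hears the first, wildly-off guess and blends it
# with their own noisy estimate, like the kids echoing "300 pounds".
anchor = 300
ANCHOR_PULL = 0.7    # made-up weight on the first guess vs. one's own estimate
anchored = [anchor]
for _ in range(N_GUESSERS - 1):
    own_estimate = random.gauss(TRUE_WEIGHT, 300)
    anchored.append(ANCHOR_PULL * anchor + (1 - ANCHOR_PULL) * own_estimate)

print(f"true weight:         {TRUE_WEIGHT}")
print(f"independent average: {statistics.mean(independent):.0f}")
print(f"anchored average:    {statistics.mean(anchored):.0f}")
```

With the anchor in the mix, the "crowd" never converges on the truth no matter how many guessers you add; it converges on a blend of the truth and whatever the first person happened to say.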

What could we do about this? Well, we could commission someone to hold all the open reviews in confidence and then publish them all at once… oh wait, I think we already have some annoying system for that. I dunno, not really sure, but anyway, it was something I was wondering about recently. Thoughts welcome.

2 comments:

  1. Interesting post. What I like about open peer review is that it is useful for teaching other reviewers what they might have missed in their own reviews. (Of course, it could also show them the nasty side of peer review, but I rarely see that in the journals that I have served as Editor for.)

    Also, I have a slightly different example of this kind of bias. I was asked to review a review paper in one of those online discussion journals. Because the authors would figure out who I was anyway, I chose not to be anonymous. I thought the review paper was worthwhile, but there were many flaws in the content and writing that required extensive revisions. Done right, this would probably take the authors many months, so I recommended rejection and resubmission after the flaws were fixed.

    Friends and colleagues of the coauthors started posting reviews about how important this paper was to publish, most of them neglecting my point that it was poorly written and needed substantial work.

    In this case, the open peer-review process allowed a camp to form around support for the manuscript. I don't know if the Editor took these comments into account in the decision.

  2. OMG this is why I never read movie reviews for movies I'm excited about beforehand; they always prejudice me toward/against the movie!!!
