I am very much in favor of preprints and open review, but something I listened to on Planet Money recently gave me some food for thought, along with a recent poll I tweeted about re-reviewing papers. The episode was about the wisdom of crowds, and how, almost magically, if you take a large number of non-expert guesses about, say, the weight of an ox, the average comes out pretty close to the actual value. Pretty cool effect!
But something in the podcast caught my ear. They talked about how, when they asked some kids, you had to watch out: once one kid said, say, 300 pounds (wildly inaccurate), the other kids who heard it would all start saying 300 pounds too. Maybe with some minor variations, but the point is that they were strongly influenced by that initial guess rather than picking something essentially at random. The thing is that if you have no point of reference, even a wild guess provides one.
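Just to make the effect concrete, here is a toy simulation (my own sketch, not from the podcast; the true weight, the noise level, and the strength of the "pull" toward the anchor are all made-up numbers) showing how averaging works fine over independent guesses but falls apart once everyone anchors on the first number they heard:

```python
import random

random.seed(0)

TRUE_WEIGHT = 1200  # pounds; hypothetical "actual" ox weight for this toy example

def independent_guesses(n):
    """Each guesser makes a noisy but unbiased estimate around the true weight."""
    return [random.gauss(TRUE_WEIGHT, 400) for _ in range(n)]

def anchored_guesses(n, anchor=300, pull=0.8):
    """Each guesser still forms their own noisy estimate, but it gets pulled
    toward the first number they heard (the anchor)."""
    return [pull * anchor + (1 - pull) * random.gauss(TRUE_WEIGHT, 400)
            for _ in range(n)]

n = 200
ind = independent_guesses(n)
anc = anchored_guesses(n)

print(f"true weight:         {TRUE_WEIGHT}")
print(f"independent average: {sum(ind) / n:.0f}")  # lands near 1200
print(f"anchored average:    {sum(anc) / n:.0f}")  # dragged way down toward 300
```

The independent average lands close to the true value; the anchored average gets dragged toward whatever the first kid happened to say, no matter how many guessers you add.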
Okay, so what does this have to do with peer review? What got me thinking about it was the tweet about re-reviewing a paper you had already seen but for a different journal. I'm like nah, not gonna do it because it's a waste of time, but some people said, well, you are now biased. So… in a world where we openly and asynchronously review papers (preprints, post-pub, whatever), we would have the same problem as the kids guessing the weight of the ox: whoever gives the first opinion could strongly influence all subsequent opinions. With conventional peer review, everyone works blind to the others, so the reviews can be considered more independent samplings (probably dramatically undersampled, but that's another blog post). But imagine someone comments on a preprint with some purported flaw. That narrative is very likely to color subsequent reviews and discussions.

I think we've all seen this coloring: take eLife collaborative peer review, or even grant review. Everyone harmonizes their scores, and it's often not an averaging. One could argue that unlike randos on the internet guessing an ox's weight, peer reviewers are all experts. Maybe, but once we are in the world of experts reviewing what is hopefully a reasonably decent paper, I am somehow not so sure there's much signal beyond noise.
What could we do about this? Well, we could commission someone to hold all the open reviews in confidence and then publish them all at once… oh wait, I think we already have some annoying system for that. I dunno, not really sure, but anyway, it was something I was wondering about recently. Thoughts welcome.