December 20, 2010
Putting Some Numbers on Peer Review
Since we've been talking about peer review on and off around here, this paper in PLoS One is timely. The authors are putting some numbers on a problem that journal editors have long had to deal with: widely varying reviews from different referees on the very same paper.
It's a meta-analysis of 52 studies of the problem reported over the last few decades, and it confirms that yes, inter-reviewer reliability is low. The studies that report otherwise turn out to have smaller sample sizes and other signs that their own results are less trustworthy. The question now is: to what extent is this a problem?
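For readers wondering what "inter-reviewer reliability" actually measures, here's a minimal sketch of Cohen's kappa, one of the standard chance-corrected agreement statistics used in this literature. To be clear, this is my own illustration with invented verdicts, not anything from the paper (which works from the published reliability coefficients):

```python
# A toy illustration (not from the paper) of how agreement between two
# referees is typically quantified: Cohen's kappa on paired verdicts.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Agreement between two referees, corrected for chance agreement."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each referee's marginal rate per category.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical verdicts from two referees on the same ten manuscripts:
ref_1 = ["accept", "reject", "accept", "accept", "reject",
         "accept", "reject", "accept", "reject", "accept"]
ref_2 = ["accept", "accept", "accept", "reject", "reject",
         "accept", "reject", "reject", "reject", "accept"]

print(f"kappa = {cohens_kappa(ref_1, ref_2):.2f}")  # 0.40: middling agreement
```

Here the two referees agree on seven papers out of ten, but since they'd agree on five by chance alone, the kappa comes out a middling 0.40. Values near zero, which is what many of the 52 studies report, mean the referees might as well be flipping coins at each other.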
One of the studies they quote maintains that too high a level of agreement would also be a sign of a problem (that some of the reviewers are redundant, and that the pool of referees might have been poorly chosen). I'm willing to believe that total agreement is probably not a good thing, and that total disagreement is also trouble. So what level of gentlemanly disagreement is optimal? And are most journals above it or below?
Figuring that out won't be easy. Some journals would really have to open their books for a detailed look at all the referee comments that come in. I assume that there are editors who keep an eye on their reviewers, looking for the ones who tend to be outliers in the process. (Um, there are some editors who do this, right?) But that takes us back to the same question: do you value those people for the perspective they provide, or do you wonder if they're just flakes? Without a close reading of what everyone had to say about the whole crop of submissions, it's hard to say. Actually, it might not be easy even then. . .
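For what it's worth, here's a sketch of the sort of bookkeeping such an editor might do: compare each referee's score on a manuscript against the mean of the co-referees on that same manuscript, and see who deviates consistently. All the referee names and scores below are invented for illustration:

```python
# A hedged sketch of outlier-referee spotting. Everything here is made up;
# a real journal would run this over its own review records.
from statistics import mean

# manuscript -> {referee: score on a 1-5 scale}
reviews = {
    "ms-101": {"ref_a": 4, "ref_b": 4, "ref_c": 1},
    "ms-102": {"ref_a": 3, "ref_b": 2, "ref_c": 1},
    "ms-103": {"ref_a": 4, "ref_c": 1, "ref_d": 4},
}

def deviation_by_referee(reviews):
    """Average gap between each referee's score and their co-referees' mean."""
    gaps = {}
    for scores in reviews.values():
        for ref, score in scores.items():
            others = [s for r, s in scores.items() if r != ref]
            if others:
                gaps.setdefault(ref, []).append(score - mean(others))
    return {ref: mean(vals) for ref, vals in gaps.items()}

for ref, gap in sorted(deviation_by_referee(reviews).items()):
    print(f"{ref}: mean deviation {gap:+.2f}")
# ref_c lands a full 2.5 points below the co-referees, paper after paper.
# Whether that's a flake or a valuable dissenting eye is the editor's call.
```

Of course, a printout like that only tells you who disagrees, not whether they're right, which is exactly the problem.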
Category: The Scientific Literature