About this Author
[Photo: college chemistry, 1983]

[Photo: Derek Lowe, the 2002 model]

[Photo: after 10 years of blogging. . .]

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter: Dereklowe

In the Pipeline

December 20, 2010

Putting Some Numbers on Peer Review

Posted by Derek

Since we've been talking about peer review on and off around here, this paper in PLoS ONE is timely. The authors are putting some numbers on a problem that journal editors have long had to deal with: widely varying reviews from different referees for the very same paper.

It's a meta-analysis of 52 studies of the problem reported over the last few decades. It confirms that yes, inter-reviewer reliability is low. The studies that report otherwise turn out to have smaller sample sizes and other signs of lower reliability. The question now is: to what extent is this a problem?
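
For readers who haven't run into these agreement statistics, here's a minimal sketch - with invented accept/reject calls, not data from the paper - of Cohen's kappa, one common chance-corrected way to put a number on how often two referees agree about the same manuscripts:

def cohens_kappa(ratings_a, ratings_b):
    # Chance-corrected agreement between two raters over the same items.
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of items where the two raters coincide.
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if each rater just applied their own base rates at random.
    p_expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

referee_1 = ["accept", "reject", "accept", "accept", "reject", "accept"]
referee_2 = ["reject", "reject", "accept", "reject", "reject", "accept"]
print(cohens_kappa(referee_1, referee_2))  # about 0.4: positive, but far from perfect

A kappa of 1 means the referees always agree and 0 means they agree no more often than chance would predict; the meta-analysis's point is that real reviewer pairs tend to land uncomfortably close to the low end.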

One of the studies they quote maintains that too high a level of agreement would also be the sign of a problem (that some of the reviewers are redundant, and that the pool of referees might have been poorly chosen). I'm willing to think that total agreement is probably not a good thing, and that total disagreement is also trouble. So what level of gentlemanly disagreement is optimal? And are most journals above it or below?

Figuring that out won't be easy. Some journals would really have to open their books for a detailed look at all the comments that come in. I assume that there are editors who keep an eye on their reviewers, looking for the ones who tend to be outliers in the process. (Um, there are some editors who do this, right?) But that takes us back to the same question - do you value those people for the perspective they provide, or do you wonder if they're just flakes? Without a close reading of what everyone had to say about the crop of submissions, it's hard to say. Actually, it might not be easy even then. . .
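
For what it's worth, the bookkeeping itself isn't hard; the judgment call is. Here's a hypothetical sketch (referee names and scores are made up) of the sort of thing an editor could do: compare each referee's average score across submissions to the rest of the pool and flag whoever sits far outside it:

from statistics import mean, stdev

# Each referee's scores across the same stack of submissions (invented numbers).
referee_scores = {
    "referee_A": [6, 7, 5, 8, 6, 7],
    "referee_B": [1, 2, 1, 1, 2, 1],   # rates nearly everything harshly
    "referee_C": [7, 6, 8, 7, 6, 9],
    "referee_D": [5, 6, 7, 5, 6, 6],
    "referee_E": [6, 5, 7, 6, 6, 6],
    "referee_F": [7, 6, 8, 7, 6, 7],
}

per_referee_mean = {name: mean(s) for name, s in referee_scores.items()}
pool_mean = mean(per_referee_mean.values())
pool_sd = stdev(per_referee_mean.values())

for name, m in per_referee_mean.items():
    z = (m - pool_mean) / pool_sd
    flag = "  <-- outlier?" if abs(z) > 1.5 else ""
    print(f"{name}: mean {m:.1f}, z = {z:+.2f}{flag}")

Whether referee_B there is a flake or the only one applying real standards is, of course, exactly what the numbers can't tell you.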

Comments (21) + TrackBacks (0) | Category: The Scientific Literature


COMMENTS

1. SP on December 21, 2010 8:54 AM writes...

I bet the reviews on that paper were funny.

2. Special Guest Lecturer on December 21, 2010 10:06 AM writes...

I believe the reviews should be published alongside the paper. It would provide transparency into what issues were raised (and what were not) by the reviewers, and would also protect the authors from political dirty tricks.

3. Pete on December 21, 2010 12:13 PM writes...

Special Guest Lecturer (commentator #2) makes an excellent suggestion. If the journals would allow comments to be posted, people could discuss both the article and the reviewer comments.

4. JimB on December 21, 2010 1:14 PM writes...

Unfortunately, most of the comments would be along the lines of...

"How is this JACS worthy? An Org Lett at best, maybe even a Tet Lett!"

Rather than suggested experiments, useful applications, or "tried it and it doesn't work."

5. Not-Medchem on December 21, 2010 1:51 PM writes...

We don't know what the basis of disagreement is in any review. Even significant disagreement might actually be good; it depends on what it is.
For example, if reviewers differ very widely in what problems they identify, you could argue that they were the wrong reviewers, or that they didn't exercise proper diligence.

If on the other hand all raise the same issues but they differ widely in personal interpretations then we have a very different story.

It seems that we have several different cases, most assuming editors are overloaded:

1. Reviewers were capable, but had no time and were cursory in their review, especially when reviewing a well known lab. I sort of hope the nevirapine optical activity story lies here.

2. Reviewers were thoughtful, but not from a relevant experience set. The arsenate/phosphate case might fit here.

3. Reviewers were used because someone said you have to, not because they want to actually check anything. I wonder if some of the retracted high profile papers in places like Science / Nature fit in here? Or, would that be #1?

4. Reviewers were incompetent, and editors had no time to really check.

5. Reviewers were incompetent, editors were absent and it is basically a vanity press revenue stream for the publisher.

Unfortunately there are too many of #5. And as long as they count toward promotion and tenure, or funding, the system will only get worse.

You could argue that #1-#4 are not much better.

I have a suggestion, though. What about a primary / secondary publication model for each journal, rather than A-list journals vs. the rest? Publish one article, but divide it: the single most important discovery goes into the primary part, and everything else goes into secondary material. In essence, push much more into supplementary information.

If there is a follow on article it would simply get tacked onto the original unless there was a new main point at least as important as the first.

It still needs review, but the effort could be pushed into the main point first: put that out, and follow up with review of the other points later. The "secondary" review could even be done in a social network context, including deciding to split follow-up papers out.

I echo here part of the sentiment of posting reviews. Unfortunately those ivory towers are full of sharp horns...

6. none on December 21, 2010 2:56 PM writes...

"Reviewers were incompetent, editors were absent and it is basically a vanity press revenue stream for the publisher."

You mean like PLoS ONE?

7. newnickname on December 21, 2010 3:42 PM writes...

On some aspects of a submission, it isn't a question of too much agreement or too much disagreement, it's a question of getting it right.

There should be 100% agreement on most matters of data, its analysis, interpretation, and presentation with relevant precedents. How many of you have had to point out that the "new peak" in the NMR was not proof of a breakthrough discovery but residual solvent?

Whether an article, once vetted, is suited to a particular venue is more subjective.

Which reminds me of so many "fun" journals in the history of chemistry. Chimia used to allow occasional chemical "joke" articles ... on purpose, that is. I think Nakanishi used to joke with Nozoe about publishing their mss's in the Journal of Nozoe Chemistry. (I think it was Nozoe.)

8. CR on December 21, 2010 3:53 PM writes...

I like the idea of publishing the reviewers' comments along with the article itself. I would not want open comments - we already have blogs for that. But it would be interesting to see what each reviewer (anonymously, of course) thought about the manuscript, as well as the authors' rebuttal.

I remember reviewing an article for BMC and giving it a "major revision" label, only to have an email not two days later saying it had been accepted with no revisions. I emailed the editor and was told that he and the author had spoken about the article and that he agreed with the author. At times like this it would be interesting to see the reviewer comments.

9. trrll on December 21, 2010 4:51 PM writes...

What this tells us is that with only 2 or 3 reviewers, whether or not a paper is accepted will tend to be dominated by luck of the draw. I don't think that this will surprise much of anybody.
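
A toy simulation makes the point concrete. Everything below (the 0-10 quality scale, the per-referee noise, the acceptance threshold) is invented for illustration, but the qualitative behavior is similar for any comparable numbers:

import random

random.seed(0)

def panel_decision(true_quality, n_reviewers, noise_sd=1.5, threshold=6.0):
    # Each referee scores the manuscript with independent noise around its
    # "true" quality; the panel accepts if the mean score clears the threshold.
    scores = [true_quality + random.gauss(0, noise_sd) for _ in range(n_reviewers)]
    return sum(scores) / n_reviewers >= threshold

def acceptance_rate(true_quality, n_reviewers, trials=20000):
    accepted = sum(panel_decision(true_quality, n_reviewers) for _ in range(trials))
    return accepted / trials

# A manuscript sitting a bit above the bar (quality 6.5 against a threshold of 6.0):
for n in (2, 3, 10):
    print(f"{n} reviewers: accepted {acceptance_rate(6.5, n):.0%} of the time")

With panels of two or three, that perfectly decent manuscript still gets rejected roughly a quarter to a third of the time; it takes a far larger panel than any journal actually uses to make the decision reproducible.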

10. Maks on December 21, 2010 7:12 PM writes...

EMBO actually publishes all the correspondence between the reviewers and the authors:
http://www.nature.com/emboj/about/process.html

Hopefully others will follow...

11. stupid suggestion on December 21, 2010 7:44 PM writes...

What if we had a more social publishing forum, e.g. a reddit model? Yes, there may be some bad work published, but that's what the group moderation is for... keeping each other on our toes. We'd have to learn to be polite and provide constructive criticism.
Funding to host the articles (incl. maintenance) might be an issue; I wonder if advertising from suppliers on the page might generate enough revenue?

12. BSW on December 21, 2010 7:58 PM writes...

I would argue not only for open reviewing and posting of reviews, but for a comment thread on publication. However, I would want commenters to be vetted. If users are verified as being genuine members of an industrial firm, little startup, academic institution, or govt lab, and they post under their own names, you will get far less "Your MOM! Oh, YEAH?!? YOUR Mom!" commentary, and a lot more of the useful "see how smart I am" suggestions like "have you considered this experiment?". Now, it won't ensure perfect civility (even at meetings, where everyone knows who you are, some people act like jerks), but it helps. Sure, it's added work to have people register in such a way that they can be verified, but not that much work. Our community of inorganic chemistry educators verifies "faculty users" for certain privileges on the site (non-faculty members can do most things on the site, but things like solution keys are faculty only), and it's doable, especially for an institution with a subscription system already set up.

13. Vrgil on December 21, 2010 11:10 PM writes...

The "open" review process breaks down very quickly, when one is forced to write a comment such as this to the editors...

"While this paper is interesting, the authors have been banging away at this same experimental model for 20 years, and my gut reaction is "boring". Yes, it's technically correct. No, there's nothing wrong with the science. It's just more of the same from the same group and it's an easy paper for them. They're capable of more than this same old drivel".

Such a comment might be misconstrued by an author as "politics" being the reason their paper got rejected, when in fact all of the above is a perfectly legitimate reason to reject a paper. I don't think I'd be happy reading such a comment, either as an author or a casual viewer.

Whatever you say about the current system, it works 99.9% of the time. Now if we could just deal with the other 0.1% that slip through the cracks....

14. Ich Dich on December 22, 2010 2:48 AM writes...

I have given anonymous submission some thought. A paper by a "big name in chemistry" will be accepted much more easily, because of the trustworthiness of the name. Although we all look at the origin of a paper while we referee it to assess its value, this knife cuts both ways. Often, a "big name" will publish similar results in a higher-level journal than, let's say, a start-up lab from India.
Obviously, the name and origin of the authors can have a large effect on peer review, both positive and negative. I have noticed that more and more journals are beginning to ask for "not preferred as referees" lists, so that authors can avoid competition issues.

I have no clear-cut answer for this, but still, it's hard to judge only the pure scientific results when we all look at the names on the manuscript.

15. cliffintokyo on December 22, 2010 4:36 AM writes...

Agreed, reviewer comments could be published, as long as the author's response is also published.
As #11 says, being polite and constructive is of course essential.

16. Iridium on December 22, 2010 7:54 AM writes...

One thing should be noted regarding reviewing.
Many professors do referee work for more than 10 journals. That can mean 1-2 papers a day, which is a lot of work on top of your normal work.
I don't have a suggestion about it, but I think it is important to include it in the discussion.

Published reviewer comments:

I think reviewer comments should be published:
- I would not publish the comments on the novelty and importance of the article. We could have a non-disclosed section of the referee report.
Things like "poorly written", "sloppy work" and "why do you send me this b..." could also go in that section.
- Only "scientific comments" would be published: key suggestions, missing experimental data, motivated concerns about mechanisms, etc.

I also vote for keeping the reviewer anonymous:
- many professors love to read their own prose and would add 50 comments to show how good they are, and/or to avoid having their review performance questioned later by readers.
- at the same time, knowing the name of the person rejecting your papers might, in the long run, start some really nasty "academic secret wars".

17. Anonymous on December 22, 2010 11:04 AM writes...

@Vrgil "While this paper is interesting, the authors have been banging away at this same experimental model for 20 years, and my gut reaction is "boring". Yes, it's technically correct. No, there's nothing wrong with the science. It's just more of the same from the same group and it's an easy paper for them. They're capable of more than this same old drivel".

This is NOT a legitimate reason to reject a paper. The science needs to stand alone, regardless of who the authors are. If the same work would be publishable if from a group who is new to the field, it should be equivalently publishable from a senior group that is cranking the wheel. The fundamental question is whether the science itself is worthy of publication, not whether it was easy or hard for any particular group. This is just the kind of comment/opinion that an open review process would ferret out in a useful way.

18. Anne on December 22, 2010 4:02 PM writes...

I don't know that you could necessarily apply this model to reviewers for a journal, but when you propose for telescope time, the proposal comes back with, in addition to a yes/no, anonymized reviewer comments. These comments are usually just a few sentences about the proposal ("Valuable result if detected" or "Unclear how the amount of time to request was chosen" or whatever), but they are accompanied by a numerical rating and, interestingly, a mean and standard deviation for all that reviewer's ratings. So if a reviewer rates everything one out of five except for a few good ones that get two, you can tell. And as I understand it the TAC normalizes all the scores before combining them to make their decision. So to some extent reviewers' habits are controlled for.
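
Presumably something along these lines (the ratings are made up; rev_1 is the hypothetical rate-almost-everything-a-one reviewer described above):

from statistics import mean, stdev

# reviewer -> {proposal: raw score on a 1-5 scale}
raw = {
    "rev_1": {"P1": 1, "P2": 2, "P3": 1, "P4": 1},   # harsh across the board
    "rev_2": {"P1": 3, "P2": 5, "P3": 2, "P4": 4},
}

def zscores(scores):
    # Rescale one reviewer's ratings by that reviewer's own mean and spread.
    m, s = mean(scores.values()), stdev(scores.values())
    return {prop: (v - m) / s for prop, v in scores.items()}

normalized = {rev: zscores(scores) for rev, scores in raw.items()}

# Combine: average each proposal's normalized scores across reviewers, then rank.
proposals = raw["rev_1"].keys()
combined = {p: mean(normalized[r][p] for r in raw) for p in proposals}
for p, score in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(p, round(score, 2))

Because each reviewer's scores are rescaled against that reviewer's own habits before combining, a habitual low-baller's rare "2" counts for as much as a more generous rater's "5".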

19. heteromeles on December 23, 2010 8:25 PM writes...

A couple of notes:

#5: There is a Category 6: the reviewer is defending their own field of work by trashing your paper (and similar intra-field politics). I stumbled into that by accident, when I didn't realize that a lab was trying to corner the market on a particular line of research. Both the prof and the grad student "independently" reviewed the work, and both trashed it.

As for reviews, I'm not fond of publishing them. Some of the best reviews I've received (or given, for that matter) were ones that improved the paper. Publishing a list of paragraphs clarified and typos caught is not useful. Nor is publishing a "good paper, publish it" style review.

20. cliffintokyo on December 27, 2010 4:45 AM writes...

@15 revisited
I should have added that I agree review comments should only be published anonymously, to avoid *academic wars* (#17; usually not so secret!)
And yes, only scientific appraisal, not technicalities. Reviews would probably need to be drastically edited to eliminate the casual put-downs, etc. mentioned in the comments above.
Perhaps the review form could be designed to have a 'scientific appraisal overview' section?
I would be happy for most of my comments as a reviewer (pre-electronic; revealing my age here!) to have been made *available*.

21. HappyDog on December 29, 2010 9:41 AM writes...

Can I make a rant about publishers?

In the past month, I've had two papers appear in journals from different publishers that apparently ignored the corrections I made on the galley proofs.
