In the Pipeline

November 5, 2010

Peer Review's Problems

Posted by Derek

Over at Ars Technica, here's an excellent look at the peer review process, which I last spoke about here. The author, Chris Lee, rightly points out that we ask it to do several different things, and it's not equally good at all of them.

His biggest problem is with the evaluation of research proposals for grants, and that has indeed been a problem for many years. Reviewing a paper, where you have to evaluate things that other people have done, can be hard enough. But evaluating what people hope to be able to do is much harder:

. . .Reviewers are asked to evaluate proposed methods, but, given that the authors themselves don't yet know if the methodology will work as described, how objective can they be? Unless the authors are totally incompetent and are proposing to use a method that is known not to work in the area they wish to use it, the reviewer cannot know what will happen.

As usual, there is no guarantee that the reviewer is more of an expert in the area than the authors. In fact, it's more often the case that they're not, so whose judgement should be trusted? There is just no way to tell a good researcher combined with incompetent peer review from an incompetent researcher and good peer review.

Reviewers are also asked to judge the significance of the proposed research. But wait—if peer review fails to consistently identify papers that are of significance when the results are in, what chance does it have of identifying significant contributions that haven't yet been made? Yeah, get out your dice. . .

And as he goes on to point out, the consequences of a badly reviewed grant proposal are much worse than those of a badly reviewed paper. They are both immediate (for the researcher involved) and systemic:

There is also a more insidious problem associated with peer review of grant applications. The evaluation of grant proposals is a reward-and-punishment system, but it doesn't systematically reward good proposals or good researchers, and it doesn't systematically reject bad proposals or punish poor researchers. Despite this, researchers are wont to treat it as if it was systematic and invest more time seeking the rewards than they do in performing active research, which is ostensibly where their talents lie.

Effectively, in trying to be objective and screen for the very best proposals, we waste a lot of time and fail to screen out bad proposals. This leads to a lot of cynicism and, although I am often accused of being cynical, I don't believe it is a healthy attitude in research.

I fortunately haven't ever had to deal with this process, having spent my scientific career in industry, but we have our own problems with figuring out which projects to advance and why. Anyone who's interested in peer review, though, should know about the issues that Lee is bringing up. Well worth a read.

Category: The Scientific Literature | Who Discovers and Why


COMMENTS

1. Virgil on November 5, 2010 9:03 AM writes...

I guess I won't be the only person to say it, but here goes anyway...

The peer review system is, unfortunately, the best of a bad lot. Until someone can come up with a system that is better (measured how?), we're pretty much stuck with it. Such a replacement system needs to be fully accountable, anonymous, and not subject to any PERCEPTION of conflict of interest (which, as we all know, is often more important than actual conflict of interest).

The replacement system also needs to be cheap - the current peer review system has one major advantage - it uses lots of free labor from academics. I don't see the introduction of a paid review system as a win. Journals with paid editors (ahem, Nature) do not have the most stellar reputations when it comes to accepting good >> bad papers.

2. pi* on November 5, 2010 9:04 AM writes...

Except -- it is impossible to get a grant proposal funded unless you have already shown that it works.
In theory you propose work. In practice you use your first communication as proof of concept.

3. Curious Wavefunction on November 5, 2010 9:09 AM writes...

Quite true. The frequent gap between the expertise of the reviewers and that of the authors is especially a problem. In my opinion, a good group of reviewers would be one that combines experts from different fields. Then one could get both an insider and an outsider perspective on the proposal.

Harry Kroto also had some interesting thoughts on the system ("the most ludicrous system ever devised" in his opinion).

4. Robert Kral on November 5, 2010 9:17 AM writes...

Having been through the SBIR mill, I can say that most proposals that get funded are proposals for work that has mostly been finished already. That's the only way you can satisfy the criticisms of most reviewers, whose suggestions cannot be met unless you already know the outcome. This is certainly true for NIH, much less so for USDA. Truly innovative research has a hard time getting traction in the NIH process, though there are exceptions (NINDS is much better in this respect than NCI, for example).

5. p on November 5, 2010 9:56 AM writes...

There is also the odd bit that if your proposal is rejected, there is no way to appeal. I've had several papers initially rejected where I could go back to the editor and say, "Reviewer A is wrong, and here is my evidence." The editors then look at it and decide whether I'm right or not. In two cases, they've decided I made a good point. Once it was sent out for further review (and, ultimately, accepted); once it was just accepted (one bad review out of three led to rejection - the bad review was based mostly on the fact that the reviewer had mis-analyzed the spectra).

In a similar case, I had a proposal declined based mostly on one poor review. I was able to show three completely incorrect assertions in the bad review. The PO told me I was right, but it was still my fault because I hadn't been more clear. So my proposal was declined based on an erroneous review. Annoying.

6. CMCguy on November 5, 2010 10:02 AM writes...

This post elaborates on a comment I made on "Where Drugs Come From": "In some ways I see the current NIH process as inhibitory, forcing incrementalization rather than promoting larger steps forward." I believe peer review is part of that: if reviewers are overly skeptical or critical, the funding for possible larger leaps, which often require greater vision or luck, does not occur. Certainly, if a reviewer sees something they know will not work, then they should indicate that and provide some support for the rejection (admittedly hard to do in blinded fashion). There is still no Journal of Failed Results, which I think would save most of us a great deal of effort chasing doomed paths.

My sense is there is a bit too much of an "I am smarter than this person, so the idea is no good" mentality in peer reviews. There truly needs to be some portion of "unrestricted grants" so people are willing to take risks without damage from ideas not panning out. Of course, as #2 pi* suggests, a fair number of grants are made on work that has already been done (which is exploiting the process?).

7. dearieme on November 5, 2010 10:07 AM writes...

It would do public science a hell of a lot of good if part of the research funding were disbursed by drawing lots. Seriously. You'd have to decide how one qualifies to be a holder of a lottery ticket, of course.

8. SP on November 5, 2010 11:09 AM writes...

The most frustrating rejection I've had is when a reviewer gave something a bad score because of a factual error - not about the science, but about the scope of the grant. Along the lines of "This kind of grant doesn't support funding for this purchase," when in fact the RFA says that it does. No appeals process, though.

9. partial agonist on November 5, 2010 11:11 AM writes...

The most annoying thing I have experienced so far in writing >10 proposals is that the scoring of your grant can sometimes be "adjusted" based upon "reviews" of people who didn't even read it.

An example: I submitted a grant application and all 3 of the reviewers were very positive in their remarks, even glowing. The final scores, while good (1.8 in the new system), were not as good as their reviews, and the application was not fundable. They noted that, upon a discussion with all members of the panel reviewing grants in that session, there was considerable skepticism about the feasibility of the proposal. So... in other words, the people who didn't read it but just heard an overview said that it wouldn't work, and convinced the people who DID read it to lower their scores.

I'm not sure how to make my proposal more appealing to people who haven't read it. This was a DOD grant and the funding cutoff was something like top 6%.

10. DLIB on November 5, 2010 11:44 AM writes...

I think that the "investigator" score in the NIH process does a fair bit of harm. It is a simple measure of how many papers you've published. It is also likely to be de facto weighted more than the other criteria (scores on this criterion will bunch together). It seems to be assumed that the PI/author of the grant is not competent to pursue the research he/she is proposing. This criterion is one of the driving forces behind stuffing your resume with papers -- money. It's bad for science and bad for innovation.

11. Note This on November 5, 2010 12:33 PM writes...

Virgil said- "The peer review system is, unfortunately, the best of a bad lot."

When the applicant(s) are anonymous and the research judged on its merits, only then will we reach such lofty heights.

12. dvizard on November 5, 2010 7:23 PM writes...

While I (like many other people) have to agree with many of the criticisms of peer review, I rarely see the authors propose any viable alternatives. Criticism is always easy. But what do you want us to do? Randomly spew out grants to random researchers based on nothing at all? Observe the lab for a prolonged period and provide money to the researchers who show the most skill?

The problem is, in the end, that we simply don't have enough money to fund all good scientists in the world. Inevitably, not all good ideas will get their credit. If money was not limiting but good ideas were, then peer review would doubtlessly be a very efficient way to distinguish grant-worthy from not-so-grant-worthy research.

13. dlib on November 5, 2010 11:19 PM writes...

@dvizard... there are suggestions in these posts. Read them and decide whether you think they are worth considering or not, but the suggestions abound. Most are tweaks. I see no problem with anonymous applications, do you? If the grant is considered fundable, but for some reason it was a man in Sing Sing prison who wrote the application (once the grant writer is unmasked by the granting agency administrators), then the next most worthy fundable idea could be considered.

14. p on November 6, 2010 7:49 AM writes...

I, too, have had scores adjusted based on comments of a panel, many of whom didn't read the proposal. For one, there are simply too many proposals and too few reviewers for everyone to read everything.

But the idea of a double-blind system is nice in theory - the PO could then evaluate if the PI could, indeed, carry out the project. But in practice, I'm not sure how you could read a proposal and not have a fair idea of who it came from. You'd need some mechanism to take out references to "our" work and you'd have to totally toss the previous results of funded grants section.

I think the "how" of how grants are awarded is flawed, but acceptable. But the big picture of the entire system promoting small steps and inhibiting leaps of imagination is a problem, long-term. If everyone is doing stuff they're sure will result in a cool paper or three within three years, we'll never get anywhere.

15. Dimsum on November 6, 2010 5:53 PM writes...

I have seen Derek review papers from time to time. If the paper is by a friend of his, he doesn't bother to even read it and lets it go through. However, if it's from someone he thinks is not up to par, he rejects it. That's a "fair" peer review for you.

16. MIMD on November 7, 2010 12:06 PM writes...

Having worked both in academia and as a study section participant/grant proposal reviewer for a federal agency, I can say he has some valid points.

I can add that the open debate that follows initial scoring helps tap the expertise of many in the review panel to root out bad proposals and identify good ones.

Not perfect; for example, someone with a reason for throwing a monkey wrench into the works, such as fearing competition in a similar area of endeavor, can do so.

Clear thinking and avoidance of logical fallacies in the debate are critical.

As to the reward-seeking behavior overshadowing actual performance of research, that might be due in part to the pathology of academia, whereby the best faculty "taxpayers" are treated preferentially. This is particularly true in the medical world.

17. dvizard on November 7, 2010 12:10 PM writes...

@Anonymization: I'm not sure how much it would help. Well-known labs/PIs would be easy to recognize simply from the content of their research anyway. A smaller lab which, say, had a failed project in a specific area might want to hop research areas to conceal the source of the grant -> less consistency in research. People would try to disguise themselves as KC Nicolaou, citing all his stuff in their grant proposals and applying for projects in "his" field -> more concentration on blockbuster/buzzword topics, even less out-of-the-box-thinking research, and so on ad infinitum. I wouldn't be surprised at all.

18. biff on November 8, 2010 3:38 AM writes...

Interesting how similar peer review is to the jury system, warts and all.

19. Andy Pierce on November 8, 2010 6:32 PM writes...

"But what do you want us to do? Randomly spew out grants to random researchers based on nothing at all? Observe the lab for a prolonged period and provide money to the researchers who show the most skill?"

Kind of heretical, but for standard keep-your-lab-going renewable grants, how about paying for success achieved? It would be easy to come up with a list of things accomplished in the last defined number of years. Then the study section just has to figure out what it's worth in dollars, to be paid out over some future time interval. Pay for product instead of for promises and predictions.

20. srp on November 9, 2010 6:52 AM writes...

I think that Gates or Buffett should fund a foundation that would provide a "saving roll" (for you D&D fans) for proposals that have already been through the NIH or NSF mills. The key idea would be to fund projects that received a high variance in their ratings from the original government panels.

Stuff that everybody hates might be great but is probably not going to deliver anything. Stuff that everybody feels secure in rating high is likely to be incremental. The stuff that provokes disagreement seems likely to have a chance of being interesting whatever it finds out.
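
[Editor's note: srp's rule is concrete enough to sketch in code. Here is a minimal illustration in Python; the proposal names and panel scores are entirely made up, and this shows only the idea of "rescue the high-variance proposals," not any agency's actual data or process.]

```python
from statistics import mean, pvariance

# Hypothetical panel scores (lower = better, on an NIH-style 1-9 scale).
# All names and numbers are invented for illustration.
proposals = {
    "incremental follow-up": [2, 2, 3, 2, 3],  # panel agrees: solid, safe
    "long-shot mechanism":   [1, 6, 2, 7, 1],  # panel splits sharply
    "known dead end":        [8, 7, 8, 9, 8],  # panel agrees: weak
}

# srp's rule: prioritize the proposals whose original ratings disagreed
# the most, i.e. rank by the variance of each proposal's panel scores.
ranked = sorted(proposals.items(), key=lambda kv: pvariance(kv[1]), reverse=True)

for name, scores in ranked:
    print(f"{name}: mean {mean(scores):.1f}, variance {pvariance(scores):.1f}")
```

On these invented numbers, "long-shot mechanism" comes out on top: its average score is mediocre, but the disagreement among reviewers is exactly the signal srp proposes to act on.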

21. hn on November 9, 2010 2:24 PM writes...

Something simple that could be done is to give more grants but each for less money. This would give greater leeway to the inconsistencies of grant review. Creative science is not correlated with expensive science.

Also, a separate pool of funds could be set aside for early career (not necessarily 1st time) scientists. Then there would be an easy number to debate and target.

