Over at Ars Technica, here's an excellent look at the peer review process, which I last spoke about here. The author, Chris Lee, rightly points out that we ask it to do several different things, and it's not equally good at all of them.
His biggest problem is with the evaluation of research proposals for grants, and that has indeed been a problem for many years. Reviewing a paper, where you have to evaluate things that other people have done, can be hard enough. But evaluating what people hope to be able to do is much harder:
. . .Reviewers are asked to evaluate proposed methods, but, given that the authors themselves don't yet know if the methodology will work as described, how objective can they be? Unless the authors are totally incompetent and are proposing to use a method that is known not to work in the area they wish to use it, the reviewer cannot know what will happen.
As usual, there is no guarantee that the reviewer is more of an expert in the area than the authors. In fact, it's more often the case that they're not, so whose judgement should be trusted? There is just no way to tell a good researcher combined with incompetent peer review from an incompetent researcher and good peer review.
Reviewers are also asked to judge the significance of the proposed research. But wait—if peer review fails to consistently identify papers that are of significance when the results are in, what chance does it have of identifying significant contributions that haven't yet been made? Yeah, get out your dice. . .
And as he goes on to point out, the consequences of a poorly reviewed grant proposal are much worse than those of a botched paper review. These consequences are both immediate (for the researcher involved) and systemic:
There is also a more insidious problem associated with peer review of grant applications. The evaluation of grant proposals is a reward-and-punishment system, but it doesn't systematically reward good proposals or good researchers, and it doesn't systematically reject bad proposals or punish poor researchers. Despite this, researchers are wont to treat it as if it was systematic and invest more time seeking the rewards than they do in performing active research, which is ostensibly where their talents lie.
Effectively, in trying to be objective and screen for the very best proposals, we waste a lot of time and fail to screen out bad proposals. This leads to a lot of cynicism and, although I am often accused of being cynical, I don't believe it is a healthy attitude in research.
Fortunately, I've never had to deal with this process myself, having spent my scientific career in industry, but we have our own problems with figuring out which projects to advance and why. Anyone who's interested in peer review, though, should know about the issues that Lee is raising. Well worth a read.