About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly; he is also on Twitter as Dereklowe.


In the Pipeline


December 9, 2010

So What's Going On With Peer Review, Anyway?


Posted by Derek

I have a larger comment, sparked by the controversy over the NASA arsenic-bacteria paper in Science. But it's not just about that one. It's about the "reactome" paper (also in Science, and now retracted), the hexacyclinol synthesis published in Angew. Chem., and others. There have been, I think it's fair to say, a number of very arguable papers published in very high-profile journals in recent years. What's going on?

I want to make it clear that I'm not upset about journals publishing "out-there" work. In fact, I wish that there were a bit more of it. But at the same time, if you're going to go out there on the edge, you'd better have some solid stuff to report when you come back and write up the paper. Extraordinary claims really do require extraordinary evidence, and that's where things seem to be breaking down.

Peer review is supposed to catch these things. That reactome paper had chemists rolling their eyes as soon as they saw the synthetic schemes in it, and asking if anyone at the journal had thought to call someone who knew organic chemistry during the review process. This latest arsenic paper has other specialists upset, for different reasons (and, to be sure, for reasons that don't require much scientific specialization at all, as detailed in my post after I'd given the paper a close reading). But that hexacyclinol paper appeared in a chemistry journal, and had (one assumes!) been reviewed by competent chemists. How, then, could it have been published to immediate howls of derision about the quality of the evidence in it?

I also want to make clear that I'm not talking about some of the other categories of bad papers, such as the things that are probably true, but of little interest to anyone. And in the probably-not-true category, lower-ranking journals let not-so-good stuff through pretty often. I've been hard on Bioorganic & Medicinal Chemistry Letters here before, among other journals, for publishing things that appear to have been incompetently reviewed. But these journals aren't Science or Nature, and the whole point of prestigious journals is that the things that appear in them are supposed to be important, and they're also supposed to be thoroughly vetted.

Is it the push to land the big papers that will make a big splash? Does that cause people in the editorial offices to bend the rules a bit? The official answer from every journal editor that's ever lived to such questions has been "Of course not!", but you have to wonder. Is it a problem with how they're assigning papers for review - who they go to, or how seriously the reviews are taken when they come back? I really don't know. I just know that we seem to be seeing a lot of embarrassing stuff in the literature these days. It's not supposed to work that way.
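As a back-of-envelope way to think about how flawed papers can survive review: if each reviewer independently catches a given flaw with some probability, and the editor rejects whenever any reviewer flags it, the chance the paper slips through falls off quickly with the number of careful reviewers. All the numbers and the function below are my own illustration, not anything from actual journal data:

```python
# Illustrative sketch only: assumes reviewers act independently, and that
# the editor rejects if any one reviewer catches the flaw. Real review
# processes are messier, and these probabilities are invented.

def slip_through(p_catch: float, n_reviewers: int) -> float:
    """Probability that every one of n independent reviewers misses the flaw."""
    return (1.0 - p_catch) ** n_reviewers

for p in (0.5, 0.7, 0.9):
    for n in (1, 2, 3):
        print(f"p_catch={p:.1f}, reviewers={n}: "
              f"{slip_through(p, n):.1%} of flawed papers get through")
```

The point of the sketch is that the outcome is far more sensitive to each reviewer's diligence (p_catch) than to adding more hurried reviewers, which is consistent with the "rush job on the cheap" complaint in the comments below.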

Comments (40) + TrackBacks (0) | Category: The Scientific Literature


1. CR on December 9, 2010 12:38 PM writes...

You are correct to be hard on BMCL (or any journal, for that matter) when they do something ridiculous. It doesn't matter whether you are talking about a paper in Science or Nature or BMCL - the reviewers have a responsibility to actually review the article, and the editors have a responsibility to make sure everything is fine.

Permalink to Comment

2. Hap on December 9, 2010 1:04 PM writes...

Is it possible/likely that there aren't more howlers in top-flight journals, but rather that they are caught out more publicly? Lots of chemistry blogs started because the Sames-Sezen incident was dealt with so poorly and secretly, and apparently couldn't be discussed reasonably or openly. Could other bad papers have been published, but the inability to connect sets of people with appropriate skepticism and knowledge limited their exposure to an audience, and thus prevented their incompetence/dishonesty from being known?

(The Cornforth flaying of Chatterjee's alkaloid synthesis, and Hudlicky, Paquette, et al. flaying him over triquinanes, are sort of counterexamples, particularly when the prof who taught one of my grad classes called the disappearance/reappearance of a methyl group "a Chatterjee demethylation/methylation.")

Permalink to Comment

3. metalate on December 9, 2010 1:06 PM writes...

It don't matter who's made mistakes with peer review, Angewandte is still the king.
A new retraction:
DOI: 10.1002/anie.201090154

These folks already had a correction admitting that some of the structures are bogus:
DOI: 10.1002/anie.201090014
It took another 11 months to figure out that everything is wrong.

Permalink to Comment

4. passionlessDrone on December 9, 2010 1:12 PM writes...

Hello friends -

I'd love to see a data visualization on the ratio of news conferences / media embargoes versus questionable findings and/or retractions.

The 'longevity genes' study in Science was one that really sticks in my mind in this regard.

- pD

Permalink to Comment

5. reader on December 9, 2010 1:35 PM writes...

Do you think the process of peer review is influenced by conflicts of interest and personal connections between scientists? To what extent could this lead to retraction of reports in the long term?

Permalink to Comment

6. reader on December 9, 2010 1:52 PM writes...

Well, it is probably a side effect of peer review that we should live with till someone invents a better system.

Permalink to Comment

7. p on December 9, 2010 1:57 PM writes...

A lot is because reviewers are really busy. Reviewers should really dig deep on every paper but it's working for free, on a deadline. You shouldn't expect much quality when you do a rush job on the cheap.

It would, I think, be better to pretty much publish anything the editors think looks reasonable and then open up a comments dialog on each paper. I think a lot of the "shock" comes from people who have naive ideas of what peer-review really means. It involves people not putting in much effort, having personal or professional conflicts with the authors or simply not being close enough to the area to do a good review.

Permalink to Comment

8. Hap on December 9, 2010 1:59 PM writes...

But you need to know what's wrong if you intend to make it better, no? Bemoaning the problems of peer review doesn't mean that you want to get rid of it - you may want to fix them instead. Only when the problems are inherent to peer review (and then you have to decide whether the problems are tolerable) does it become a choice between peer review and something else (that doesn't exist yet).

Permalink to Comment

9. Curt F. on December 9, 2010 2:16 PM writes...

The problem with peer review is that the reviewers are anonymous and not accountable to anyone. Thus, if we're holding journals' feet to the fire, the proper headline in my view would be "what's going on with journal editing, anyway?"

Journal editors pick peer reviewers, and are responsible for evaluating the quality of reviews and making final decisions on publication. Not reviewers.

I'm sure many authors have had the experience of being disappointed by editors' propensity to blindly follow what reviewers say, even when the quality of the reviews is obviously poor and easily refuted. Obviously as authors we're biased to the cases where editors refuse to disregard poor reviews, but I bet the case of editors putting too much faith in obviously poor positive reviews also happens a lot.

In short, the editors are the ones who let in all these questionable papers, not the reviewers, so the egg is on their faces.

Permalink to Comment

10. CR on December 9, 2010 2:48 PM writes...

@p:
"A lot is because reviewers are really busy. Reviewers should really dig deep on every paper but it's working for free, on a deadline. You shouldn't expect much quality when you do a rush job on the cheap."

Absolutely, unequivocally no excuse. Everyone is busy...if you are too busy to review, then decline. Scientists rely on the peer-review system and if you expect to publish then you have to reciprocate.

We already have the system you describe - it's called publish and let the blog world form an opinion. Completely anonymous responses do no one any good.

Permalink to Comment

11. CMCguy on December 9, 2010 2:58 PM writes...

I think #7 p is on track: a significant flaw in peer review is that it relies on "volunteers" who are often either too busy and/or unwilling to admit they are not really knowledgeable enough to adequately do what should be done. There may be COI/buddyism that complicates the process, although it would seem those factors cancel each other out. Peer review now seems to play more of a cursory role than a system to truly vet the value and challenge the conclusions versus the data offered. Maybe journals and/or NIH could hire a cadre of post-docs as a supplement to the process, who would be able to focus on doing a more thorough review/editing of papers before they are sent out to the final reviewers.

I still pin responsibility on authors, who should conduct greater diligence up front by discussing with colleagues in the area, plus involving screening from relevant disciplines, before they are willing to submit for publication. The lack of trust inherent in present circumstances inhibits such an approach, as no one wants to share anything until it is published.

Permalink to Comment

12. reader on December 9, 2010 3:17 PM writes...

I agree with 11. Maybe peer review should become a professional career, a paid job. Especially since there are thousands of highly skilled, underpaid postdocs who are experts in their research areas and unable to secure real jobs these days to raise families. Maybe journals should pay them to get the job done professionally. The cost of publishing an article is quite high, so there should be funds to pay for the review process.

Permalink to Comment

13. MTK on December 9, 2010 3:29 PM writes...

Hey, I'll admit to writing a bad review or two in my life. Generally out of haste more than anything else and it gets back to the whole non-pay scheme.

I do it because it's a responsibility to the scientific community. At the same time I gotta put food on the table, so I'll try to be as "efficient" as possible. On several occasions, after submitting the review, I think back and say "Oh, I misunderstood that" or "Why didn't I suggest that experiment?" It bugs me to no end, but at the same time I guess that's why there should be multiple reviews and that's why editors exist.

However, let's say reviewers were paid and it were explicit that good reviewers are more likely to get articles to review in the future. In that case, I'd probably make sure I did a better job. Right now, reviewing papers is more like jury duty: something you feel compelled to do, with very little tangible reward for doing a good job vs. a bad job.

Permalink to Comment

14. partial agonist on December 9, 2010 3:58 PM writes...

I review a lot of papers for BMCL and JACS, read every one 2-3 times, check some of the references, poke around the literature, and provide detailed comments. I spend at least an hour on even a short communication.

As far as I know, nobody else does this and people tell me I'm crazy.

Permalink to Comment

15. p on December 9, 2010 4:10 PM writes...

I agree that reviewing is a duty and professional obligation and should be taken seriously and that there really isn't an excuse for doing a sloppy job. But there are reasons and they're pretty much inescapable. I try to do like partial agonist (and usually do) but I'll confess to having, at times, come up against a wall. I have an exam to write, my own paper to get out the door and a grant to submit. Not to mention two theses that need revising and a university committee report to the person most directly involved in judging my next raise.

Where on this list do you suppose reviewing a manuscript for free will fall? I have both declined to review and sent less than thorough reviews. I get yelled at by the editor for the former, thanked for the latter.

Like I said earlier, in what other field would you consider a rush job done for free to result in excellent outcomes?

I don't really know how you'd solve this. I don't see any way you could pay people enough to make this a lucrative enough gig. They could probably expand the number of reviewers - my sense is "higher-tier" journals will only ask for reviews from folks who publish in their journals. Those folks are, on average, going to be busier than most.

Finally, I'm not sure what the big fuss is. A crappy paper got published and pretty much immediately outed as a crappy paper. The only problem is if you hope to be able to judge a paper by the journal it's in. If you want to think, anytime you see a Science paper (or JACS or Angewandte, etc.) that it must be a "good" paper then the arsenic saga is bad news. But if that is how you feel, you're already naive and/or deluded. No one should ever accept a paper's worth until they've had a thorough read and "reviewed" it themselves.

Permalink to Comment

16. CR on December 9, 2010 4:20 PM writes...

@15, p:

" own paper to get out the door..."

And what is your expectation for the review of that paper? Do you also expect a crappy review that might in fact cause the paper to be declined because someone didn't put any effort into it?

I understand the "I'm busy" excuse, but for someone that also wants to publish, then one must make time. Otherwise, stop publishing.

Permalink to Comment

17. Hap on December 9, 2010 4:31 PM writes...

No one has the time to read all of the literature - thus they depend on journals (and their editors) to make judgments on what's important and what's not. If a paper is good and important, people in the future will likely know, but they won't necessarily know if a paper generates results that aren't consistent or reproducible. If a journal makes enough bad judgments on what it puts in, people not only don't trust what it publishes but also wonder about what it didn't publish (and which articles that weren't hyped are not so good). Eventually, people stop caring what it publishes, which means that people have to figure out a new pecking order - not the end of the world, but uncertain and time-consuming.

If journals are unable to sort good research from sloppy, bad or undersupported research, then you have a problem, since, as said, no one can read everything. Throwing everything into a pile and hoping it gets filed accordingly is like lowering taxes and hoping the national debt will go away, somehow.

Permalink to Comment

18. MTK on December 9, 2010 4:39 PM writes...


It's not that you don't make time, you do. You just don't do the best job you could all the time.

The other thing is that the whole journal publishing business is a scam. What other publishing business do you know where the content is obtained for free, the review of said free content is free, and most actual publishing is e-publishing, meaning minimal mailing or printing costs?

There's a reason why there are so many journals out there being pushed by all the publishers. I assure you it's not because of all the publication worthy research being conducted.

The quality of papers submitted, reviewed, and eventually published would certainly go up if the money saved by e-publishing would be used to pay reviewers rather than create more journals.

Permalink to Comment

19. CR on December 9, 2010 4:48 PM writes...

Not everyone, by their own admission, makes the time. The "not doing the best job all the time" is understandable and unavoidable. I'm not arguing that that happens. It's the argument that one doesn't make the time, or it's such a low priority because of all the other responsibilities - yet they still expect to publish.

It may not be fun, and it sometimes is a chore, but if one wants to publish and have their work reviewed fairly, then it has to be taken seriously on both ends.

I don't fully agree with the "paid reviewer" argument. One could easily envision a scenario where there is pressure from the journal to the paid reviewers to "accept" a certain number of manuscripts to make a quota for subscription fees. Here, at least, the reviewers have no ties to the journal and can be unbiased.

Permalink to Comment

20. johnnyboy on December 9, 2010 5:04 PM writes...

Good discussion on a fraught topic. There's one thing I'd like to add: I think reviews would be much more fair if done without knowing the paper's authors. In small fields, papers can be sent for review to people from 'rival' labs, which may have a vested interest in being overly critical. On the other hand, papers authored by the most prominent researchers in a field may get a free pass from a reviewer who does not consider himself qualified enough to question the high and mighty. I've never understood why reviewers were kept anonymous, but not the authors.

Permalink to Comment

21. Curt F. on December 9, 2010 5:12 PM writes...

Since reviewers are anonymous, they are not accountable to anyone. Maybe we can add some accountability somehow to the process?

1. The editors know who the reviewers are. Maybe editors should get in the habit of sending personalized notes of appreciation for well-done reviews, and make sure to cc the relevant dept. chair or head of the tenure committee. Likewise, tenure committees or dept. chairs should consider these letters when giving out promotions, etc.

2. Maybe journals should make all the reviews they receive, whether for accepted papers or not, available to the public. The reviewer's identity could stay anonymous. That way, editors would be incentivized to solicit reviews from even-tempered, reasonable people and reduce their reliance on paper-thin emotion-laden reviews, whether positive or negative.

3. Maybe we should put a lifetime quota on how many papers an individual scientist is allowed to publish. This isn't my idea, it's Nature Publishing Group's:

Permalink to Comment

22. Curt F. on December 9, 2010 5:33 PM writes...

One more crazy idea:

Universities should found journals and guarantee that submissions would be reviewed by their faculty. The Harvard Journal of Chemistry, would have manuscripts reviewed by Harvard chemistry profs, for example, and published articles would have the imprimatur of the Harvard brand -- all manuscripts therein would be "Harvard approved". Submissions to this journal from within Harvard would be disallowed.

University endowments would fund the journal and the articles would be freely available to the public. What a worthy way for Harvard to advance their stated mission to "create knowledge" and "open the minds of students to that knowledge"! Success for the journal means that Harvard's value as a brand increases, and because of the institutional interest in the journal's high quality, employees of Harvard (i.e. the faculty) would be incentivized to work hard at their reviews.

Plus, if lots of schools started doing this their libraries could save tons of money on subscription fees.

This idea isn't too crazy, because it's essentially the model used in legal academe, except that the reviewers at law reviews are students, not faculty.

Permalink to Comment

23. reader on December 9, 2010 6:22 PM writes...

I agree with number 20. Maybe the author and institute names on submitted papers should become anonymous. I have seen situations where the PI of a rival lab intentionally delayed certain papers to give his student a chance to catch up. A situation like this could be partially resolved if the names of the authors and institutes remained anonymous.

Permalink to Comment

24. Steve on December 9, 2010 6:33 PM writes...

I know as a reviewer I am reviewing 10 x more papers than I submit (publish maybe 10 a year, review 60 or more), and as soon as I submit a paper to a new journal I will get asked to review papers from them. I try and do the best I can but there are limits. In one particularly bad case (I suspected fraud but had to really make sure) I lost a whole day on the paper.

Permalink to Comment

25. p on December 9, 2010 8:11 PM writes...

"I understand the "I'm busy" excuse, but for someone that also wants to publish, then one must make time. Otherwise, stop publishing."

Fair enough. But I'm not trying to excuse or defend any "bad" reviews I may have submitted in the past (I don't consciously know of a poor effort, but I'm quite aware that I've often reviewed papers when I've been distracted and hurried). I'm trying to address why lots of bad reviews turn up and what might be done to correct it. It's fine that you always do an exceptional job reviewing - you should be commended for that; clearly, most reviewers aren't so committed or disciplined. Simply asking people to be better rarely solves anything.

Permalink to Comment

26. Surprised on December 9, 2010 8:17 PM writes...

#24 - Steve,

10 a year! I don't see how you have time for In the Pipeline!

Permalink to Comment

27. Annette Bak on December 9, 2010 8:33 PM writes...

The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability — not the validity — of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.

Permalink to Comment

28. cliffintokyo on December 9, 2010 10:04 PM writes...

Probably, but suggest something better?

Double blind peer review is a great idea.

Another idea: Get many more chemists in industry involved. Make sure pharma companies agree it is a professional duty.
I used to review before I moved away from active science (and RSC started sending me only obscure papers by *foreigners* after I rejected a big-name paper as too *ordinary* for ChemComm).

My 2 cents
Organic chemistry should be straightforward, but is time-consuming:
1) Check the MS, H-D/C/2D-nmr, ir, UV, mp, HPLC, ElAn, etc data for all claimed structures, and compare to similar established lit. compounds.
2) Check exptl procs and solvents, reagents, etc for obvious boobies
3) Check key cited lit refs, and for plagiarism
4) Reject totally any paper that does not measure up. (Caveat: can probably afford time to review only 2 papers/year on this scrupulous basis)
5) Reject outright any paper not written in good English; not worth any effort. If the authors don't care enough about their science to get help to write it up properly, they should receive no consideration whatsoever.

There, that should eliminate 90% of already published org chem papers!

Don't really know how to handle characterisation issues for bioorg/biotech/protein chemistry...

Permalink to Comment

29. sam on December 10, 2010 1:32 AM writes...

I kinda doubt that peer review is any worse than its ever been. There's just more internet conversation about the bad papers now!

Here's another example of a paper that shouldn't have made it past the reviewers:

Permalink to Comment

30. Spiny Norman on December 10, 2010 3:55 AM writes...

A huge factor here is that the people who edit Science, Nature and Cell are — at best — former scientists, in most cases ones who have never themselves run a research group or had to take senior-author (as opposed to first-author) responsibility for the contents of a manuscript. Put bluntly, these people in many cases know a hell of a lot less about what they're doing than they think that they do, and yet they are in positions of tremendous power in the scientific community.

Permalink to Comment

31. carras on December 10, 2010 6:12 AM writes...

I am not in the same category, by far, as Science, Nature, or, in my own field, Analytical Chemistry, but I've done some reviewing for serious journals. Last time, I was somewhat surprised to hear from the editor that a paper I had marked as "needs major revision" was going to be published anyway (apparently without any revision). It wasn't outright rubbish, but it sure needed to explain and clarify some of its procedures, findings and claims. And I sure hope someone did rewrite the whole thing; the English usage and spelling were pretty much, er…, "unconventional".

Permalink to Comment

32. Crystallographer on December 10, 2010 6:17 AM writes...

One of the potential reasons for a certain amount of bias in reviews, in my opinion, is the fact that when you submit a paper to quite a lot of journals, you are asked to suggest reviewers! As such, a PI will often end up suggesting their "friends" within the field. If all of those suggested get on with the PI, then even if not all of them are selected to review, a certain amount of bias can come in. Why has this come about? In part, I suspect, due to the fact that so many people don't have enough time to review, so even if they accept, they can still end up putting things off for far longer than they ought. By way of example, a short communication that I published in 2007 took two and a half months to get any response on, as the journal was waiting on one reviewer to sort themselves out and get something in. And by my guess, that reviewer, when they did provide something, just dashed it off in a hurry - it was all of seven lines long.

#28 - your point 5. While on the face of it bouncing papers with poor English would certainly improve things, where do you draw the line? If it is with grammatically correct English, you'd likely end up bouncing 99+% of papers, even those written by native English speakers (even if you allowed the general use of the passive voice), simply because people very often don't write in a technically grammatically correct manner - I know I don't! The other issue it would cause is that a lot of the papers from groups in non-English-speaking countries would simply go into own-language journals (a lot more of which would likely show up). If even a top paper from a top group is getting bounced for a little poor English, why would they bother to take the extra time and effort, when they can publish in a more local journal, which will likely be reasonably well ranked (after a while), especially locally, and easily readable to their collaborators and colleagues?

#29 - while I don't know specifically about the journals you mention, the point does not hold for a lot of other journals. Most of the journals I have published in (as far as I know!) have current academics as Editors/Senior Editors. Perhaps the time commitment necessary for stuff like this is a part of the reduction in available time for reviewing papers...?

Permalink to Comment

33. Lester Freamon on December 10, 2010 9:33 AM writes...

Sometimes, I think the problem is related to the over-specialization of disciplines we're seeing today. Part of what I think happened during peer review of the arsenic paper is that they sent it to some microbiologists who don't know much about trace metal contamination, synchrotrons, or analytical chemistry, who said "hmm, seems unlikely, but it grows in +As/-P, so if the chemistry I don't understand holds up, it's okay by me"; then they sent it to some synchrotron experts (the only group to support the paper during this controversy) who said, "yep, there is arsenic in an ester form, there is an enrichment in +As/-P over -As/+P..." and allowed it to pass even though they couldn't evaluate the actual biochemistry of what is going on.

We need more generalists instead of specialists - we need people who can evaluate ALL the data in a paper as a whole and see if it makes a plausible story. Figure 1 and figure 3 are intricately connected; you can't simply divide up a paper like that.

Permalink to Comment

34. jtd7 on December 10, 2010 10:27 AM writes...

I think there are additional problems with research papers in high-profile journals. Usually they do not present a complete piece of work. To raise the profile of the paper, it has to hit too many points to support any one of them adequately. The journals conceal this by banishing the Experimental Methods to “Supplemental Information Available On-Line.” How many readers trouble to look them up? I have subscribed to Science for over thirty years (since grad school). I used to read it to get an overview of research outside my area of specialization (protein biophysics, biochemistry and cell biology), and to learn about novel experimental methods. Nowadays I see an interesting result, ask myself “How did they do that?” and turn to the footnotes only to see “Supplemental Information Available On-Line.” Usually I read a hard copy, because that’s when I’m relaxed and receptive to new ideas. Rarely do I get up and turn on the computer to learn more. If I do so, often I find that the Supplemental Information is sketchy and inadequate.

Do these problems affect the quality of peer-review in a high-profile journal? I imagine they might. If the reviewer knows that the manuscript is essentially incomplete anyway, he or she might let questionable points slide.

Permalink to Comment

35. Tok on December 10, 2010 12:12 PM writes...

The assumption here seems to be that peer review is broken. Is it? What is the failure rate? What fraction of Science or Nature papers are bad? What kind of accuracy should we expect?

Permalink to Comment

36. metaphysician on December 11, 2010 10:20 AM writes...


Is that necessarily feasible, though? If the work requires an expert who has dedicated years of effort into a single specialized field, then you need a specialist.

Permalink to Comment

37. cliffintokyo on December 13, 2010 4:24 AM writes...

#32 Crystallog.

Glad you agree that 90+% of papers should not have been published ;-)

English: I was not talking about occasional grammatical errors; I meant manuscripts submitted with many technical words misspelled, and/or illogical or incomprehensible sentences.
We can all agree on these, I think.

*Foreigners*: Feel free to publish in your own country/ own language; nobody will read your papers/ consider your grant proposals/ accept your patent applications/ etc.

Permalink to Comment

38. Donough on December 13, 2010 6:14 AM writes...

Before one can review something, one must ask whether one is reviewing the correct thing.

When I look at a paper (engineering field), I look at the abstract (specifically the lines that have value to digest), the pictures, maybe the discussion if the pictures look interesting, and the conclusion (in my case there are invariably pictures showing the performance of the particular invention in a mature field).

Do I need a big introduction arguing that this research is important for climate change, or describing the area of research? No. Do I need a detailed setup description? Again, no.

In essence, there is a great deal of padding in the papers submitted today (I suspect 20-50%), and I will actually read that material in perhaps one paper in ten.

So I think a refinement in the way papers are written is needed first. A paper should not be written as a story, so in essence there is no need for a beginning, middle and end (introduction, description/results, conclusions). In fact, I would tend to put the results first and describe the experiments later for papers dealing with a new invention in a known area (i.e., where the process for testing the new invention is well established, e.g. filtration).

Permalink to Comment

39. X-Files on December 13, 2010 1:26 PM writes...

As one of the contributors said, reviewing is an unpaid job. I personally review a lot of papers every year, and I try my best to be as thorough as possible. Some reviewers do not do a good job because they are too busy. Rather than telling the editors that they do not have time to provide the service, these reviewers simply say yes because they want to stay on the good side of the editors. Worst of all, some "reviewers" hand the manuscripts over to graduate students or postdoctoral workers to do the reviewing. These students or postdocs may not have sufficient chemical knowledge to pass fair judgement. I know for a fact that one of the academic staff in my department regularly does that. That may explain why mediocre or bad papers sometimes appear in good journals. Imagine that you hire a big-name professor as your consultant: would you be happy if the professional opinion were cast by his or her student instead?

Permalink to Comment

40. Jeremy on December 14, 2010 4:00 AM writes...

I think Derek's post hints at an important issue for 'big' journals. These are not 'journals of record' that publish middling results; they are magazines whose business plan depends on publishing eye-catching results that will attract readers (and citations). They are in some form of circulation war. So however careful they are (or think they are), the editors are bound to be biased towards manuscripts that are eye-catching, sometimes so eye-catching that they are in fact plain wrong. Qualms from referees will be ignored, and referee reports will be sought from senior people in the field who will be encouraged to look at the big picture (the eye-catching-ness) and not the boring details (the things that are wrong). But as some other commenters have pointed out, as long as an error gets caught, it does not matter too much. Though it would be interesting to try to work out how many high-profile papers that are wrong do not get caught, and why.

Permalink to Comment


