Corante

About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs:
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business:
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events:
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres:
Uncouth Reflections
Arts and Letters Daily

In the Pipeline


April 28, 2011

Just A Few More Months' Work, That's All I'm Asking Here

Posted by Derek

Here's the cry of someone who's been jerked around by too many journal referee reports. Hidde Ploegh of the Whitehead Institute has a piece in Nature News called "End the Wasteful Tyranny of Reviewer Experiments". That could just possibly have been phrased more diplomatically, but I know what he's talking about.

Too often, reviewers try to show that they're fulfilling their responsibilities by requesting additional work from the authors of a paper under consideration. This happens more and more as you move up the hierarchy of journals, as both the novelty of the work and the incentive to publish it increase. No one's going to exert themselves too much to get their paper into Acta Retracta, even if some rogue reviewer were to try it, but Science and Nature (among others) can really make you perform tricks.

What this reminds me of is a story about Steve Wozniak, of Apple fame. When he was in college, his dorm had an old TV down in the lobby with a rabbit-ear antenna, which had to be messed with constantly to get a good picture. Woz apparently built a gizmo to fuzz out the reception, and used to sit inconspicuously in the back of the room, trying to see what sort of crazy positions he could twist people into as they held the antenna in what was seemingly the One Perfect Spot.

The referee equivalent is Just One More Experiment, and it's not always justified:

Submit a biomedical-research paper to Nature or other high-profile journals, and a common recommendation often comes back from referees: perform additional experiments. Although such extra work can provide important support for the results being presented, all too frequently it represents instead an entirely new phase of the project, or does not extend the reach of what is reported. It is often expensive and unnecessary, and slows the pace of research to a crawl. Among scientists in my field, there is growing concern that escalating demands by reviewers for the top journals, combined with the increasingly managerial role assigned to editors, now represents a serious flaw in the process of peer review.

Ploegh's point is that too many referees aren't reviewing the paper that they have; they're suggesting a whole new project or phase of research. And some of these wouldn't even affect the results and conclusions of the paper under review very much - they're just "Gosh, wouldn't it be nice if you would also. . ." experiments. The benefit for science, he says, is nowhere near commensurate with the disadvantage of holding up publication, messing with the career prospects of younger investigators, spending extra time and grant money, and so on. His suggestion?

The scientific community should rethink how manuscripts are reviewed. Referees should be instructed to assess the work in front of them, not what they think should be the next phase of the project. They should provide unimpeachable arguments that, where appropriate, demonstrate the study's lack of novelty or probable impact, or that lay bare flawed logic or unwarranted conclusions.

He also suggests that reviewers provide an estimate of the time and cost involved for their suggested experiments, and compare that to their purported benefits. I wouldn't mind seeing editors crack down on this some, either. I've had useful feedback on my own manuscripts that identified things that really did need to be shored up. But submitting a paper should not routinely be an exercise in having other people tell you what experiments you should run before you can publish your work. When there really is a gap or flaw, naturally, it's appropriate to ask for more, but I agree with Ploegh that a reviewer needs to make a case for such things, rather than just asking for them as a matter of routine.

Ploegh has a larger historical point to make as well. Looking back at the earlier days of, say, molecular biology, you get the impression that if someone sent in an interesting paper that seemed reasonable, it would just get published, without all these trips back to the bench. Somehow, the mechanics of science (and especially scientific publication) have changed. Has it been for the better? Or would we all be better off letting more things through as they stand, if they're clearly presented and logically consistent?

I wonder if journals might consider publishing in this style, while then adding an editorial note about what further experiments had been suggested by reviewers. This would fulfill the function of pointing out potential weak points or areas for further exploration, but without delaying things so much. I don't see this happening - but why not, exactly?

Comments (17) + TrackBacks (0) | Category: The Scientific Literature


COMMENTS

1. koolkat on April 28, 2011 7:31 AM writes...

Dropped a close italics somewhere

2. David P on April 28, 2011 8:17 AM writes...

Can't the author argue to the editor that the extra work is beyond the scope of the current paper? After all, the reviewer comments are not commandments; it is still down to the editor whether it goes in the journal or not.

3. John Spevacek on April 28, 2011 9:50 AM writes...

Glad to see that I'm already reviewing at the suggested level. Sad to say that I haven't made it up to the Science and Nature level yet - still stuck at the RSC group.

4. Hap on April 28, 2011 9:56 AM writes...

I guess since I haven't published and generally see/pay attention to the more egregious mistakes, I don't see too many reviewer-requested experiments being the problem. For example, it seems like both the "natural" nevirapine paper and the arsenic-based life paper could have either used more experiments or a paring of their claims.

This might be placed under the heading "reviewer competence", but since there's not much benefit in improving reviewing to the reviewer, it's hard to see how it's going to happen, exactly.

5. Anonymous on April 28, 2011 10:31 AM writes...

I would sometimes say that a flat-out rejection is better than "can you fix this POS with X, Y, Z."

So many times people submit work that is really subpar, but people don't have the guts to call it like it is.

The submitting people know this, but are betting that they will get asked to do 1-2 things and boom publication.

You can game the system both ways.

6. mdw on April 28, 2011 10:32 AM writes...

Much of this stems from the "I have to say something" syndrome that crops up when a paper is basically pretty thorough and well-done. It's ok to say that a manuscript is just fine, thank you.

I spike probably 80% of manuscripts I review, often savagely, and some of the rest require clarification or small modifications, but occasionally I'll just tell the editor to go ahead and publish. Is that so wrong?

7. Sundowner on April 28, 2011 11:32 AM writes...

In my humble opinion, my work as a referee is not to suggest improvements or experiments to be done. My work is to write down a report on why the manuscript should (or should not) be published, and in the negative case to give the reasons.

So I agree with the people above. If the manuscript is OK, the report is quite short: "It is OK, correct some minor mistakes, go ahead." If the manuscript is not OK, usually because it is a worthless piece of work, I say so.

However, the editor has the last word... and I wonder how that decision is made. The old Space Shuttles had five computers to check subsystems; in the case of a 2-2 result, the fifth computer had the veto. But seeing what is published these days (especially in JACS, JOC and other supposedly top-level journals), I think the reasons are a) political, or b) they need to fill the space.

Let's be honest, there are by far too many chemistry journals. And we could live without most of the papers being published.

8. pete on April 28, 2011 12:05 PM writes...

I think it comes down to a reviewer judging the claims of the paper vs. the complexity of the experimental system. So if a paper is very good but has claims that overstep the data, then it's a question of: 1) How much of a PITA is it for these folks to go back and do the extra experiment(s) that will support the claims? Alternatively: 2) How much of a PITA is it for these guys to tone down their claims? With choice (1) it could mean risking major delay/$$$/etc. as argued by Ploegh, especially if the data derives from a system that's technically complicated or 'limited'. With choice (2), the authors may have to tone down claims to the point where the paper's SEX-quotient falls below the threshold for publication in my glossy journal. What's a reviewer to do? Would it be more helpful to the author to just say "NO, COME BACK TOMORROW WITH MORE IN YR. BASKET"?

9. Karl Hallowell on April 29, 2011 1:02 AM writes...

I'm not sure I get the problem, perhaps being from a different field (math side of mathematical physics). I published a few times and just haven't experienced this problem at a level to really complain about.

My limited experience is that if you get a reviewer with excessive demands, you can just do the work (benefit exceeding the cost perhaps, such as can happen for a prestigious journal), work around the reviewer (sometimes even get a replacement reviewer), or go to a journal that doesn't have such demands (which sometimes can be the same journal as before, but with a different drawing of reviewers to trouble you).

Going on: if there really were some staggering demand for excessive work, I'd first argue that the proposed research, ingenious though it is, would fall outside the scope of the current paper. Then I'd try to get a different reviewer, based on the claim that while the reviewer has great, wonderful ideas, their proposed revisions are unfortunately irrelevant to the content of the paper - could I get another opinion? At that point, if I can't get around the reviewer, I guess the choice is to suck it up or move on.

Journals have to deal with the possibility of human motivations interfering with the supposed impartiality of the reviewer too. Someone asking for more work could be diligently doing their job on an inadequate paper. Or they could be delaying work so that their paper gets published first or the author's work (perhaps of the whole research group that the author is attached to) is disrupted. That's a nice feature of the Arxiv preprint server for physics, math, and CS research. It easily and fairly establishes temporal claims which helps reduce some of the nastier motivations for sabotaging fellow researchers.

10. MoMo on April 29, 2011 1:00 PM writes...

The problem my F. M. Hallowell is that us chemists have to go back into the lab and do experiments with cancer-causing solvents and reagents because some Big Head Ego Scientists thinks his ideas are sacrosanct.

You mathemeticians only suffer from paper cuts and dirty hands from No. 2 pencils

11. Chris Elliott on April 29, 2011 2:29 PM writes...

I like this report in Nature, but Peter Lawrence puts it even more strongly: The Heart of Research is Sick (http://www.lab-times.org/labtimes/issues/lt2011/lt02/lt_2011_02_24_31.pdf). When I started as a PhD student, Nature or PNAS were full of short papers explaining some new discovery briefly with a typical figure, and the same material was published in full elsewhere for the specialist audience. Now we hope to publish in Nature with whole rafts of supplementary material and scarcely enough words to explain our data. Power to J Neurosci, for saying it will no longer take supplementary material.

chris

12. Karl Hallowell on April 30, 2011 10:08 AM writes...

The problem my F. M. Hallowell

MoMo, isn't the honorific "F. M." an insulting reference to some part of a young girl's body? Maybe that's not appropriate for a blog where people don't fling monkey poo at each other?

...is that us chemists have to go back into the lab and do experiments with cancer-causing solvents and reagents because some Big Head Ego Scientists thinks his ideas are sacrosanct.

I pointed out several ways that the Big Head Ego could be mitigated or bypassed. They should work pretty much anywhere. And if you can't, then you still have the choice of doing the work or going to another journal.

You mathemeticians only suffer from paper cuts and dirty hands from No. 2 pencils

Yet it is one of the hardest things you can do because the human brain isn't optimized for most of mathematics. It's rather easy to get into situations that have no semblance to anything in the physical world and hence, for which we have poor intuition at best.

Don't get me wrong, I'd rather have the paper cut problem than grind through repetitive, painstaking work. As I see it, any existing problem or unresolved question, whether mathematical or chemical, is hard because otherwise it wouldn't be there.

13. MoMo on April 30, 2011 6:42 PM writes...

Excuse me Hallowell, as I see we are one and the same. You are a kindred spirit in another realm. Be progressive H, I dig it.

15. Hasufin on May 2, 2011 10:06 AM writes...

I think it's a disease of humanity, that any documentation process must be expanded and embellished until it is rendered useless. Then, it is either replaced or an entire stage is broken off.

In telecommunications, now, the process for creating a proper standard is so cumbersome that often the actual standards come out well after the relevant technology is obsolete; today, a "proposed" standard is usually treated as if it were a complete standard.

In government, a great many contracts undergo the expedited process (which takes a mere 4-8 months) rather than the full fulfillment process which can take years.

It seems as if this is occurring here, with the referee taking on a greater role but increasing the time to publish, and in due time a workaround will be found to reduce this time yet again... and the cycle shall repeat.

17. trrll on May 2, 2011 4:21 PM writes...

I recently received a review from a moderately high-profile journal that basically boils down to, "I'm not interested in the paper that you actually wrote, but I might be interested in what you come up with if you invest another year or two into following it up." For a high-profile journal, I suppose that this is defensible; at least the reviewer is taking the time to let you know what he thinks would be exciting enough for that particular journal (although I don't think that this particular review gave me any ideas that I hadn't thought of myself). But I've gotten similar reviews from middle-of-the-pack journals.

It does sometimes seem like reviewers ask for more stuff because they need to feel like they are doing their jobs and are too embarrassed to send in a review without substantive criticisms. It is a bit of a cheap shot, since no matter how much has been done or how strong the conclusions, it is nearly always possible to think of something more that could be added. And there are often practical constraints, as when the follow-up work was done by a different experimenter and you don't want to push either of them into the "second with an asterisk" position. On the other hand, it is perfectly legitimate--and helpful--if the reviewer is saying, "I don't quite believe your conclusions, but here is an experiment that might convince me."

It's hard for me to think of a rule that would exclude the former case but allow the latter.
