
In the Pipeline


March 29, 2012

Sloppy Science

Posted by Derek

Nature has a comment on the quality of recent publications in clinical oncology. And it's not a kind one:

Glenn Begley and Lee Ellis analyse the low number of cancer-research studies that have been converted into clinical success, and conclude that a major factor is the overall poor quality of published preclinical data. A warning sign, they say, should be the “shocking” number of research papers in the field for which the main findings could not be reproduced. To be clear, this is not fraud — and there can be legitimate technical reasons why basic research findings do not stand up in clinical work. But the overall impression the article leaves is of insufficient thoroughness in the way that too many researchers present their data.

The finding resonates with a growing sense of unease among specialist editors on this journal, and not just in the field of oncology. Across the life sciences, handling corrections that have arisen from avoidable errors in manuscripts has become an uncomfortable part of the publishing process.

I think that this problem has been with us for quite a while, and that there are a few factors making it more noticeable: more journals to publish in, for one thing, and increased publication pressure, for another. And the online availability of papers makes it easier to compare publications and to call them up quickly; things don't sit on the shelf in quite the way that they used to. But there's no doubt that a lot of putatively interesting results in the literature are not real. To go along with that link, the Nature article referred to in that commentary has some more data of its own:

Over the past decade, before pursuing a particular line of research, scientists. . .in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed 'landmark' studies. . . It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result.

Of course, the validation attempts may have failed because of technical differences or difficulties, despite efforts to ensure that this was not the case. Additional models were also used in the validation, because to drive a drug-development programme it is essential that findings are sufficiently robust and applicable beyond the one narrow experimental model that may have been enough for publication. To address these concerns, when findings could not be reproduced, an attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors' direction, occasionally even in the laboratory of the original investigator. These investigators were all competent, well-meaning scientists who truly wanted to make advances in cancer research.
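
A numerical aside of mine, not the authors': with only 53 papers in the sample, that "11%" comes with wide error bars. Here's a minimal Python sketch (the Wilson score interval written out by hand, purely for illustration) that makes the point:

    from math import sqrt

    def wilson_interval(successes, trials, z=1.96):
        """Wilson score interval for a binomial proportion (95% for z=1.96)."""
        p_hat = successes / trials
        denom = 1 + z**2 / trials
        center = (p_hat + z**2 / (2 * trials)) / denom
        half = (z / denom) * sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2))
        return center - half, center + half

    # The Amgen numbers quoted above: 6 of 53 landmark findings confirmed.
    low, high = wilson_interval(6, 53)
    print(f"replication rate: {6/53:.1%}")            # 11.3%
    print(f"95% interval: {low:.1%} to {high:.1%}")   # roughly 5% to 23%

So even taking the Amgen exercise at face value, the true rate could plausibly be anywhere from one in twenty to nearly one in four. Dismal either way.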

So what leads to these things not working out? Often, it's trying to run with a hypothesis, and taking things faster than they can be taken:

In studies for which findings could be reproduced, authors had paid close attention to controls, reagents, investigator bias and describing the complete data set. For results that could not be reproduced, however, data were not routinely analysed by investigators blinded to the experimental versus control groups. Investigators frequently presented the results of one experiment, such as a single Western-blot analysis. They sometimes said they presented specific experiments that supported their underlying hypothesis, but that were not reflective of the entire data set. . .

This can rise, on occasion, to the level of fraud, but it's not fraud if you're fooling yourself, too. Science is done by humans, and it's always going to have a fair amount of slop in it. The same issue of Nature, as fate would have it, has a good example of irreproducibility this week. Sanofi's PARP inhibitor iniparib wiped out in Phase III clinical trials not long ago, after having looked good in Phase II. It now looks as if the compound was (earlier reports notwithstanding) never much of a PARP1 inhibitor at all. (Since one of these papers is from Abbott, you can see that doubts had already arisen elsewhere in the industry.)

That's not the whole story with PARP - AstraZeneca had a real inhibitor, olaparib, fail on them recently, so there may well be a problem with the whole idea. But iniparib's mechanism-of-action problems certainly didn't help to clear anything up.

Begley and Ellis call for tightening up preclinical oncology research. There are plenty of cell experiments that will not support the claims made for them, for one thing, and we should stop pretending that they do. They also would like to see blinded protocols followed, even preclinically, to try to eliminate wishful thinking. That's a tall order, but it doesn't mean that we shouldn't try.
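
The mechanical part of blinding, at least, is cheap; the hard part is the discipline. As a purely hypothetical sketch (the sample names and groups here are invented for illustration), a lab could code its samples in a few lines of Python so that whoever scores the data never sees the group assignments until the scores are locked in:

    import random

    def blind_samples(samples, seed=None):
        """Assign coded IDs so the analyst can't see group labels.

        `samples` maps sample name -> true group ("treated"/"control").
        Returns the coded IDs for the analyst, plus a key that stays
        with a third party until all scoring is finished.
        """
        rng = random.Random(seed)
        names = list(samples)
        rng.shuffle(names)
        coded = {f"sample_{i:03d}": name for i, name in enumerate(names)}
        key = {code: (name, samples[name]) for code, name in coded.items()}
        return sorted(coded), key

    # Hypothetical example: six mice, half treated, half vehicle control.
    groups = {"m1": "treated", "m2": "treated", "m3": "treated",
              "m4": "control", "m5": "control", "m6": "control"}
    blinded_ids, key = blind_samples(groups, seed=42)
    print(blinded_ids)  # the analyst sees only sample_000 ... sample_005
    # The key is opened (and the groups unblinded) only after scoring.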

Update: here's more on the story. Try this quote:

Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.

"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

Comments (38) + TrackBacks (0) | Category: Cancer | Drug Assays | The Scientific Literature


COMMENTS

1. MIMD on March 29, 2012 8:12 AM writes...

Then there's sloppy science coming from our own government Dept. of HHS.

How's this for a caveat?

“Our findings must be qualified by two important limitations: the question of publication bias, and the fact that we implicitly gave equal weight to all studies regardless of study design or sample size.”

2. Rick Wobbe on March 29, 2012 8:23 AM writes...

In addition to increased publication pressure, there's also increased pressure to turn research and tech transfer offices into profit centers, so licensing early and often is the order of the day. In that caveat emptor world, the highest value is making the most money - and as such the model has worked spectacularly (greater than average licensing revenue for less than average scientific rigor/cost), leading one to ask "so where's the problem?" Sometimes it seems like, to paraphrase Barry Goldwater, "Extremism in the defense of revenue is no vice".

3. Tech transfer on March 29, 2012 9:35 AM writes...

Published research doesn't have to be right, it just has to convince others that it might be right. It is rewarded with more funding; the rest is obvious... publish convincing stuff, real or not.

4. Student on March 29, 2012 9:49 AM writes...

One topic Lee has brought up in seminars (he is our chair) is cross-contamination of cell lines, both by the clinicians bringing them from the clinic and by those in the hood (who passage them tens of times and don't think twice about handing them off to the lab down the hall, down the street, etc.).

5. Virgil on March 29, 2012 10:09 AM writes...

Spent most of last night reading this issue of Nature, and wondering when it would show up here! Nice to see this stuff in the main journal instead of relegated to Nat. Rev. Drug Discov.

As for reproducibility in cancer studies in the lab, one only has to look at the impending debacle at MD Anderson (http://md-anderson-cc.blogspot.com), to see what a mess basic research in cancer field is in, especially cell biology studies.

6. Student on March 29, 2012 10:34 AM writes...

@5 I don't think that guy's lab reflects the field or the institution, as he (or his workers?) was outright lying (as opposed to being sloppy, which is what this article speaks to)... As grad students we are required to take ethics courses. Among them, one emphasizes the use of software to detect image plagiarism, manipulation, etc. So this really took a lot of people by surprise.

7. lynn on March 29, 2012 10:38 AM writes...

It's certainly not limited to oncology. I think it's up to all of us who review manuscripts to insist on seeing the right controls run [both positive and negative] and much more rigorous testing of hypotheses. I agree it's not fraud - but a lot of sloppy science.

8. Rick Wobbe on March 29, 2012 10:41 AM writes...

Tech transfer, #3,
I can't see your face from here. Were you winking and smiling mischievously when you wrote that or were you serious? Sarcasm doesn't transmit very clearly through electronic media.

9. lazybratsche on March 29, 2012 10:45 AM writes...

"I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

That is very, very disheartening, and really inexcusable. I've only been at this for a few years (as a tech and now a grad student), but I would never ever believe an experiment that only worked once and then failed in the next five replicates. Sure, sometimes it's hard to do an experiment "correctly," since there are so many variables, some of which can't be controlled. So sometimes it's worth trying to find the conditions that make the result reproducible. But that non-reproducible result is worthless and should never be published. The result that is only reproducible under narrow conditions can be informative, but it's only publishable if those conditions are made absolutely explicit.

And yet, the lab that publishes shocking nonreproducible results will get a handful of Nature papers. The lab that is more cautious will be lucky to get a single solid paper in a less glamorous journal.

10. Matthew Herper on March 29, 2012 10:52 AM writes...

Over on Twitter, Leonid Kruglyak of Princeton pointed out that saying "94% of what you know is wrong but we won't tell you which 94%" does not exactly establish credibility either.

11. dearieme on March 29, 2012 11:38 AM writes...

"He said they'd done it six times and got this result once, but put it in the paper because it made the best story." Sorry, that is fraud - he knew all along.

12. maverickny on March 29, 2012 11:51 AM writes...

Sloppiness is, of course, only one potential reason for the lack of reproducibility, but there are many others including contaminated cell lines, as others have correctly pointed out.

However, the sheer complexity of the biological processes involved in cancer is also an important factor, and one that I'm surprised a leading cancer researcher such as Lee Ellis didn't bother to point out.

13. Anonymous on March 29, 2012 11:51 AM writes...

I'm a chemist, and I've always been suspicious of medical types - I don't know how many times I've seen some half scientist / half P.T. Barnum on CNN breathlessly claiming that a cure for cancer was right around the corner. I can guarantee you I'd be tarred and feathered if I tried to publish a paper saying I'd invented a time machine or something like that!

14. NJBiologist on March 29, 2012 11:52 AM writes...

@7 Lynn--Absolutely. And that's not limited to oncology, or to target validation studies. Unfortunately, opinions about what procedural controls mean vary between individuals. Blinding, in particular, means very different things to different people.

15. RM on March 29, 2012 12:07 PM writes...

What really gobsmacked me was this little gem from the Reuters story on the article:

"Some authors required the Amgen scientists sign a confidentiality agreement barring them from disclosing data at odds with the original findings."

Which, if true, is just wrong. Not even wrong, as the saying goes. That's not how science should be done.

If I was in their situation, I'd seriously consider refusing to sign the agreement, and then going to the journal editors with it as de facto evidence that the original article should be retracted, as the authors obviously have no confidence in its validity.

16. Ginsberg on March 29, 2012 12:21 PM writes...

"He said they'd done it six times and got this result once, but put it in the paper because it made the best story."

"Very disillusioning" or scientific misconduct?

17. jtd7 on March 29, 2012 12:38 PM writes...

Another contributing factor may be the de-emphasis of Experimental Methods. Especially in high-profile journals such as Nature and Science, the methods are relegated to “Supplemental Material Available On-Line.” I think this sends a message that they are not all that important. Too often, when I am trying to follow up on a published result, I find that the published methods are inadequate. The source and characterization of a primary antibody may not be given. The method may be a reference to another publication that, too often, does not describe any such method.

18. John Wayne on March 29, 2012 12:42 PM writes...

@16, definitely scientific misconduct. It may be time (or a little past time) to make a few examples out of poor behavior; this sort of thing has the potential to infect the whole field of scientific research.

19. mike on March 29, 2012 2:07 PM writes...

To be fair, a lot of experiments in the hands of graduate students and undergraduates do not work many times before they actually work. Without more detail, I would hesitate to say that the experiment working one time in six was a failure to replicate the work, instead of the PI handing the project to student after student until one of them did it. Then he jumped to the conclusion that this student did it right, rather than that the experiment didn't work.

It's hard to publish an article that really describes all the failed experiments in a graduate lab. "Researcher A got the reaction to work on his fourth attempt, but then was able to do it consistently for the next two attempts before he left. Researcher B, an undergraduate, claims to have run the reaction six times, but the rest of the lab only remembers him being there on two of those days, and for one of them he left it running over spring break before attempting to work it up. Researcher C got it to work once, and then failed six more times before realizing that when his reagent was borrowed by the lab down the hall, they left it out of the dry box for three weeks before returning it. He ran it once more with fresh reagent and it worked again, but since he was getting married and leaving grad school he never wrote it up."

So how many times did the reaction work? Which of the reactions should be placed in the experimental section? And this would be a chemical system, which is so much shorter and easier to interpret than a biological one.

20. anonymous on March 29, 2012 3:13 PM writes...

a

21. Todd on March 29, 2012 3:14 PM writes...

I'm shocked, but not surprised. This has been going on for decades. I don't think there's a lot of fraud, like everyone else is saying. However, if you're a PhD student or postdoc who needs a paper to graduate, and the PI is leaning on you, well... things can be made to work the 53rd time. Also, there's more secret sauce in your average academic lab than in most McDonald's. It's scary how often that happens.

22. Hap on March 29, 2012 3:23 PM writes...

I thought that if you published a paper on a synthetic method, you were actually supposed to have one - meaning that you know the basic inputs needed to obtain the depicted products (for a limited subset of reactants) consistently. If you don't have that, why was your paper published again (other than for CV enhancement)? Otherwise, I might as well be reading ads for nutraceuticals and methods to attain financial freedom through lottery tickets as reading research papers.

23. Andrew on March 29, 2012 4:27 PM writes...

I found it more than a little hypocritical that the Nature paper had no methods, no results, no list of what they tried and failed to validate, no indication of how they tried to validate, etc. In short, it was one of the sloppiest papers I've seen, and itself belonged in the Journal of Irreproducible Results. No wonder it appeared in Nature.

24. Chrispy on March 29, 2012 5:27 PM writes...

Gee, it is unfortunate that Begley did not encourage his group to publish their results. Amgen clearly applied a lot more resources to these studies than the academics could afford to, and now this boatload of important research was done only to be lost. Part of the beauty of Science is that it is OK to disagree, but show us your evidence. Whining that the academics are doing an inadequate job is not really participating and doesn't help much. Until leaders in industry recognize that they bear responsibility for scientific progress, too, we'll be stuck, each company trying to secretly discover what is real and what isn't on their own.

25. Anonymous on March 29, 2012 8:41 PM writes...

And for those who don't read the literature and end up "re-inventing the wheel" it's called SLOPPY SECONDS!!! LOL

26. Anonymous on March 29, 2012 9:51 PM writes...

@24. The results are being published. Here's an example http://cancerres.aacrjournals.org/content/early/2011/07/07/0008-5472.CAN-11-0778

27. Iridium on March 30, 2012 1:26 AM writes...

"He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

If a result comes out 1 time in every 6, it could still be OK to publish it... as long as you say it works 1 time out of 6!

In my book, this is not "very disillusioning," this is fraud.
Sadly... he doesn't even realize that!

28. Rob on March 30, 2012 7:24 AM writes...

That is thoroughly disgusting conduct.

What ticks me off as an academic scientist is having to compete for grants with people who just make it up.

29. Rob on March 30, 2012 7:25 AM writes...

That is thoroughly disgusting conduct.

What ticks me off as an academic scientist is having to compete for grants with people who just make it up.

30. Anonymous on March 30, 2012 7:27 AM writes...

... and how many biologists' experiments really contained the intended cell lines? See this:

http://www.researchgate.net/publication/51919690_The_necessity_of_identity_assessment_of_animal_intestinal_cell_lines_A_case_report

31. newnickname on March 30, 2012 8:44 AM writes...

In his book "The Way of Synthesis: Evolution of Design and Methods for Natural Products" and elsewhere, Hudlicky says that one of the major problems in modern chemistry is "ethics" (or the lack of ethics?) in the reporting of yields and other aspects of our reactions and research. There should be no shame or penalty for reporting a "59%" yield instead of accidentally-on-purpose transposing that to "95%" (which I have witnessed others do).

32. Jordan on March 30, 2012 11:57 AM writes...

@31 newnickname: Hudlicky makes the same point very vigorously in person.

@21 Todd: I think you've identified a potential root cause here -- the "race to the finish line". It may manifest itself more as sloppiness than outright fraud, but the net effect is the same.

33. Immunoldoc on March 30, 2012 2:23 PM writes...

Having been involved in a target ID and validation group at a major pharma, I can say, charitably, that about 50% of published work was not reproducible, either in whole or in part. As for the accusations that pharma is "hiding" such data: show me the journal that routinely publishes negative results refuting the findings of major labs, and I'm quite sure I'd be happy to get these stories out. I tried on several occasions to publish well-controlled negative data refuting a major story, in the very journals that had printed the original article, only to be told they weren't interested.

34. Nile on March 31, 2012 5:16 AM writes...

"He said they'd done it six times and got this
result once, but put it in the paper because it
made the best story."

...Follow-up studies tried fifty times, got zilch, and the original experimenter admitted he couldn't help them.

Isn't there a journal specifically for irreproducible results?

35. Eric Schuur on April 3, 2012 12:35 PM writes...

Better to get the truth out there in the open, but I do find that last quote quite depressing.

36. Mario on April 6, 2012 8:55 AM writes...

There are people doing research and academia for prestige and fame, not for the love of science. For them, science is a tool to reach a goal, not the goal itself. So faking results is not that disturbing, just a necessary little stain to move one step up.

It is not sloppy science, it is a crime.

Meanwhile, the sufferers are still waiting for a cure.

37. CM on April 6, 2012 9:26 AM writes...

Scientists are turned into mortgage slaves nowadays -- what else did you expect?

38. udippel on April 6, 2012 2:12 PM writes...

Of course it is fraud. Don't kid or delude yourselves.
Delusion is when someone calls 'sloppy' what is an outright fake. Science, by one definition, requires that an experiment can be repeated; if it can't, it is fake. As simple as that. If "sometimes" renders a result publishable, we have slipped down to the same level that we used to abhor. Then astrology, palm reading, homeopathy and whatnot are just as much science as 'our' science: "We just happen to not be able to reproduce our result these days."
It is understandable, though. It is not 'us' per se, but the society around us. Instead of trusting us to be responsible academics, we are controlled by bean-counters who demand a 'breakthrough' after some fixed period of funding. As if the reality of science weren't that, most of the time, things just don't work. It is career, tenure, feeding the family. That ought not to be an excuse, though. An unemployed father's stealing from the store isn't condoned either.

Until a few days ago I was working in a place where we were 'encouraged' to let any student with non-functional research results 'slip' through. Reason given: it's for the income of the university ("your salary").


