About this Author
College chemistry, 1983

Derek Lowe, the 2002 model

After 10 years of blogging...

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly. Twitter: Dereklowe


In the Pipeline


January 7, 2014

How Much Is Wrong?


Posted by Derek

Here's another take, from Jeff Leek at Simply Statistics, on the "How much published research is false?" topic. This one is (deliberately) trying to cut down on the alarm bells and flashing red lights.

Note that the author is a statistician, and the arguments made are from that perspective. For example, the Amgen paper on problems with reproducibility of drug target papers is quickly dismissed with the phrase "This is not a scientific paper" (because it has no data), and the locus classicus of the false-research-results topic, the Ioannidis paper in PLoS Medicine, is seen off with the comment that "The paper contains no real data, it is purely based on conjecture and simulation."
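For readers who haven't been through the Ioannidis paper, the core of its "conjecture and simulation" is a simple positive-predictive-value calculation: how likely a statistically significant finding is to be true, given the prior odds that tested hypotheses are true, statistical power, and the significance threshold. A minimal sketch, with purely illustrative numbers:

```python
def ppv(prior, power, alpha):
    """Probability that a 'significant' finding reflects a real effect."""
    true_pos = power * prior          # real effects correctly detected
    false_pos = alpha * (1 - prior)   # null effects flagged by chance
    return true_pos / (true_pos + false_pos)

# Exploratory target-hunting: few tested hypotheses are true, modest power.
print(round(ppv(prior=0.05, power=0.5, alpha=0.05), 2))  # 0.34
# Confirmatory study: much better prior odds and higher power.
print(round(ppv(prior=0.5, power=0.8, alpha=0.05), 2))   # 0.94
```

The point of the exercise is that in fields where most tested hypotheses are long shots, a majority of "significant" results can be false positives even with everyone's statistics done honestly.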

I'll agree that we don't need to start assuming that everything is junk, as far as the eye can see. But I'm not as sanguine as Leek is, I think. Semi-anecdotal reports like the Amgen paper, the Bayer/Schering paper, and even scuttlebutt from Bruce Booth and the like are not statistically vetted scientific reports, true. But the way that they're all pointing in the same direction is suggestive. And it's worth keeping in mind that all of these parties have an interest in the answer being the opposite of what they're finding - we'd all like for the literature reports of great new targets and breakthroughs to be true.

The one report whose mathematical underpinnings Leek is happy with is the Many Labs project. But something about that bothers me. The Many Labs people were trying to replicate results in experimental psychology, and while that probably has some relevance to the replication problems in biology and chemistry, there are big differences, too. I worry that everything is getting lumped together as Science: if this part of Science is holding up, then the worries people in other parts of Science have must be ill-founded (after all, they don't have any real numbers, right?)

Comments (11) + TrackBacks (0) | Category: The Scientific Literature


1. Jeff Leek on January 7, 2014 11:48 AM writes...

Thanks for linking to my post and the interesting discussion. I am definitely a statistician and certainly the post was written from that perspective.

I don't know how much science is right or how much is wrong. I wrote that post because if we are going to make generalizations like "most published research is false," then I think it is reasonable that we subject those generalizations to scientific scrutiny. My post was an admittedly brief description of what I consider to be the major gaps in our knowledge about these sorts of statements.

It is important to be critical of these claims because they have clear political, funding, and scientific consequences. Before we make big, expensive changes to our system we should be sure we know why we are making those changes and what facts are driving the decisions.

There is clearly good science and clearly unreplicable science, but it is rarely as clear-cut as in the headlines.


2. McDee on January 7, 2014 12:28 PM writes...

Derek projects some emotion onto Dr. Leek which I don't think is part of his analysis. For example, Derek describes Dr. Leek as sanguine, but I don't think optimism is part of the underlying basis of his overview: "But the take home message is that there is currently no definitive evidence one way or another about whether most results are false." He is also characterized as "glad" about the Many Labs project. As he writes himself (twice) in these comments: "It is important to be critical of these claims because they have clear political, funding, and scientific consequences. Before we make big, expensive changes to our system we should be sure we know why we are making those changes and what facts are driving the decisions." This is an important motivation, free from emotion. To impute emotion to a statistician seems to me inherently mistaken!


3. SP on January 7, 2014 1:51 PM writes...

"...all of these parties have an interest in the answer being the opposite of what they're finding - we'd all like for the literature reports of great new targets and breakthroughs to be true."
Are you sure about that? There's competition, there's incompetence that's easier to blame on others, and there's (to link to your previous post) the bias that comes from everyone saying academic reports are 50% garbage - so of course ours didn't work.


4. qetzal on January 7, 2014 2:26 PM writes...

I agree that we can't say definitively whether most results are false. But I also agree with Derek that a lot of results seem to be pointing in that direction.

Here's another: the failure rate for new drug candidates entering Phase III studies is around 50%! And IIRC, the large majority of those failures are due to efficacy and/or safety issues (i.e., not cases where the Phase III met its endpoints but the company dropped the drug for business reasons).

These are typically drugs that have already shown some success in one or more Phase II studies. It seems likely that many of these Phase III failures are attributable to false positives in Phase II.
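A rough sketch of that arithmetic (the ~50% Phase III failure rate is the figure cited above; the share of those failures that are scientific rather than business-driven is an assumed illustrative number):

```python
# Back-of-the-envelope: if drugs entering Phase III all carry Phase II
# "wins," what fraction of those wins does Phase III suggest were false
# positives? Numbers are illustrative, not measured data.

phase3_entrants = 100        # drugs entering Phase III (all passed Phase II)
phase3_fail_rate = 0.50      # approximate failure rate cited in the comment
sci_failure_share = 0.90     # assumed share failing on efficacy/safety

scientific_failures = phase3_entrants * phase3_fail_rate * sci_failure_share
# Treating Phase III as the more definitive test, these failures look
# like Phase II false positives:
implied_fp_rate = scientific_failures / phase3_entrants
print(implied_fp_rate)  # 0.45
```

Under those assumptions, nearly half of Phase II "successes" would be false positives, which is the direction the comment is pointing.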


5. johnnyboy on January 7, 2014 2:52 PM writes...

In reviewing papers for truthiness, some things are relatively easy to test (i.e., statistical methods), and some things are not - hence the difficulty of proving "scientifically" that a paper is incorrect. For instance, as a pathologist, I can state from experience that a large proportion of the microphotographs of immunohistochemical stains published in peer-reviewed journals are essentially garbage, and therefore the quality of the data those photographs are meant to illustrate is highly suspect. However, because journal reviewers rarely have the knowledge and experience required to evaluate such photographs, the garbage goes through very easily. I suspect the same is true for many other aspects of published work - two reviewers can't be expected to have the knowledge to critically evaluate every one of the many research techniques used in life science research. Proving this "scientifically", however, is well-nigh impossible because of the depth of work that would be necessary to demonstrate it.


6. Cellbio on January 7, 2014 7:56 PM writes...

I agree with Johnnyboy and would add a certain perspective. It is a bit difficult to prove a published result false, because one has to thoroughly evaluate all the variables, in contrast to "showing" the potential of something to cure what ails ya. But the task of proving the work wrong is extremely doable. It was done by the Amgen group, and I have done it several times myself: on two cancer compounds, one very sexy at the time, and in several other cases where the published results could not be shown to be wrong without a thorough repeat in the lab. In one case, work demonstrating the error was submitted for publication and got an editorial rejection on the grounds that corrections were out of scope - this from the journal that had published the misleading but sexy story. So, Jeff, hope informal is OK: I do not agree that one needs to be critical of these claims so much as more critical of the primary literature. The publication game is steeped in self-interest and controlled in a manner that reflects that. Heck, just look at citation frequency in several fields and correlate the degree to which authors self-cite vs. cite the rivals they compete against. Do that, and I'd love to read your results.

For me, as Derek says, my self-interest was totally aligned with getting the same results and launching an exciting program that would feed my career. When, as part of a larger team, you see not a few but many, many of these examples, the primary problem clearly lies with the current system, not the reaction to it. Higher quality is all I want.


7. Paul Brookes on January 8, 2014 8:45 AM writes...

Another example of this type of exercise, which has not received as much press in the recent binge of reproducibility publicity, is the NIH-sponsored CAESAR project.

The aim is to solve the problem that many small molecules look very good at preventing injury in animal models of myocardial infarction, but none of that has translated to therapies in humans. There have been some spectacular clinical trial failures along the way, with the result that most big pharma companies abandoned their MI programs a few years ago. After 50 years of this type of research there are still no FDA-approved infarct-limiting therapies, and yet every month someone else comes along with another molecule that ameliorates cardiac ischemic injury in mouse/rat/rabbit/dog/pig (I'm as guilty as anyone else on that front, having proposed quite a few in my time).

The setup is pretty neat: 3 groups, geographically spread out, with shared animal models between them (mouse/rabbit/pig), standardized baseline procedures, everything double-blinded and placebo-controlled, and blinded data analysis - all the things you expect from a quality clinical trial, but at the preclinical animal level.

There are some issues, such as people submitting compounds to the consortium being worried about IP coverage, and of course having to trust your "pet" compound to a group of people who may be your academic competitors, but the group seems to have done a good job so far assuaging those fears. They haven't published much yet, but word on the street is there have been some early shocks, with things that looked very promising and supported by a lot of basic science papers just not panning out when subjected to this higher level of scrutiny.

A similar structure might therefore be useful in some other areas that have generated a lot of failure-to-translate in recent years (e.g. Alzheimer's). The current model, wherein small pharmas are founded and out-license IP on the basis of data from a single lab, just doesn't seem to be working.


8. SP on January 8, 2014 9:29 AM writes...

"It was done by the Amgen group"
See, the problem is we have no way to evaluate this statement, because the article is just a claim that they reproduced only 6 papers; as Jeff says, we have no data on why they failed to reproduce the others. Now maybe that's a failure of the journal system not accepting debunking papers, but publishing a commentary without even saying which reports were failures doesn't help anyone - except maybe the authors of the commentary, who now get to be highly cited without actually publishing any data.


9. Cellbio on January 8, 2014 12:36 PM writes...

You make a valid point, SP, but as someone who has generated very clear and convincing evidence that prior papers are wrong, I can say the telling of this story is very complex. The journals have no incentive to publish corrections, since they sell the hype and promise of the original results, and it is in no way easy to point out that the work of very talented and proud researchers - whose reputations are their strongest currency - is wrong. Additionally, the reputations and careers of students and post-docs are tied to the work. To ignore this complexity and expect these stories to be easy to tell, or to insist that without "scientific" demonstration of the failings the concerns themselves are illegitimate, is off base.

To provide some insight: in one case, the work of a highly regarded scientist that got a lot of public press "showed" that a protein was a strong anti-cancer agent. The model was run correctly, and the results were valid... for what was tested. But the senior researcher had no experience in recombinant protein expression. Their "purified" protein ran as a smear from the well to the bottom of the gel on SDS-PAGE. By SEC, the "protein" was an aggregate that was not in solution. The source was E. coli, so it was loaded with endotoxin. With proper expression, purification, and analytical work, we showed the protein had absolutely no effect. In total, a real embarrassment of an effort - which we shared with the primary researchers, who went on in their start-up company to proceed into clinical trials.

In another, a protein was "shown" to be a breast cancer target because antibodies to it blocked cell growth in vitro. On obtaining samples, the first thing we found was that there was no antibody in the prep. The second thing we did was make more antibody from the hybridomas and show that it had no binding to the antigen. We then made new antibodies, in case expression of the right one had been lost: we got 7 antibodies to 3 different epitopes, produced and qualified them, and showed definitively that there was no effect. Another story that is a whole lot easier to bury than to show, through exhaustive data, that the first publication was crap - especially as a new grant submission was being reviewed (and subsequently funded).

I guess where I end up is sharing observations to make others aware that there is a significant problem, rather than insisting we carefully quantify the problem or verify the claims of individuals whose direct experience leads them to conclude there is one.


10. biologist on January 8, 2014 1:23 PM writes...

Maybe the word "result" needs a better definition; this may be the point where Leek and the pharma scientists differ.
I believe that the results from the n-th incremental paper in The Journal of X are probably true. Revolutionary results in a trendy journal? Not so sure. It's in the trendy journals that the incentives favor hype.

If five different papers, published within a year in trendy and not-so-trendy journals, report the same results, one can also be fairly confident that the results are true. The same goes if one paper produces many follow-up papers from a variety of labs over the next 20 years.

But that's a high price to pay for confidence, whether in research dollars or in time! And of course it's the revolutionary papers that are interesting to the pharmaceutical industry - hence the selection the Amgen scientists made.


11. Anon on January 14, 2014 12:37 AM writes...

"The paper contains no real data, it is purely based on conjecture and simulation."

So was Einstein's paper on relativity.


