Nature has a comment on the quality of recent publications in preclinical oncology research. And it’s not a kind one:

Glenn Begley and Lee Ellis analyse the low number of cancer-research studies that have been converted into clinical success, and conclude that a major factor is the overall poor quality of published preclinical data. A warning sign, they say, should be the “shocking” number of research papers in the field for which the main findings could not be reproduced. To be clear, this is not fraud — and there can be legitimate technical reasons why basic research findings do not stand up in clinical work. But the overall impression the article leaves is of insufficient thoroughness in the way that too many researchers present their data.

The finding resonates with a growing sense of unease among specialist editors on this journal, and not just in the field of oncology. Across the life sciences, handling corrections that have arisen from avoidable errors in manuscripts has become an uncomfortable part of the publishing process.

I think that this problem has been with us for quite a while, and that there are a few factors making it more noticeable: more journals to publish in, for one thing, and increased publication pressure, for another. And the online availability of papers makes it easier to compare publications and to call them up quickly; things don’t sit on the shelf in quite the way that they used to. But there’s no doubt that a lot of putatively interesting results in the literature are not real. To go along with that, the Begley and Ellis article itself has some more data:

Over the past decade, before pursuing a particular line of research, scientists. . .in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed ‘landmark’ studies. . . It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result.

Of course, the validation attempts may have failed because of technical differences or difficulties, despite efforts to ensure that this was not the case. Additional models were also used in the validation, because to drive a drug-development programme it is essential that findings are sufficiently robust and applicable beyond the one narrow experimental model that may have been enough for publication. To address these concerns, when findings could not be reproduced, an attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors’ direction, occasionally even in the laboratory of the original investigator. These investigators were all competent, well-meaning scientists who truly wanted to make advances in cancer research.
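Just to put those numbers in perspective, here’s a quick back-of-the-envelope check in Python. The 6-out-of-53 figure comes straight from the quote; the Wilson confidence interval is my own addition, simply to show how wide the uncertainty is on a count that small:

```python
# Back-of-the-envelope check of the Amgen numbers quoted above:
# 6 of 53 "landmark" findings confirmed. The Wilson 95% interval is an
# assumption on my part (the paper just reports the raw 11% figure).
from math import sqrt

def wilson_interval(successes, trials, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p_hat = successes / trials
    denom = 1 + z**2 / trials
    center = (p_hat + z**2 / (2 * trials)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

confirmed, landmark = 6, 53
low, high = wilson_interval(confirmed, landmark)
print(f"reproduction rate: {confirmed / landmark:.1%}")   # ~11.3%
print(f"95% interval:      {low:.1%} - {high:.1%}")       # roughly 5% to 23%
```

Even read generously, the plausible range tops out somewhere around one paper in four holding up.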

So what leads to these things not working out? Often, it’s trying to run with a hypothesis, and taking things faster than they can be taken:

In studies for which findings could be reproduced, authors had paid close attention to controls, reagents, investigator bias and describing the complete data set. For results that could not be reproduced, however, data were not routinely analysed by investigators blinded to the experimental versus control groups. Investigators frequently presented the results of one experiment, such as a single Western-blot analysis. They sometimes said they presented specific experiments that supported their underlying hypothesis, but that were not reflective of the entire data set. . .

This can rise, on occasion, to the level of fraud, but it’s not fraud if you’re fooling yourself, too. Science is done by humans, and it’s always going to have a fair amount of slop in it. The same issue of Nature, as fate would have it, has a good example of irreproducibility this week. Sanofi’s PARP inhibitor iniparib wiped out in Phase III clinical trials not long ago, after having looked good in Phase II. It now looks as if the compound was (earlier reports notwithstanding) never much of a PARP1 inhibitor at all. (Since one of these papers is from Abbott, you can see that doubts had already arisen elsewhere in the industry.)

That’s not the whole story with PARP – AstraZeneca had a real inhibitor, olaparib, fail on them recently, so there may well be a problem with the whole idea. But iniparib’s mechanism-of-action problems certainly didn’t help to clear anything up.

Begley and Ellis call for tightening up preclinical oncology research. There are plenty of cell experiments that will not support the claims made for them, for one thing, and we should stop pretending that they do. They also would like to see blinded protocols followed, even preclinically, to try to eliminate wishful thinking. That’s a tall order, but it doesn’t mean that we shouldn’t try.
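For what it’s worth, blinding doesn’t have to be elaborate. Here’s a minimal sketch, in Python, of the sort of sample-coding scheme that would do the job: someone outside the analysis assigns coded labels and holds the key until the readouts are locked. The names and the key file are my own invention, not anything prescribed by Begley and Ellis:

```python
# A sketch of third-party sample blinding: the custodian generates coded
# labels and keeps the key; the analyst works only from the codes until
# the readouts are locked. Everything here (names, CSV key file) is my
# own assumption, not a protocol from the Begley/Ellis comment.
import csv
import random

def blind_samples(groups, seed=None):
    """groups: dict of real sample ID -> group label ('treated'/'control').
    Returns (codes_for_analyst, key_for_custodian)."""
    rng = random.Random(seed)
    real_ids = list(groups)
    rng.shuffle(real_ids)
    key = {f"S{i:03d}": (rid, groups[rid])
           for i, rid in enumerate(real_ids, start=1)}
    return sorted(key), key

groups = {"mouse_01": "treated", "mouse_02": "control",
          "mouse_03": "treated", "mouse_04": "control"}
codes, key = blind_samples(groups, seed=42)

# The custodian writes the key somewhere the analyst never looks.
with open("blinding_key.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerows((code, real_id, group)
                     for code, (real_id, group) in key.items())

print(codes)   # the analyst sees only ['S001', 'S002', 'S003', 'S004']
```

The point is simply that whoever is scoring the blots or measuring the tumors never knows which code is treated and which is control until the numbers are in.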

Update: here’s more on the story. Try this quote:

Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.

“We went through the paper line by line, figure by figure,” said Begley. “I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”
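That “did it six times, got it once” line is easy to put numbers on. Under the null hypothesis a well-behaved p-value is uniform, so the chance that at least one of six independent repeats comes in below 0.05 is about 26%. Here’s a quick simulation to that effect; the 0.05 threshold is my assumption, but the six repeats come straight from the anecdote:

```python
# Quick simulation of the "six tries, report the best one" pattern above.
# Under the null hypothesis a well-behaved p-value is uniform on [0, 1],
# so the chance that at least one of six independent repeats dips below
# 0.05 is 1 - 0.95**6, about 26%. The simulation just confirms that
# arithmetic; the alpha = 0.05 cutoff is my assumption.
import random

def min_p_of_repeats(n_repeats, rng):
    """Smallest p-value across n_repeats null experiments (p ~ Uniform[0,1])."""
    return min(rng.random() for _ in range(n_repeats))

rng = random.Random(0)
trials = 100_000
hits = sum(min_p_of_repeats(6, rng) < 0.05 for _ in range(trials))
print(f"analytic:  {1 - 0.95**6:.1%}")    # ~26.5%
print(f"simulated: {hits / trials:.1%}")  # should land close to that
```

In other words, run a null experiment six times and keep the best one, and you’ll have a publishable-looking figure about a quarter of the time.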
