Corante

About this Author
College chemistry, 1983

Derek Lowe, the 2002 model

After 10 years of blogging...

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs:
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business:
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events:
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres:
Uncouth Reflections
Arts and Letters Daily

In the Pipeline


July 9, 2013

Non-Reproducible Science: A Survey


Posted by Derek

The topic of scientific reproducibility has come up around here before, as it deserves to. The literature is not always reliable, and it's unreliable for a lot of different reasons. Here's a new paper in PLOS ONE surveying academic scientists about their own experiences:

To examine a microcosm of the academic experience with data reproducibility, we surveyed the faculty and trainees at MD Anderson Cancer Center using an anonymous computerized questionnaire; we sought to ascertain the frequency and potential causes of non-reproducible data. We found that ~50% of respondents had experienced at least one episode of the inability to reproduce published data; many who pursued this issue with the original authors were never able to identify the reason for the lack of reproducibility; some were even met with a less than “collegial” interaction.

Yeah, I'll bet they were. It turns out that about half the authors who had been contacted about problems with a published paper responded "negatively or indifferently", according to the survey respondents. As to how these things make it into the literature in the first place, I don't think that anyone will be surprised by this part:

Our survey also provides insight regarding the pressure to publish in order to maintain a current position or to promote one's scientific career. Almost one third of all trainees felt pressure to prove a mentor's hypothesis even when data did not support it. This is an unfortunate dilemma, as not proving a hypothesis could be misinterpreted by the mentor as not knowing how to perform scientific experiments. Furthermore, many of these trainees are visiting scientists from outside the US who rely on their trainee positions to maintain visa status that affects them and their families in our country.

And some of these visiting scientists, it should be noted, come from backgrounds in authority-centered and/or shame-based cultures, where going to the boss with the news that his or her big idea didn't work is not a very appealing option. It's not for anyone, naturally, but it's especially hard if you feel that you're contradicting the head of the lab and bringing shame on yourself in the process.

As for what to do about all this, the various calls for more details in papers and better reviewing are hard to complain about. But while I think that those would help, I don't see them completely solving the problem. This is a problem of human nature; as long as science is done by humans, we're going to have sloppy work all the way up to outright cheating. What we need to do is find ways to make it harder to cheat, and less rewarding - that will at least slow it down a bit.

There will always be car thieves, too, but we don't have to make it easy for them, either. Some of our publishing practices, though, are the equivalent of habitually walking away with the doors unlocked and the keys in the ignition. Rewarding academic scientists (at all levels) so directly for the number of their publications is one of the big ones. Letting big exciting results through without good statistical foundations is another.

In this vein, a reader sends along the news that the Reproducibility Initiative is now offering grants for attempts to check big results in the literature. That's the way to get it done, and I'm glad to see some money forthcoming. This effort is concentrating on experimental psychology, which is appropriate, given that the field has had some recent scandals (follow-up here) and is now in a big dispute over the reproducibility of even its honestly-meant data. They need all the help they can get over there - but I'll be glad to see some of this done over here in the biomedical field, too.

Comments (16) + TrackBacks (0) | Category: The Dark Side | The Scientific Literature


COMMENTS

1. Paul Smith on July 9, 2013 9:40 AM writes...

Just to add a few cents:

Some research efforts are also simply inherently not easily reproduced. In the protein field, two identical proteins expressed and purified using identical protocols can and often do behave very differently. No sloppiness involved or malice intended, just normal erratic protein behavior. A lot of stochastic variables go into the dark arts of protein purification and crystallization - your mileage may vary.

--Paul


2. neandrothal on July 9, 2013 10:18 AM writes...

Um...how does one "prove a hypothesis"? If my mentor asked me to "prove a hypothesis" I'd suspect that she didn't know what statistics and/or the scientific method were.


3. The Iron Chemist on July 9, 2013 10:20 AM writes...

Better reviewing is also needed. As a referee, I've come across more than a couple of submissions that have been plagiarized. With some of these, I've found the exact same phrases used in multiple other articles, not just one. Also, I'm sure that everyone who reads this blog has come across astonishingly crummy never-should-have-been-accepted articles in their favorite journal.

Of course, referees are doing this for free. Some sort of reward for a job well done might improve things, but I'd be hard-pressed to suggest what exactly that might be or how the merit of a review would be determined.


4. Canageek on July 9, 2013 10:21 AM writes...

Wouldn't having some of the raw data attached make things a fair bit harder to fake? The raw X-ray data or the NMR FID, for example? Just have one person solve it as described in the paper and make sure it matches, and boom: you now have to fake a raw data file that is still a valid file, instead of just using Photoshop or writing a CIF file.
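
To make the "valid file" point concrete, here is a minimal sketch in Python - an illustration, not anything from the comment or any journal's actual pipeline - of the kind of trivial gate a claimed raw NMR file would already have to pass. It checks for a few of the labeled records that the JCAMP-DX plain-text exchange format requires; a fabricated "raw" file has to get at least this much right before anyone even looks at the spectrum. The helper name and file path are hypothetical:

    # Minimal validity gate for a claimed raw NMR data file in
    # JCAMP-DX format: a handful of required labeled records must
    # be present in the text.
    REQUIRED_LABELS = ("##TITLE=", "##JCAMP-DX=", "##DATA TYPE=", "##END=")

    def looks_like_jcampdx(path: str) -> bool:
        # Read as text, tolerating stray bytes in old instrument files.
        with open(path, errors="replace") as fh:
            text = fh.read()
        return all(label in text for label in REQUIRED_LABELS)

    # Hypothetical usage:
    # looks_like_jcampdx("supporting_info/compound_12.jdx")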


5. John Wayne on July 9, 2013 10:26 AM writes...

@3: I've noticed that when I come down on a poor submission as a reviewer, the editor doesn't assign me anything for a while. I have generally assumed that this is because thoughtfully worded resistance to a putative paper only makes the editor's life harder, but is it possible that that's my reward?


6. DLIB on July 9, 2013 10:55 AM writes...

Well, sadly, the currency of science is publication - and the sad part is that I don't mean "currency" figuratively. High-profile papers can get you money and position, and alter your career trajectory. I think removing some of the value of that currency, in a way that simultaneously benefits science, is the way to go; the cost is that it devalues earnest attempts at becoming a famous scientist. Simply mask grant authorship and the accompanying biography until the scientific merit of a proposal (everything scored except INVESTIGATOR) has passed muster. The best-scoring proposals then get unmasked, but only to the Program Officer, and only so that he can determine whether the resources are available to carry out the work. This may hurt entrenched professors who are past their prime in generating quality work. Successful grants should be able to have longer runways (possibly up to ten years), so that the different avenues that open up can be explored. This would also free up PIs from the grant-writing mill, and deal with the "trendiness" that naturally occurs in hot new areas. In the end, the laws of nature don't give a rat's a$$ who accomplishes the revealing - and we are wasting billions of taxpayer dollars supporting a system that favors cronyism over the best science.


7. Algirdas on July 9, 2013 11:16 AM writes...

Paul Smith, #1:

I don't know, perhaps you work with some particularly poorly behaved proteins, but I think what you wrote is not true. There simply is no need to use literary language like "erratic protein behavior" or "the dark arts of protein purification".

Yes, there are instances when it is hard to perform protein prep reliably - if you are isolating some low abundance thing from a eukaryotic tissue and need to preserve glycosylation, or other posttranslational modifications. Membrane proteins also tend to be no stroll in the park. But there is nothing erratic about proteins. It is the experimenter who is erratic.

"In the protein field, two identical proteins expressed and purified using identical protocols can and often do behave very differently."

- this is simply nonsense. Proteins are chemical substances just like any other. If you make two preps of a protein identically, the preps will have identical properties (again, it may be hard to make them identical, but that is a different matter). No progress in biochemistry could have taken place over the past 60-80 years if this were not the case. We would not have a biotech industry, which somehow manages to supply us with hundreds of purified enzymes with reproducible and well-defined properties. We would not be using proteins as pharmaceuticals, producing them on an industrial scale - and pretty hard stuff, too: just look at the different modifications of EPO available on the market.

Myself, I have prepared samples of multi-subunit enzymes, using multi-step protocols, more than once. And somehow, the preps showed nearly identical catalytic activities, and identical NMR spectra.


8. Bryan on July 9, 2013 1:18 PM writes...

I worry about the 20% response rate of the survey inflating the claim that 50% of scientists have experienced problems with data reproducibility. After all, people who have had issues with data reproducibility are much more likely to vent their frustrations on this survey than those who have had no such problems and don't want to take the time to fill out the survey.

While I agree that the reproducibility of many important scientific findings is a problem, the 50% number likely overstates it. Even so, the survey places a lower limit at 10% of scientists: if every non-respondent is assumed to have had no such problems at all, the 50% of the 20% who responded still works out to 10% of everyone surveyed - which is still an unacceptably high number.
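
That worst-case bound is simple enough to check. Here is a back-of-the-envelope sketch in Python, using the round numbers quoted in the comment above rather than the paper's exact figures:

    # Worst-case lower bound on prevalence: assume every single
    # non-respondent experienced no reproducibility problems at all.
    response_rate = 0.20   # fraction of surveyed scientists who responded
    reported_rate = 0.50   # fraction of respondents reporting problems

    lower_bound = response_rate * reported_rate
    print(f"lower bound on prevalence: {lower_bound:.0%}")   # -> 10%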


9. Graduate Student on July 9, 2013 2:59 PM writes...

As a graduate student who has had to spend an egregious amount of time working through non-reproducible experiments, I would applaud this wholeheartedly...

http://retractionwatch.wordpress.com/2013/07/08/time-for-a-scientific-journal-reproducibility-index/

A good mentor makes a world of difference; the culture of the lab, of the scientist, and of the broader program are also important. I would caution against the trend in grad programs toward overly ambitious publication requirements for graduation. That incentive is pushing the wrong way.

It would be nice to see a well-done survey, or an actual randomized, stratified reproducibility initiative in biomedical science. The Nature article on the matter was a sick joke - not naming the experiments attempted was almost sadly ironic - and the surveys noted here and another high-profile one in Infection and Immunity seem to be biased to the point of only being topically informative.


10. franko on July 9, 2013 4:14 PM writes...

@7: Agreed, but what about a situation where another lab takes a stab at replicating your protein purification and analysis, fails, and publishes a scathing follow-up to your work? How would you feel about that strike at the reputation of your lab? The requirement that you respond to some shoddy lab's inability to reproduce your own work would bog you down in a costly, drawn-out, debilitating response process. If it were me, I would not appreciate it.

So, who will judge the credibility of the reproducibility hounds? Who will watch the watchers?

For every way to make a system or procedure work, there are ten ways to make it fail. If we give equal credibility to labs which cannot make procedures work, progress will grind to a halt.

I remember, as a graduate student, calling up the head of a lab that had published a paper on a protein with an interesting activity. They had only one paper on it, with no follow-up. After getting him on the phone, I realized I had not worked out how to ask him my question, so I blurted out, "Is it still true?" He laughed and said, "Do you mean, have we done anything more with it?" Yes, I said. No, he said, we've been studying a different reaction, and he proceeded to tell me about some unrelated experiments. I realized he was teaching a novice that, if there are no follow-up papers, something is wrong. No one abandons a successful system, because they are not that easy to come by.

The key question to ask is, "Have you done anything more with that?"


11. Tomas on July 9, 2013 6:24 PM writes...

A lot of the time, the negative reaction may be due to worry that the result will be disproved or found sloppy.

I am guilty of an instance or two of unhelpful or indifferent response to others trying to reproduce my results, and the reason is that I have moved on, and I am busy with other things, and it would be So. Much. Work. to really help them go through it.

I just spent 25% of my time over the last three months reproducing a part of our own results, obtained in haste last year, which needed to be properly reproduced.


12. bank on July 10, 2013 3:00 AM writes...

@Graduate Student,

The Begley report you mention (Nature v483, p531) is ironic for two reasons: the first, as you mention, is that the experiments they performed were not described in *any* detail, and the second is that the experiments they chose to repeat were clearly not selected at random. Taken together, these limitations render their report meaningless. That did not stop it from spawning a whole series of follow-up articles, exactly in the way they criticize.

Indeed, the limitations of their study may underlie the skepticism that greeted their claim that 89% of studies were not reproducible. Most scientists expected a certain proportion of studies to be difficult to reproduce, but nothing approaching 89%.

Therein lies the other fascinating aspect of this story: experienced scientists were (maybe) able to correctly assess the likelihood of the Begley study being correct without direct access to the underlying data, likely by using those intangible factors - experience and domain knowledge.


13. IndustrialResearcher on July 10, 2013 8:16 AM writes...

I'd like to relay an experience I had in trying to correct an error I found in the literature. I tried to prepare a compound very similar to one published in a JOC paper. I obtained a 3-membered ring compound rather than the reported 5-membered ring compound. Examination of the proton NMR supplementary material of the publication clearly showed that the literature compound (as well as a few other analogs) also contained a 3-membered ring. While the synthesis of these compounds was tangential to the thrust of the paper, the structural misassignments explained some of their results. (The misassigned compounds did not work as reagents for their reactions, while the correctly assigned compounds did.)

I contacted the professor who was the lead author on the publication and informed him of this. He responded that the graduate student and post-doc that had done the work were gone, but he would contact them. I never heard from him again. There never was a correction published.


14. yeroneem on July 10, 2013 2:18 PM writes...

I think that the community can learn something from the X-ray crowd here - in particular, that it is MUCH harder to cheat if you have to supply the raw data you used to reach your conclusions. The tradition of supplying structure factors stopped in the '80s and is reappearing now, since they are really difficult to fake.

It is hard to imagine, however, what could serve as "raw data" in psychology. Anonymized video of the full study?


15. another process chemist on July 12, 2013 6:20 PM writes...

@5 (John Wayne): I've had the same experience. If I recommend rejection a couple of times in a row, I don't get any manuscripts coming my way for a while.


16. Pieter on July 16, 2013 5:20 PM writes...

I'm experiencing this right now: the stereochemistry presented in the literature cannot be reproduced. Academic papers, for me, have always been no more than a vague hint that something is perhaps possible.



RELATED ENTRIES
Gitcher SF5 Groups Right Here
Changing A Broken Science System
One and Done
The Latest Protein-Protein Compounds
Professor Fukuyama's Solvent Peaks
Novartis Gets Out of RNAi
Total Synthesis in Flow
Sweet Reason Lands On Its Face