The topic of scientific reproducibility has come up around here before, as it deserves to. The literature is not always reliable, and it's unreliable for a lot of different reasons. Here's a new paper in PLOS ONE surveying academic scientists for their own experiences:
To examine a microcosm of the academic experience with data reproducibility, we surveyed the faculty and trainees at MD Anderson Cancer Center using an anonymous computerized questionnaire; we sought to ascertain the frequency and potential causes of non-reproducible data. We found that ~50% of respondents had experienced at least one episode of the inability to reproduce published data; many who pursued this issue with the original authors were never able to identify the reason for the lack of reproducibility; some were even met with a less than “collegial” interaction.
Yeah, I'll bet they were. It turns out that about half the authors who had been contacted about problems with a published paper responded "negatively or indifferently", according to the survey respondents. As to how these things make it into the literature in the first place, I don't think that anyone will be surprised by this part:
Our survey also provides insight regarding the pressure to publish in order to maintain a current position or to promote one's scientific career. Almost one third of all trainees felt pressure to prove a mentor's hypothesis even when data did not support it. This is an unfortunate dilemma, as not proving a hypothesis could be misinterpreted by the mentor as not knowing how to perform scientific experiments. Furthermore, many of these trainees are visiting scientists from outside the US who rely on their trainee positions to maintain visa status that affects themselves and their families in our country.
And some of these visiting scientists, it should be noted, come from backgrounds in authority-centered and/or shame-based cultures, where going to the boss with the news that his or her big idea didn't work is not a very appealing option. It's not for anyone, naturally, but it's especially hard if you feel that you're contradicting the head of the lab and bringing shame on yourself in the process.
As for what to do about all this, the various calls for more details in papers and better reviewing are hard to complain about. But while I think that those would help, I don't see them completely solving the problem. This is a problem of human nature; as long as science is done by humans, we're going to have sloppy work all the way up to outright cheating. What we need to do is find ways to make it harder to cheat, and less rewarding - that will at least slow it down a bit.
There will always be car thieves, too, but we don't have to make it easy for them. Some of our publishing practices, though, are the equivalent of habitually walking away from the car with the doors unlocked and the keys in the ignition. Rewarding academic scientists (at all levels) so directly for the sheer number of their publications is one of the big ones. Letting big, exciting results through without good statistical foundations is another.
In this vein, a reader sends along the news that the Reproducibility Initiative is now offering grants for attempts to check big results in the literature. That's the way to get it done, and I'm glad to see some money forthcoming. This effort is concentrating on experimental psychology, which is appropriate, given that the field has had some recent scandals (follow-up here) and is now in a big dispute over the reproducibility of even its honestly-meant data. They need all the help they can get over there - but I'll be glad to see some of this done over here in the biomedical field, too.