September 12, 2011
The Scientific Literature Gets Kicked Around
It seems that the credibility of the scientific literature has been taking a beating recently. This has come about for several reasons, and through several different motivations. I'll get one of the most important out of the way first: politics. While this has been a problem for a long time, there's been a really regrettable tendency in US politics over the last few years, a split across broadly left/right lines. Cultural and policy disagreements have led many on the left to claim the Dispassionate Endorsement of Settled Science, while others on the right complain that it's nothing of the sort, just political biases given a quick coat of paint. Readers will be able to sort several ongoing controversies into that framework.
Political wrangling keeps adding fuel to the can-we-trust-the-literature argument, but it would still be a big issue without it. Consider the headlines that the work of John Ioannidis draws. And there's the attention being paid to the number of retractions, suspicions of commercial bias in the medical literature, the problems of reproducibility of cutting-edge results, and to round it all off, several well-publicized cases of fraud. No, even after you subtract the political ax-grinding, there's a lot of concern left over (as there should be). There are some big medical and public policy decisions to be made based on what the scientific community has been able to figure out, so the first question to ask is whether we've really figured these things out or not.
A couple of recent articles prompted me to think about all this today. The Economist has a good overview of the Duke cancer biomarker scandal, with attention to the broader issues that it raises. And Ben Goldacre has a piece in The Guardian highlighting a paper in Nature Neuroscience, which points out that far too many papers in the field use improper statistics when comparing differences between differences. As everyone should realize, you can have a statistically significant effect under Condition A and, at the same time, no statistically significant effect under Condition B on the same system. But that doesn't necessarily mean that the difference between Condition A and Condition B is itself statistically significant. You need a direct test of that difference (usually the interaction term in an ANOVA) to be able to say so. The submission guidelines for Nature Neuroscience itself make this clear, as do the guidelines for plenty of other journals. But it appears that a huge number of authors go right ahead and draw the statistically invalid comparison anyway, which means that the referees and editors aren't catching it, either. This is not the sort of thing that builds confidence.
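To make the fallacy concrete, here's a minimal simulation sketch (my illustration, not from the Nature Neuroscience paper) in Python with NumPy and SciPy. The sample size and effect sizes are invented so that, on typical runs, Condition A's effect clears p < 0.05 on its own, Condition B's does not, and yet the direct A-versus-B comparison comes nowhere near significance:

```python
# Hypothetical illustration: "A is significant, B is not" does NOT
# imply that A and B differ significantly from each other.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2011)

n = 20
# Invented per-subject effects (treated minus control), unit variance.
effect_a = rng.normal(0.6, 1.0, n)   # Condition A: true mean effect 0.6
effect_b = rng.normal(0.4, 1.0, n)   # Condition B: true mean effect 0.4

# Each condition tested against zero on its own:
_, p_a = stats.ttest_1samp(effect_a, 0.0)
_, p_b = stats.ttest_1samp(effect_b, 0.0)
print(f"A vs. zero: p = {p_a:.3f}")   # typically < 0.05 ("significant")
print(f"B vs. zero: p = {p_b:.3f}")   # typically > 0.05 ("not significant")

# The statistically valid question is whether A's effect differs from B's,
# which needs a direct test of the difference (in a full factorial design,
# the interaction term of an ANOVA):
_, p_ab = stats.ttest_ind(effect_a, effect_b)
print(f"A vs. B directly: p = {p_ab:.3f}")  # typically far from significant
```

That last test is the whole point: reporting "A was significant and B wasn't" tells you nothing about whether A and B actually differ, which is exactly the invalid shortcut the paper found so often in print.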
So the questions about the reliability of the literature are going to continue, with things like this to keep everyone slapping their foreheads. One can hope that we'll end up with better, more reliable publications when all this is over. But will it ever really be over?
Category: The Scientific Literature