Corante

About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline


September 12, 2011

The Scientific Literature Gets Kicked Around


Posted by Derek

It seems that the credibility of the scientific literature has been taking a beating recently. This has come about for several reasons, and through several different motivations. I'll get one of the most important out of the way first - politics. While this has been a problem for a long time, there's been a really regrettable tendency in US politics the last few years, a split across broadly left/right lines. Cultural and policy disagreements have led to many on the left claiming the Dispassionate Endorsement of Settled Science, while others on the right end up complaining that it's nothing of the sort, just political biases given a quick coat of paint. Readers will be able to sort several ongoing controversies into that framework.

Political wrangling keeps adding fuel to the can-we-trust-the-literature argument, but it would still be a big issue without it. Consider the headlines that the work of John Ioannidis draws. And there's the attention being paid to the number of retractions, suspicions of commercial bias in the medical literature, the problems of reproducibility of cutting-edge results, and to round it all off, several well-publicized cases of fraud. No, even after you subtract the political ax-grinding, there's a lot of concern left over (as there should be). There are some big medical and public policy decisions to be made based on what the scientific community has been able to figure out, so the first question to ask is whether we've really figured these things out or not.

A couple of recent articles prompted me to think about all this today. The Economist has a good overview of the Duke cancer biomarker scandal, with attention to the broader issues that it raises. And Ben Goldacre has this piece in The Guardian, highlighting this paper in Nature Neuroscience. It points out that far too many papers in the field are using improper statistics when comparing differences-between-differences. As everyone should realize, you can have a statistically significant effect under Condition A, and at the same time a lack of a statistically significant effect under Condition B on the same system. But that doesn't necessarily mean that the difference between using Condition A versus Condition B is statistically significant. You need to go further (usually ANOVA) to be able to say that. The submission guidelines for Nature Neuroscience itself make this clear, as do the guidelines for plenty of other journals. But it appears that a huge number of authors go right ahead and draw the statistically invalid comparison anyway, which means that the referees and editors aren't catching it, either. This is not the sort of thing that builds confidence.
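The fallacy here can be made concrete with a toy simulation. The sketch below is not from the post or the Nature Neuroscience paper; all the numbers are invented for illustration, Welch's t is used as the per-condition test, and the between-conditions check is a simple pooled-standard-error z statistic rather than a full ANOVA. A drug effect clears a significance threshold under Condition A, fails under a noisier Condition B, and yet the A-versus-B difference itself is not significant:

```python
import math
from statistics import mean, stdev

def welch_t(x, y):
    """Welch's two-sample t statistic and the standard error of the
    mean difference (unequal-variance form)."""
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    se = math.sqrt(vx / len(x) + vy / len(y))
    return (mean(x) - mean(y)) / se, se

# Condition A: clean assay, clear drug effect (made-up numbers)
ctrl_a = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2]
drug_a = [12.0, 11.7, 12.4, 11.9, 12.1, 12.2]

# Condition B: same drug, much noisier assay (also made up)
ctrl_b = [10.0, 9.5, 10.8, 9.2, 10.4, 10.6]
drug_b = [11.9, 9.5, 13.0, 9.7, 12.2, 9.9]

t_a, se_a = welch_t(drug_a, ctrl_a)   # large t: "significant"
t_b, se_b = welch_t(drug_b, ctrl_b)   # small t: "not significant"

# The invalid move is to conclude A differs from B because one t
# cleared a threshold and the other didn't. The valid test compares
# the two effect sizes directly, pooling both standard errors:
diff = (mean(drug_a) - mean(ctrl_a)) - (mean(drug_b) - mean(ctrl_b))
z_int = diff / math.sqrt(se_a ** 2 + se_b ** 2)

print(f"t_A = {t_a:.1f}, t_B = {t_b:.1f}, interaction z = {z_int:.1f}")
```

With these numbers the Condition A effect is overwhelming, the Condition B effect falls short of any conventional cutoff, and the interaction statistic sits well under 2: the data do not license the claim that the drug works differently in A than in B, which is exactly the conclusion the flagged papers were drawing.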

So the questions about the reliability of the literature are going to continue, with things like this to keep everyone slapping their foreheads. One can hope that we'll end up with better, more reliable publications when all this is over. But will it ever really be over?

Comments (10) + TrackBacks (0) | Category: The Scientific Literature


COMMENTS

1. anon on September 12, 2011 5:39 AM writes...

WRT politics, for the democrat/NYT war on science you just need to consider vaccines. Or the health supplement of the day being fawned over on 60 minutes. All politicians (and their journalist lackeys) are self-serving liars, so there is no reason to hold solely one party up as the paragon of the-evilness-which-bothers-me-personally-today.

Permalink to Comment

2. Gregor on September 12, 2011 7:41 AM writes...

Science without integrity is worthless. No amount of credentials is a substitute for integrity. That is the heart of the matter.

Permalink to Comment

3. simpl on September 12, 2011 9:55 AM writes...

The statistics misinterpretation is about depth of understanding, not morals. Don't you remember Stephen Jay Gould's lifetime of preaching about an anthropocentric interpretation of genetics? Haven't you had cases where correlations were confused with causes? If you think the questions through, an isolated speculation can be mentioned in the discussion as something for next time. It can also improve your team's experimental design skills.

Permalink to Comment

4. luysii on September 12, 2011 11:03 AM writes...

#3 Simpl -- It hasn't been resolved as yet, but Stephen Jay Gould may turn out to have feet of clay along these exact lines. For details see http://luysii.wordpress.com/2011/06/26/hoisting-steven-j-gould-by-his-own-petard/

Permalink to Comment

5. johnnyboy on September 12, 2011 12:44 PM writes...

@3: I don't think a question of dishonesty was raised for this particular issue (if Goldacre does, he's going a bit too far on his crusade). But Derek is right that this should be caught by reviewers, especially if it's in the journal's guidelines for authors - unfortunately I doubt that reviewers in general spend a lot of time checking statistical methodology. From what I've seen, the overall feeling of biology/medical researchers when it comes to statistical testing lies somewhere within the spectrum between annoyance and horror.

Permalink to Comment

6. anchor on September 12, 2011 1:53 PM writes...

#5: I believe that for reviewers, too, the "misinterpretation is about depth of understanding." I suppose #3 put it nicely!

Permalink to Comment

7. Matt D on September 12, 2011 2:00 PM writes...

#1: False equivalence. Climate change denial and belief in creationism are becoming litmus tests for serious candidacy in the Republican party. I don't see many Democratic politicians proudly proclaiming an anti-vaccination stance as a prerequisite for election.

Permalink to Comment

8. ChemBob on September 12, 2011 2:02 PM writes...

I recently submitted a paper that was rejected for the following reasons (in addition to some legitimate criticisms, for which further experiments or additional research were conducted to clarify and mitigate the concerns):
1. Experiments that a reviewer said were inconclusive, but that we had never actually run. In this case I agree - the results of such experiments would have been worthless, so they were not even tried and were not described.
2. Using compounds that had no connection with the effects being studied, i.e. the reviewer claims that we used compounds that I have never touched except to do chemical inventory, and that were not mentioned in the actual manuscript.
3. Insufficient reading of the experimental section, in my opinion, as comments were made indicating a lack of understanding of how the system was actually synthesized.

Look, I know personal bias means that we understand our own work better than anyone else, and that things which seem obvious to those of us in the field are not apparent to people outside it. I admit the paper wasn't and isn't the newest thing or the highest-impact study to come out in this field, but how can we judge if we don't read these things thoroughly? Is this where scientific publishing is going? I'd like to think not - I still have a long career ahead of me.

Permalink to Comment

9. MIMD on September 12, 2011 9:56 PM writes...

Meta-problem with "contaminated" literature, e.g., by ghostwriting. How do we know what's "real"?

See here.

Permalink to Comment

10. fred on September 14, 2011 5:57 PM writes...

There is nothing recent about these problems. What is recent is that people are beginning to talk about it. I once had a paper rejected because a reviewer said, among other things, that I had used a flawed SAS module for the statistical analysis. The reviewer said that the signs always came out wrong in SAS.

The fact that I could explain why the reviewer was incorrect about the SAS results didn't seem to matter.

Permalink to Comment


RELATED ENTRIES
Gitcher SF5 Groups Right Here
Changing A Broken Science System
One and Done
The Latest Protein-Protein Compounds
Professor Fukuyama's Solvent Peaks
Novartis Gets Out of RNAi
Total Synthesis in Flow
Sweet Reason Lands On Its Face