About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly; Twitter: Dereklowe


In the Pipeline


April 26, 2013

Research Fraud, From A Master Fraud Artist


Posted by Derek

A couple of years back, I wrote about the egregious research fraud case of Diederik Stapel. Here's an extraordinary follow-up in the New York Times Magazine, which will give you the shivers. Here, try this part out:

In one experiment conducted with undergraduates recruited from his class, Stapel asked subjects to rate their individual attractiveness after they were flashed an image of either an attractive female face or a very unattractive one. The hypothesis was that subjects exposed to the attractive image would — through an automatic comparison — rate themselves as less attractive than subjects exposed to the other image.

The experiment — and others like it — didn’t give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. But he had already spent a lot of time on the research and was convinced his hypothesis was valid. “I said — you know what, I am going to create the data set,” he told me. . .

. . .Doing the analysis, Stapel at first ended up getting a bigger difference between the two conditions than was ideal. He went back and tweaked the numbers again. It took a few hours of trial and error, spread out over a few days, to get the data just right.

He said he felt both terrible and relieved. The results were published in The Journal of Personality and Social Psychology in 2004. “I realized — hey, we can do this,” he told me.

And that's just what he did, for the next several years, leading to scores of publications and presentations on things he had just made up. In light of that Nature editorial statement I mentioned yesterday, this part seems worth thinking on:

. . . The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.

The adjective “sloppy” seems charitable. . .

It may well be. The temptation of spicing up the results is always there, in any branch of science, and it's our responsibility to resist it. That means not only resisting the opportunities to fool others; it means resisting fooling ourselves, too, because who would know better what we'd really like to hear? Reporting only the time that the idea worked, not the other times when it didn't. Finding ways to explain away the data that would invalidate your hypothesis, but giving the shaky stuff in your favor the benefit of the doubt. N-of-1 experiments taken as facts. No, not many people will go as far as Diederik Stapel (or could, even if they wanted to - he was quite talented at fakery). Unfortunately, things go on all the time that might differ from him in degree, but not in kind.
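The "reporting only the time that the idea worked" failure mode is easy to make concrete with a quick simulation (an illustrative sketch of my own, not anything from the post; the function names and numbers are invented): when there is no real effect, a single honest test at roughly p < 0.05 produces a false positive about 5% of the time, but running the experiment five times and reporting any hit pushes that past 20%.

```python
import math
import random

def fake_effect_found(n=30, rng=random):
    """One null 'experiment': two groups drawn from the same N(0, 1)
    distribution, so any apparent effect is pure noise. Returns True if
    a crude two-sample t statistic exceeds 2 (roughly p < 0.05)."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    se = math.sqrt(var_a / n + var_b / n)
    return abs(mean_a - mean_b) / se > 2.0

def false_positive_rate(tries_per_claim, trials=2000, seed=1):
    """Fraction of 'claims' that look significant when each claim is
    allowed `tries_per_claim` repeats and any single hit is reported."""
    rng = random.Random(seed)
    hits = sum(
        any(fake_effect_found(rng=rng) for _ in range(tries_per_claim))
        for _ in range(trials)
    )
    return hits / trials

honest = false_positive_rate(tries_per_claim=1)
selective = false_positive_rate(tries_per_claim=5)
print(f"report every run:       {honest:.1%} false positives")
print(f"report only the 'hit':  {selective:.1%} false positives")
```

Nothing in the code cheats on any single run; the inflation comes entirely from which runs get reported, which is exactly why this habit is harder to spot than outright fabrication.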

Comments (27) + TrackBacks (0) | Category: The Dark Side | The Scientific Literature


1. Bob P. in Fort Walton Beach, FL on April 26, 2013 12:02 PM writes...

If there were just one idea from the sciences that I would like to see taught over and over, just one that I wish everyone would understand and use, it's the idea that you can't just disregard something that doesn't fit your hypothesis. And come to think of it, that doesn't just apply to running experiments and analyzing data.


2. NMH on April 26, 2013 12:59 PM writes...

"Reporting only the time that the idea worked, not the other times when it didn't." Very common in scientific publishing, at least in my field (Biochem/Mol Bio), as you can really only publish positive results, and the presentation of any negative or discordant results gives reviewers something to pick your work apart with.

"Finding ways to explain away the data that would invalidate your hypothesis, but giving the shaky stuff in your favor the benefit of the doubt."
Also very common in my field to present ad-hoc hypotheses and what my advisor called "hand-waving" to find some mechanistic rationale for what you are seeing.

Standards for getting published in my field are super high, because reviewers think that ALL of the data presented must fit a hypothesis. But it's pretty unusual for all of the data collected to actually fit, because of variability in the model systems you use to test the hypothesis (for example, differences between cell lines, differences between cell lines and primary cells, etc.).


3. Pete on April 26, 2013 2:06 PM writes...

Data-analytic problems can arise in Drug Discovery when the analysts have what might be termed a competing interest in the analysis arriving at particular conclusions. For example, scientists at a company might perform analysis in which they want to show that their company's technology for finding leads is superior to competitor technology. If they can't find what they're looking for, they may keep trying something else. When assessing data analysis it is often instructive to consider how the analysts would have lost or gained according to the different conclusions at which the analysis might have arrived.


4. Curious Wavefunction on April 26, 2013 2:11 PM writes...

Pete: As you probably know this can also be true of software companies (and it's not even limited to drug discovery). Every company naturally wants to demonstrate the superiority of its software and this often affects the choice and structure of their training sets: basically you keep trying different training sets until you find one for which you get a good correlation. Then you advertise your software as being trained on a "diverse" training set.
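The training-set cherry-picking described here can be demonstrated in a few lines (a hypothetical sketch in plain Python, not tied to any real software; all names and numbers are invented): when 'predicted' and 'measured' values are pure uncorrelated noise, searching enough small "training sets" will always turn up one with an impressive correlation, which then evaporates on the held-out compounds.

```python
import random

def correlation(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)

# 200 'compounds' whose predicted scores and measured values are
# independent noise: the imaginary software has no real signal.
pred = [rng.gauss(0, 1) for _ in range(200)]
meas = [rng.gauss(0, 1) for _ in range(200)]

# The marketing move: keep drawing 25-compound "training sets" until
# one happens to show a strong correlation, then advertise that one.
best_r, best_idx = 0.0, None
for _ in range(3000):
    idx = rng.sample(range(200), 25)
    r = correlation([pred[i] for i in idx], [meas[i] for i in idx])
    if abs(r) > abs(best_r):
        best_r, best_idx = r, idx

chosen = set(best_idx)
rest = [i for i in range(200) if i not in chosen]
r_rest = correlation([pred[i] for i in rest], [meas[i] for i in rest])
print(f"cherry-picked 'training set': r = {best_r:+.2f}")
print(f"all other compounds:          r = {r_rest:+.2f}")
```

The "diverse training set" in the advertisement and the garden-variety scatter plot everyone else sees are the same underlying data; only the selection differs.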


5. Pete on April 26, 2013 2:27 PM writes...

Ash, like they say in the winter flu season in the UK, "a lot of it about".


6. emjeff on April 26, 2013 2:44 PM writes...

Although the soft sciences are probably more amenable to this kind of thing, plenty of it goes on. More dangerous than outright fraud, though, is the behavior alluded to in #2. I see this all the time - people trying to ignore one piece of data because, by doing that, they get the results they want. This goes on a lot.

What I wish we would train young scientists to do is to realize that, being human, they can't be truly objective, and therefore they must learn to manage their biases. This seems a lot more realistic than telling someone: "Be objective and disinterested."

As for the outright frauds, I am afraid there is not much to be done with them. There are always going to be crooks...


7. ptm on April 26, 2013 2:57 PM writes...

Reading psychology articles led me to believe that level of scientific rigor was pretty standard there.


8. NAP on April 26, 2013 4:05 PM writes...

In grad school my entire thesis was pretty much a study in non-significant results, and as a result very little was/is publishable because most of it was negative results. So there are 5 years of quality work that will never be seen by others in my field. Who knows how many times my work may have been repeated and how many resources wasted.

I also have only 1 publication, which was based on an appendix, which doesn't give employers lots of confidence when looking for jobs. Maybe I should have made results up.


9. NMH on April 26, 2013 4:26 PM writes...

One good thing about hiring someone with a not-so-stellar publication record: at least you know he is probably not a cheater.


10. Nap on April 26, 2013 7:25 PM writes...

Yeah, that was the running joke about me in lab. You knew I was honest because only an idiot would falsify my results.


11. Gordon Walker on April 27, 2013 4:24 AM writes...

"The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.”"
I believe it started with Freud.


12. dearieme on April 27, 2013 6:42 AM writes...

Oh well, he should find a job pretty easily in Climate Science.


13. johnnyboy on April 27, 2013 7:10 AM writes...

Good science is hard, for many reasons, but a big one is that it requires ironclad internal ethics, unlike other fields (business, say). And a researcher's ethical sense can often be at odds with his financial/career interest. Humans being humans, I'd say cases that come to light like the above are the tiny tip of a very large iceberg.
And as far as the snarkiness in the comments about psychology and 'soft science' goes, I believe chemists have plenty of similar abuses to pick from in their own field.


14. bank on April 27, 2013 1:16 PM writes...

@ NAP,

Your thesis normally counts as a publication. Many university libraries are now allowing theses to be indexed by Google Scholar. You should see if that can be done for your thesis, potentially saving someone else from repeating your fate.


15. Scarodactyl on April 27, 2013 3:13 PM writes...

"The adjective 'sloppy' seems charitable..."
As perhaps does the noun 'science.'


16. Anonymous on April 28, 2013 3:40 PM writes...

Maybe this explains why I don't seem to be getting too much out of therapy........but at least my psychologist is honest and, quite frankly, really attractive. I'd rather spend an hour talking to her than about 95+% of the medicinal chemists I've worked with over the past several decades.


17. Anonymous on April 28, 2013 8:31 PM writes...

From the Sunday Times Magazine article which led to this post of Derek's:

"The professor, who had been hired recently, began attending Stapel's lab meetings. He was struck by how great the data looked, no matter the experiment. "I don't know that I ever saw that a study failed, which is highly unusual," he told me. "Even the best people, in my experience, have studies that fail constantly. Usually, half don't work."

Exactly ! ! ! !

This is from a post about how infants learn language a few years ago.

"They used functional MRI (fMRI) to pick this up. But beware. I have problems with all fMRI work because the authors invariably seem to find exactly what they expected, and the raw data is never given — science just doesn’t work like that. This type of work is often NOT reproducible. In fact, some of it has been called pseudocolor phrenology."

So beware of pretty pictures showing parts of the brain lighting up on fMRI in response to some particular stimulus, or cognitive task.


18. Sam Adams The Dog on April 28, 2013 11:03 PM writes...

@11 Actually, it started with Mendel.


19. Lighter Fluid on April 29, 2013 3:21 AM writes...

@6 emjeff - I wholeheartedly agree with your sentiment - the best we can do is manage our biases and try not to create perverse incentives. Sadly, that's easier said than done.

I only came across biases when getting into investing and behavioral finance, and found a lot of what I learned there was very relevant to doing research science. Now, I see confirmation bias everywhere I look! (sorry).

A question for the world: is this issue of 'sloppy science' and outright fraud becoming a bigger problem now (due to increased competition for funding, the structure of the publishing industry, and/or the increasing complexity of the problems being investigated), or are we only just becoming more aware of it due to better ethical policies and practices (i.e., fewer cover-ups)?

I'm hopeful that the latter explanation holds, but I have a deep suspicion that what we are seeing in the sciences is merely a reflection of the state of the society in which we live - it's all about 'playing the game': your pay packet determines your moral code, and scruples and collective responsibilities only hold you back.


20. MTK on April 29, 2013 6:30 AM writes...

It's human nature to believe what you want to believe, so fighting that tendency is what makes a good scientist as opposed to an average one. It's not easy. I'm not going to beat people up hard for non-ill-intentioned bias. Fraud is another matter entirely.


21. Anonymous BMS Researcher on April 29, 2013 6:58 AM writes...

The hard reality of science is that nature does not care what I think. The correlation between my own beliefs about some hypothesis and the likelihood of that hypothesis being correct is weak at best. I would think a lot of bad science (by which I mean it's somewhere on the spectrum from hasty conclusions to out-and-out fraud) is fundamentally self-deception, because somebody has an intense belief that some hypothesis MUST be true.


22. Doug Steinman on April 29, 2013 7:51 AM writes...

Sometimes in science you run into a situation where you end up banging up against a wall because you don't get the data you desire. At that point, you have a choice to either admit that your hypothesis was wrong and go on to something else or to falsify data. Those of us who have the correct level of integrity drop our tail between our legs, take away anything useful that we learned and go on to something else. Science without integrity is bad science and it doesn't matter much if it is "soft" science.


23. Bear on April 29, 2013 8:30 AM writes...

ptm on April 26, 2013 2:57 PM writes "Reading psychology articles led me to believe that level of scientific rigor was pretty standard there."

Yep. I have a friend with a doctorate in research psychology, who worked at a large midwestern university. That person finally dropped out of the field completely in sheer disgust at what constantly passed for 'science', opting to make high quality custom candy (which actually requires careful measurement and technique).


24. Anonymous on April 29, 2013 8:31 AM writes...

I think this is one area where science as practiced in industry is different from academic science.
Freed from the pressure to 'publish or perish', scientists in industry usually have no hesitation to stand up at team meetings and present negative data, or inconclusive data -- I see it happening all the time in my job.
Industry scientists know that their results will usually be acted upon by other scientists in the department to advance a specific project, so negative results are just as important as positive ones. And they have to face those colleagues every day at work, so they know that they cannot keep trying to fool them!
For an academic scientist, often the project ends with the publication of a paper on the "correlation of facial expressions to fMRI images" or whatever, and no one else in the world will ever do a follow-up experiment based on those results.


25. Anon on April 29, 2013 9:06 AM writes...

Data integrity is a very broad sword. In my career in drug discovery I have seen and been impacted by lapses in data integrity.

I've seen cherry-picked data presented showing very nice, almost straight-line fits, when they made the mistake of using my data set as the source. I had done the same analysis myself, and with all the data it's a scatter plot. This "edited" data could have led a lead-optimization program down the garden path.

Even what may seem like tiny lapses in data integrity have had huge impacts. For example, bumped-up yields and purities have resulted in, shall we say, scale-ups falling short.


26. Lyle Langley on April 29, 2013 9:14 AM writes...

My favorite line of the article:

...Zeelenberg said, "Then I'll find out if he and I are capable of having a friendship. I miss him, but there are equal amounts of instances when I want to punch him in the face."


27. Nap on April 29, 2013 7:13 PM writes...

It's all just a giant cat-and-mouse game between the people billing (drugs and providers) and the people paying (insurers and HHS). They charge 10k for a treatment and they get paid 2k, and in the end both are happy with it. The ones really getting screwed are the uninsured, who have to pay 9.5k after the "generous" self-pay discount.


