About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter (Dereklowe).


In the Pipeline

April 12, 2011

Scientific Fraud: How Often and How Much?

Posted by Derek

A new paper in PLoS ONE goes over the existing studies that have tried to put a number on how many scientists falsify data (or have done so at least once) or commit other scientific offenses (ranging from the quite grave to the pretty questionable).

For what it's worth, the meta-analysis comes out with a figure of about 2% of scientists admitting that they've fabricated, falsified, or modified data. Of course, that group itself is a wide one, and deserves to be broken into various levels (which is just what Dante ended up doing, come to think of it, for similar reasons). To my mind, people who are modifying data want to make the numbers look better than they are, and people who are falsifying data want to make the numbers just flat-out say things that they don't say. And the far end of that process is fabrication, where you give up on tweaking and bending and processing, and just make the stuff up. As the PLoS paper says, you slide along from what could be explained as carelessness all the way to what can only be described as blatant fraud.
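
For a sense of where a pooled figure like that comes from, here is a minimal sketch of combining self-admission rates from several surveys into a single estimate with a confidence interval. The survey counts are invented for illustration - they are not the data from the PLoS ONE paper, and the paper's own analysis weights the individual studies more carefully than this simple lumping-together:

```python
import math

# (number admitting misconduct, total respondents) for a handful of
# hypothetical self-report surveys -- made-up numbers, for illustration only
surveys = [(4, 230), (7, 480), (2, 160), (11, 620), (3, 190)]

admitted = sum(a for a, _ in surveys)
total = sum(n for _, n in surveys)
p = admitted / total  # naive pooled proportion of "admitters"

# Wilson score interval at 95% confidence for the pooled proportion
z = 1.96
denom = 1 + z**2 / total
centre = (p + z**2 / (2 * total)) / denom
half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))

print(f"pooled admission rate: {p:.2%} "
      f"(95% CI {centre - half:.2%} to {centre + half:.2%})")
```

Even with a couple of thousand pooled respondents, the interval spans the better part of a percentage point, and since these are self-reports, the whole exercise is best read as a lower bound - which is the paper's point.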

There are, of course, a lot of difficulties in getting good numbers on this sort of thing, and the whole purpose of this meta-analysis was to try to set a lower bound. There are limits to what people will admit, and limits in how objectively they see their own behavior:

The grey area between licit, questionable, and fraudulent practices is fertile ground for the “Mohammed Ali effect”, in which people perceive themselves as more honest than their peers. This effect was empirically proven in academic economists and in a large sample of biomedical researchers (in a survey assessing their adherence to Mertonian norms), and may help to explain the lower frequency with which misconduct is admitted in self-reports: researchers might be overindulgent with their behaviour and overzealous in judging their colleagues. In support of this, one study found that 24% of cases observed by respondents did not meet the US federal definition of research misconduct.

There's another interesting possibility raised:

Once methodological differences were controlled for, cross-study comparisons indicated that samples drawn exclusively from medical (including clinical and pharmacological) research reported misconduct more frequently than respondents in other fields or in mixed samples. To the author's knowledge, this is the first cross-disciplinary evidence of this kind, and it suggests that misconduct in clinical, pharmacological and medical research is more widespread than in other fields.

He goes on to speculate about whether this is due to financial pressures, or to different levels of self-awareness or self-reporting. And this brings up another reaction I had to the whole paper, for which I'll have to go back to its introduction:

The image of scientists as objective seekers of truth is periodically jeopardized by the discovery of a major scientific fraud. . .A popular view propagated by the media and by many scientists sees fraudsters as just a “few bad apples”. This pristine image of science is based on the theory that the scientific community is guided by norms including disinterestedness and organized scepticism, which are incompatible with misconduct. Increasing evidence, however, suggests that known frauds are just the “tip of the iceberg”, and that many cases are never discovered. The debate, therefore, has moved on to defining the forms, causes and frequency of scientific misconduct.

I wonder about some of that. Is the image of science really as pristine as all that, at this date? And does the media really help to propagate such a view? I think that the real world is quite a bit messier. I would guess that you'd have to go back to the 1950s (or perhaps before the Second World War) to find a solid majority of people thinking that scientists were pretty much all pristine truth-seekers, and perhaps not even then. And as for media depictions of scientists, those have been mixed for a long time now.

I think that you'll definitely find more objective truth-seeking in the physical sciences than you'll find in most other human endeavors, but science is done by humans with all the failings that humans come equipped with (and it's quite a list). One should always be open to some possibility of misconduct in any field and any situation; lying is one of the things that people do. That's not to condone it, of course - but being shocked by it doesn't seem to be too useful, either.

Category: The Dark Side


COMMENTS

1. MTK on April 12, 2011 11:42 AM writes...

Derek,

I agree with your scepticism about the premise cited in the introduction.

The scientist as a disinterested, unbiased seeker of truth is not only a myth; the idea that this is how scientists are perceived today is also a myth.

One of the most disturbing aspects of modern society is the distrust of science and scientists. Whether it be the anti-vaccine crowd, the cancer-cure conspiracists, or the alternative medicine and herbalism pushers, they're all believed because the public doesn't trust science or scientists.

Reports of fraud and misconduct, therefore, do not somehow shatter the image of scientists, but rather reinforce the public's image.

And that makes sense. If the "Mohammed Ali effect" is genuine then why should the general public think scientists are more honest than themselves?

2. Fishy Fish on April 12, 2011 11:56 AM writes...

It is about pressure and getting ahead, be it for a graduate student, post-doc or faculty member.
In an earlier post, Derek talked about how many publications can be duplicated. I guess that some frauds are committed because the fraudsters believe that nobody will try to repeat their work and that they can get away with it.
One example I personally witnessed: some of you might remember a case in Ron Breslow's lab at Columbia back in the late 80's - a catalytic chlorination of a steroid. I was in the department at that time and knew the grad student who did it. It wasn't found out until a chemical company thinking about licensing the technology couldn't repeat it in their own lab. I'm not sure the fraud would ever have been detected if the licensing deal hadn't been on the table.

3. anon the II on April 12, 2011 11:58 AM writes...

The real fraud is these "researchers" looking into fraud and calling it research. I hope my tax dollars aren't paying for this. They don't have a clue about error analysis. They have an N of 7 and quote a percentage to 3 places. Please!
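
To illustrate the error-analysis objection with invented numbers (the seven rates below are not the actual studies in the meta-analysis), a few lines of Python show how much precision a sample of seven estimates really supports:

```python
import statistics

# Seven hypothetical study-level admission rates, in percent. These are
# made-up values for illustration; they are not the studies in the paper.
rates = [0.8, 1.4, 1.9, 2.2, 2.7, 3.4, 4.1]

mean = statistics.mean(rates)
sem = statistics.stdev(rates) / len(rates) ** 0.5  # standard error of the mean

print(f"mean = {mean:.2f}%  +/- {sem:.2f}%  (s.e.m., n = {len(rates)})")
# With an uncertainty of several tenths of a percent across studies, quoting
# a pooled rate to three significant figures overstates what a sample of
# seven can support.
```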

4. Curious Wavefunction on April 12, 2011 11:58 AM writes...

As Caltech physicist David Goodstein says in his new book "On Fact and Fraud", in some sense all of us scientists commit fraud all the time. We present our best results as "typical" ones and write sanitized papers that make what is a pretty haphazard and illogical process full of insights from hindsight appear as a pristinely logical one. We are all hypocrites in one sense. But that's how science is.

5. partial agonist on April 12, 2011 12:07 PM writes...

I think that there is more fraud in areas where the variability is large enough that there is a chance it won't be easily noticed. If you fake an important chemical reaction, it will definitely get re-run and you will be exposed. You may fudge the yield a little and get away with it, but that's about it.

But... if you have a new analgesic in a bottle and you are a pharmacologist and you really want it to work - you even fully EXPECT it to work - it's pretty darn easy to bias your data, even unintentionally. "Oh, that mouse didn't respond -- well, come to think of it, the catheter doesn't look quite right, so I'll ignore that animal."

6. Rick on April 12, 2011 12:24 PM writes...

Maybe I'm being overly semantic, but I see a blurring of the distinction between "science" and "scientists", as in saying "I don't believe what science says about..." when what's really meant is "I don't believe what scientist X says about..." All humans have flaws. Scientists are human. Ergo, scientists have flaws. On the other hand, science, through the Scientific Method, has a powerful tool to rise above the failings of its practitioners, which is more than I can say about religion or economics, which actually do stipulate that people are or can be flawless.

7. paul on April 12, 2011 12:34 PM writes...

@4

"But that's how science is."

It is exactly that shoulder-shrugging attitude that propagates the hypocrisy we are all supposedly guilty of.

8. milkshake on April 12, 2011 12:40 PM writes...

As you know, the ideal of impartial, critical examination of data and of scientific integrity is quite at odds with advertisement and other methods of salesmanship. I have observed that when research results are hyped up for the benefit of investors, there is a growing element of wishful thinking that can gradually turn the research quality into garbage, while the lunacy fever spreads throughout management.

In the early 90s I once knew a biology director who commanded tremendous resources at our company and built up his department and his projects to the detriment of everyone else, by claiming that his high-throughput screening methodology was giving great data. The management was all too happy to share the good news with the investors: you see, we were a small company, so it was wonderful to present slick slides saying that we had this powerful core technology. Unfortunately our poor guy never really addressed the high-background problem of his fluorescence-based assays, and he kept presenting increasingly blatant hype while feverishly trying to get the bugs out, which he never managed. The game was up only two years later, when someone finally ratted him out (the data was being improved by hand) and it had become impossible to ignore that every single result coming from that group could not be verified by independent assays - the combined output of a large number of people turned out to be 100% false-positive screening hits.

9. xyz on April 12, 2011 12:47 PM writes...

To the ones who make their yields, dr's, ee's, and whatsoevers look better...

On the Practical Limits of Determining Isolated Product Yields and Ratios of Stereoisomers: Reflections, Analysis, and Redemption

Martina Wernerova, Tomas Hudlicky*
https://www.thieme-connect.com/ejournals/abstract/synlett/doi/10.1055/s-0030-1259018

11. Pete on April 12, 2011 1:20 PM writes...

Some years ago I investigated whether using a polarisation descriptor with electrostatic potential led to significantly better predictions of hydrogen bond basicity than using electrostatic potential by itself (it did not). However, the short manuscript that I prepared to share this result was not well received. One of the reviewers (yes, one of the reviewers!) stated, "This journal does not publish negative results". As I wrote to the editor, "I have asked a relevant question honestly and can only report what I have found" and the work, submitted in 1996, remains unpublished. Could it be that the journals share some of the blame when fraud is committed?

12. Hap on April 12, 2011 1:27 PM writes...

#4: Aren't models themselves simplifications of a set of messy facts? We expect that a model won't be perfect, but that it will summarize as best it can what exists and an explanation for it. It's imperfect, but a set of data that can't be easily apprehended isn't of much use to anyone, at least to guide what they might do. ("All models are wrong, but some models are useful.") Papers should do something similar, and the expectation that they will contain the wholeness and messiness of a research project seems contradictory to what people read them for.

7: Anything predicated on the expectation that people will be angels doesn't usually work out so well. Better to understand that people will do wrong and try to know when they've done it, to set up mechanisms to limit its attractiveness, and then nail people (proportionately) when you catch them at it. People won't do anything if they have to do it perfectly, and even then they'll still do wrong.

13. Virgil on April 12, 2011 2:01 PM writes...

I think in large part the blame for this has to come down to the lack of a uniform system to address fraud when it is found. Two examples from my own work...

1) Reviewed a paper. Obvious fraud, beyond any reasonable doubt. Reported it to editors. Editors said "we'll look into it". 3 months later paper is published, as is, with fraudulent data intact. E-mail to me on the same day said "we looked into it and didn't find any evidence for fraud". Journal now refuses to print my letter-to-the-editor calling the data into question.

2) Reviewed a grant for a non-government agency. Grant contained fabricated data. Reported it. Agency said "we have to get a unanimous vote by the whole grant review panel in order to move forward with an ethical investigation". If one person on the panel votes no, it can't be deemed fraud. Never going to happen with a panel of 25 people, at least one of whom might be friends with the grant author. All materials were sent by me to the NIH office of research integrity. Now awaiting a response from them.

So, the problem is, even when this stuff is discovered, and even when people like me are willing to go out on a limb and blow the whistle, the process for actually investigating and dealing with these things in a fair and balanced way just does not exist. One takes a serious risk by reporting fraud (especially if the perpetrator is a big lab in your field), and a lot of fraud goes unreported because people are too scared to jeopardize their own careers by reporting it.

14. Iroquis on April 12, 2011 2:13 PM writes...

I think my own perspective on the subject of the 'scientist' is from a rather more naive state than the majority of others here. I'm in my final year of an undergrad degree and I've only just recently become aware of fraud in science (In the Pipeline has played a part here). That's right: before this I didn't even know science was tainted by fraud - looking back, I think I was being amazingly naive. Regardless, prior to coming to uni and reading up on this, I held a view of scientists as infallible beings. And I'm inclined to think that's how many young people see it. I think the way in which prolific scientists are venerated has influence here. I also think it's got a lot to do with free thinking and a persona which tends to question. In summary, I think teaching people to question more would solve the inappropriate exaltation of scientists (among other things). Indeed, questioning is part of scientific discourse, and Santiago Cajal, in his book 'Advice for a Young Investigator', teaches that we need to throw away the veneration of other scientists before we can begin our own studies.

15. gyges on April 12, 2011 2:31 PM writes...

A particularly obnoxious piece of fraud/sabotage would be to spike someone's reaction mixture with the desired compound.

Imagine a chemist trying to get a reaction to work by repeating reactions with different reagents under different conditions. Now, imagine if one of these reaction mixtures was spiked with the desired product. The chemist would work it up and unwittingly report a fraudulent result which he could not repeat.

Painful to think about isn't it?

(Note: nobody does a mass balance on a mg scale, and any side products or unused reaction mixture would be separated out in the work-up.)

17. billswift on April 12, 2011 3:22 PM writes...

Steven Milloy made several of those points a decade ago in Junk Science Judo.

Last January in a discussion on social "science", I commented:

I think it is more: Complication allows the researchers' biases to slip in more easily, since among other things any sort of cross-check is nearly impossible, which leads to softer results, especially when being evaluated by someone with different biases.

And yet another factor is that, like police who are framing a suspect, fraud starts with someone who just knows he is right, and starts tweaking the data to show it. Whether he goes past tweaking largely depends on how self-aware he is - whether he notices what he is doing and stops it. (Also, weakly, on whether he actually was right, and later experiments supported it without tweaking - it does happen.)

18. dearieme on April 12, 2011 3:39 PM writes...

The first scientific fraud I came across was finally got rid of - but since this was in an Ancient University, it was classified as ill-health retirement. Mind you, it wasn't the fraud that cost him his job, but the theft and blackmail.

19. Fishy Fish on April 12, 2011 4:10 PM writes...

@13
If an offender is a grad student or postdoc in an academic setting, he/she would probably be given every chance to prove his/her innocence after a fraud is alleged. In the case I mentioned earlier (posting #2), that student was given the opportunity to duplicate the finding at Columbia or at any other lab in the country. But the student chose to leave the graduate program instead.

20. Hands Are Tied on April 12, 2011 4:17 PM writes...

@ #13 Virgil. I too have found fabricated data. The authors were contacted, and as the corresponding author could no longer critically analyze the data set, he/she had to rely on the word of the lead author (who in this case had lied). Since I've already contacted the corresponding author (who believes the false testimony of the lead author), and since I cannot further jeopardize my own future, I feel that I cannot proceed any further with this. This is a shame, because the lead author will be awarded a very unmerited PhD.

Based on your experience with reporting fraud, is there some sort of independent, non-journal-affiliated (maybe governmental) agency or website that receives anonymous peer reviews and would then move forward with valid submitted arguments without putting the whistle-blower at risk? Thanks.

21. Mark on April 12, 2011 4:49 PM writes...

As for the question of whether scientists are still regarded as "disinterested seekers of the truth", I'd say they are. I remember a poll that Pfizer sent around a few years back in which the public ranked scientists as the "most highly regarded profession".

Mark

22. MIMD on April 12, 2011 10:22 PM writes...

I'll never forget the Yale professor who asked me to "modify" my experimental data on a new prototype for information retrieval to "come closer" to his own older and very different prototype data retrieval engine.

He even drew Venn diagrams showing me that my results and his, which did not intersect (mine were far superior), needed to be "closer."

He was still getting funding for his own work.

I was not one of those 2%; I simply refused, but unfortunately ended up not publishing the work.

23. Confoundman on April 12, 2011 11:15 PM writes...

I would be interested in the social/psychological reasons. Career-wise, many people stress out trying to publish, not so they can gain something (financially), but just so that they don't lose what they have worked their entire lives to get. As PhDs are overproduced, you will have overworked people trying to put out more exciting results, namely postdocs and grad students. Without their papers, their careers as research scientists are over.

24. not bob on April 12, 2011 11:56 PM writes...

#9 - I was not really convinced by the Hudlicky paper. Take for example the claim that chromatography (with fraction collection) results in only 98.5% recovery. This value, it seems to me, would be extremely dependent on the specific technique of the individual chemist involved. Maybe if the chemist involved simply rinsed their fraction tubes one more time the recovery would have been 99.5%. Without knowing precisely what techniques were used for material transfer, I cannot judge this result.

Let's say, though, that this value is accurate and reproducible between multiple chemists in multiple settings. Where did that 1.5% mass go? It didn't vanish into the proverbial ether. If it was not lost in transfers (which should be avoidable with better technique), it presumably was permanently adsorbed onto the column (possibly with degradation). That process, it seems to me, is going to be highly substrate dependent. Perhaps then some substrates can be chromatographed in 99.5% yield. On the basis of this paper, we'll never know...
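
As a rough illustration of how such handling losses compound (the per-step recoveries below are invented, not measurements from the Hudlicky paper):

```python
# Hypothetical per-step recoveries for a small-scale chromatographic
# purification; every value here is made up for illustration. The point is
# only that several "99-and-change percent" handling steps multiply out,
# so the overall recovery depends strongly on the individual chemist's
# technique.
steps = {
    "load onto column":         0.997,
    "collect fractions":        0.995,
    "rinse fraction tubes":     0.996,
    "concentrate and transfer": 0.995,
}

overall = 1.0
for step, recovery in steps.items():
    overall *= recovery
    print(f"{step:<26s} step {recovery:.1%}   cumulative {overall:.1%}")
```

Skip the tube rinse or add one more transfer and the answer moves by a few tenths of a percent, which is the commenter's point: without knowing the exact technique, a number like 98.5% recovery is hard to interpret.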

25. Donough on April 13, 2011 2:47 AM writes...

Let's not forget that bad science should also be considered, and that is a very gray area.

26. Rob on April 13, 2011 8:18 AM writes...

One of the maxims in protein crystallography is Paul Cigler's "this is good enough to be typical" ;-). Actually, as one of those damned academics, one of the hardest things to teach the students is to accept the data they get, which quite often doesn't support the brilliant hypothesis they are working on. Understanding control experiments, etc., is not natural to most people, nor is the degree of intellectual honesty that you need to have to be a good scientist. (The latter causes no end of problems when dealing with non-scientists.)

27. Rick on April 13, 2011 8:40 AM writes...

Rob, #26,
"Understanding control experiments etc, is not natural... nor is the degree of intellectual honesty that you need to have to be a good scientist. (the later causes no end of problems when dealing with non-scientists)."

AMEN to that!!! I'd say it's a big problem even when talking with scientists. Based on my limited experience teaching high school chemistry, these are also things that our schools either: 1, do not teach; 2, teach inadequately; 3, teach exactly the opposite to the way it is. It's the most important thing science classes should provide if you believe the purpose of pre-college education is supposed to produced citizens capable of making informed personal and civic choices.

28. pete on April 13, 2011 12:54 PM writes...

@4 Curious Wavefunction
WRT cherry picking of "the best result", I've certainly seen it and have been guilty of it myself over the years. And I believe the wider research community has tacitly admitted as much because - using the better molecular biology journals as an example - it's now far more expected that authors will report data using the appropriate statistical analysis. That's quite a change from when I was coming up.

29. newnickname on April 13, 2011 3:44 PM writes...

#22 MIMD: Sounds like you were bahramed AND nulltipled. (Bahramdipity and Nulltiple Scientific Discoveries, Science and Engineering Ethics, 2001, 7(1), 77–104. http://www.bmartin.cc/dissent/documents/Sommer.pdf Also relevant to #13 Virgil and others.)

30. Cartesian on April 14, 2011 3:32 AM writes...

A truth is also that scientists have to take care not to be abused, because it is true that what one deserves is relative to his/her virtues and talents (see Article 6 of the French Declaration of the Rights of Man of 1789).

31. Anonymous on April 16, 2011 5:01 PM writes...

Although I wasn't aware of any blatant cases of research misconduct among my grad school classmates, I do remember there was a strong culture of silence when it came to hushing up anything that might embarrass the department. I'm not one bit surprised that some of the other commenters in this thread ran into the exact same problem when they tried to report their concerns to journal editors and other authorities.

32. whattodo on June 29, 2011 1:09 PM writes...

I have a problem. I know modifying data or selecting good data is very widespread. Sometimes it is justified by the idea that experiments simply do not work sometimes, and so what appears to be a negative result is actually an indication of the experiment not working (low transfection rates, etc.). In some cases this may be justifiable, especially if the experiment is a difficult one. Overall, though, I suppose this constitutes the "accepted" gray area, along with other "minor" fudging of data.

To get back to my dilemma: I am an MSc grad student in a molecular bio lab. I began to notice something off about the results of a fellow lab graduate (PhD). Sometimes I noticed that blots on her images looked remarkably similar (if I zoomed in, the pixel pattern was identical). Also, there were cases where she used antibodies that simply should not work in the species she was using (e.g., animal-specific antibodies in a plant). I have concluded that while not all of her work is fraudulent, there are instances of outright data fabrication. Of course, it is hard to draw the line where the fraud ends. I am amazed at how bad she was at fabricating data - it is visible to the naked eye, and I am amazed that it has been published! At any rate, I can see why one might go over the edge and give in to manipulating and creating data. It is the publish-or-perish atmosphere of the lab that has everyone pretty terrified. I can see how it originates. I feel sometimes that I am teetering on the edge of fudging data more than my guts feel comfortable with, but I would never outright fabricate stuff - that takes serious cojones!!!

Anyhow, my point here is to ask what I should do. I am a lowly MSc student - I suppose it would nip my career in the bud. My supervisor would not publish anything from me (something is on the go, but stalled due to loose ends that I do not want to falsely tie up!!!). Oh well, so much for being a biologist.


Help!
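
On the duplicated-blot observation in the comment above: pixel-for-pixel identity between two supposedly independent images is straightforward to check programmatically. A minimal sketch, with hypothetical file names (real image-forensics tools do far more, handling rotation, rescaling, contrast changes, and compression artifacts):

```python
import numpy as np
from PIL import Image

def load_gray(path):
    """Load an image and return it as a grayscale numpy array."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.int16)

# Hypothetical crops of two blot panels suspected of being the same image
a = load_gray("blot_figure2_laneA.png")
b = load_gray("blot_figure5_laneC.png")

if a.shape == b.shape:
    diff = np.abs(a - b)
    print(f"same size; pixel-identical: {np.count_nonzero(diff) == 0}; "
          f"max per-pixel difference: {diff.max()}")
else:
    print(f"crops differ in size: {a.shape} vs {b.shape}")
```

Exact equality is a very low bar - genuinely independent exposures of different samples essentially never match pixel for pixel - so even a crude check like this catches the sloppier cases.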

33. Iskren Azmanov on August 2, 2012 8:35 PM writes...

Can I have the mail address - for longer and private contact? /Iskren Azmanov

