About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He has worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly or find him on Twitter: Dereklowe


In the Pipeline


September 9, 2013

Exposing Faked Scientific Papers


Posted by Derek

Chemistry World has a good article on the problem of shaky data in journal articles, and the intersecting problem of what to do about it in the chemistry blogging world. Paul Bracher's ChemBark is, naturally, a big focus of the piece, since he's been highlighting some particularly egregious examples in the last couple of months (which I've linked to from here).

The phrase "witch hunt" has been thrown around by some observers, but I don't think that's fair or appropriate. In great contrast to the number of witches around (and their effectiveness), faked information in published scientific articles is very much a real thing, and can have real consequences. Time spent looking for it and exposing it is not time wasted, not when it's at its current levels. But who should be doing the looking and the exposing?

The standard answer is "Why, journal editors and reviewers, who shouldn't be letting this stuff past in the first place". Quite true. But in many cases, they are letting it past, so what should be done once it's published? A quiet, gentlemanly note to the editorial staff? Or a big blazing row in a public forum, such as a widely-read blog? Even though I don't start many of these myself, I come down more on the side of the latter. There are problems with that stance, of course - you have to be pretty sure that there's something wrong before you go making a big deal out of it, for one thing. Hurting someone else's reputation for no reason would be a bad thing, as would damaging your own credibility by making baseless accusations.

But in some of these recent cases, there's been little doubt about the problem. Take that nanorod paper: the most common reaction when I showed it to people was "Oh, come on." (And the most common reaction when I showed the famous "just make up an elemental" paper to people was "Oh, (expletive)", with several common words filling in appropriately.) So if there's clearly trouble with a published paper, why is it such a good thing to make a big public spectacle out of it?

Deterrence. I really think that there will be less of this if people think that there's a reasonable chance that fake science will be exposed widely and embarrassingly. Covering up half your NMR spectrum with a box of digital white-out is fraud, and people committing fraud have given up their opportunity to be treated with respect. And don't forget, the whole deterrence argument applies to editors and reviewers, too. I can guarantee that many chemists looked at these recent examples, wondered if they would have let these papers through the review process - through carelessness or lack of time - and resolved to do better the next time. I certainly did.

That said, I do not intend to make this blog the full-time scourge of the chemical literature by patrolling the literature myself. If I see something suspicious, I'll speak up about it, and if other chemistry blogs (or readers) pick up on something, I'm very glad to hear about it or link to it. But finding these examples is a perfect example of something that I think is best left to the crowd. The person best equipped to discover a fraudulent paper is the person who is interested in its subject and would like to build on its results - in other words, the person who would be most harmed by it. And if someone fakes a paper, but no one ever reads it or refers to it, well, that's the author's own reward, and I hope that they enjoy it.

Comments (24) + TrackBacks (0) | Category: The Dark Side | The Scientific Literature


1. DJ on September 9, 2013 8:28 AM writes...

Our lab recently tried to repeat a procedure from a TL methodology paper on the amination of amines to make protected hydrazines. We noticed that the entire paper is wrong, since the authors did not analyze the product NMRs carefully and incorrectly assumed that they were forming the desired hydrazines rather than the rearranged alkoxyamines. It's a systematic error throughout the paper that renders the entire premise and methodology essentially useless. So our quandary is this: (1) this is clearly not a case of fraud, just bad science; (2) our discovery is not all that interesting on its own, since the unusual reaction pathway has little practical value; and (3) the original report is from a TL paper that probably not many people will read in the first place. Still, I am tempted to do SOMETHING to correct the public record, but what? Maybe contact the original authors (somewhere in India)? Try to write a 'rebuttal style' paper in the same journal? Any suggestions?

Permalink to Comment

2. DJ on September 9, 2013 8:36 AM writes...

...almost forgot: the paper was published only 6 months ago and has not been cited by any other papers as of yet (unsurprisingly). I'm thinking maybe an email to the editor would be an option as well. Of course, "give it a rest - it's not worth it" is also running through my head.

Permalink to Comment

3. Project Osprey on September 9, 2013 8:56 AM writes...

The comments section in that article directed me towards PubPeer, which is a site I'd never previously heard of.
I like the general idea of it, but I'll have to see how well it actually works. I do have a paper in mind to try it out with.

Permalink to Comment

4. NMH on September 9, 2013 9:07 AM writes...

DJ -- a brief letter to the editor (one that will be published), if the journal allows it. Many journals do.

Permalink to Comment

5. John Wayne on September 9, 2013 9:12 AM writes...

Derek, I'm not sure what your policy is on obvious advertisements that aren't related to the topic, but I would strongly consider deleting current post 2 and flagging their IP address.

On topic to #1: Tough situation. My suggestion is to send the PI a very politely worded email informing them of the situation with an example of where they went wrong. This would give them an opportunity to do the right thing. If you happen to be personal friends with the editor at TL, I'd give them a heads up; otherwise, let the cards fall where they may.

Permalink to Comment

6. Anonymous on September 9, 2013 9:29 AM writes...

Perhaps we can ask Julian Assange to set up "Wikifraud" with a science section?

Permalink to Comment

7. Derek Lowe on September 9, 2013 9:43 AM writes...

John Wayne - it's gone. That's my policy on unsolicited advertisements! And good riddance.

Permalink to Comment

8. Chemjobber on September 9, 2013 10:24 AM writes...

@1: If the editors won't help, there's always PubPeer.

Permalink to Comment

9. opsomath on September 9, 2013 10:27 AM writes...

#1/DJ, I'd be pretty tempted to reproduce my own results, with careful characterization of the product mixtures, and publish it in the same journal showing that the original paper was wrong.

Permalink to Comment

10. anon the III on September 9, 2013 10:36 AM writes...

Why is covering up ether peaks in an NMR spectrum considered fraud, and yet selective reporting of substrates or yields is not? In both cases, you are deliberately hiding information, and I would argue that only in the latter case are you actually covering up anything relevant to your science.

Permalink to Comment

11. student on September 9, 2013 10:43 AM writes...

Anon III, the reason it's not as blatant a case of fraud is that there is no standard in the field. Everyone can agree that a manipulated spectrum is fraud. But journals do not have an explicit yield-reporting policy. If a journal had an actual policy stating that yields must be the average of three runs (with each yield reported), and not just the highest yield ever achieved, then it actually would be fraud to report only the highest - albeit a case of fraud that is harder to prove.

Permalink to Comment

12. RM on September 9, 2013 11:04 AM writes...

anon the III@10 - I think the main issue with covering up the spectra is not that they're hiding the data, but it's showing data that's not true.

Their spectrum shows the region from 1.0 to 1.5 and 3.25 to 3.5 with nothing in it. But there apparently *was* something in that region. If you show a flat baseline, you imply that there was a flat baseline.

This is sort of the case with protein gels on the bio side of things. Depending on what you're attempting to show (e.g. antibody reactivity), it's perfectly alright to cut off the rest of the gel and show just the bands of interest. You can even make a composite figure with multiple of these excised bands layered on top of each other, even for bands from multiple gels. What you can't do is merge these multiple excised bands to imply that they were run on the same gel, or paste the band into a blank background, or dodge/burn/smudge/erase other bands in a full gel that you do show.

Omitting things from papers is typically fine, as long as you're not implying anything by the omission. Altering or manipulating things you *do* show, so that they appear to be something they're not, is fraudulent. It's the difference between not reporting a yield and changing the numbers on a yield that you do report.

Permalink to Comment

13. misterscampers on September 9, 2013 11:33 AM writes...

Some of my recent publishing experience suggests that reviewers are looking at SI more scrupulously than in the past. For example, comments on my recent submission to an ACS journal included flagging a couple of cases where mass specs that should have been reported as [M + H]+ were in fact reported as [M + NH4]+ - not something a casual glance is likely to catch. I hope that this reflects a raising of standards and not just one persnickety reviewer.

Permalink to Comment

14. mord on September 9, 2013 12:59 PM writes...


Much is made of the added white boxes, but is there also an added flat baseline? There ought to be, if the boxes cover a region where the baseline shouldn't be flat - but unless I missed it, people have only made much of the white boxes. Maybe an added flat baseline is more difficult to see.

Permalink to Comment
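An added flat baseline is actually one of the easier manipulations to test for numerically: a genuine instrument baseline carries noise, while a drawn-in segment is implausibly quiet. Here is a toy sketch of that idea using synthetic data (not any actual published spectrum; the window size and threshold are illustrative assumptions, not a forensic standard):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D "spectrum" trace: baseline noise plus one genuine peak.
x = np.linspace(0, 10, 2000)
trace = rng.normal(0, 0.01, x.size) + np.exp(-((x - 7) ** 2) / 0.01)

# Simulate the alleged manipulation: replace a region with a perfectly
# flat pasted-in baseline (what a white box plus a drawn line would leave).
trace[200:400] = 0.0

# Real baselines are noisy everywhere; flag windows whose local standard
# deviation is implausibly close to zero.
win = 50
flagged = [i for i in range(0, trace.size - win, win)
           if trace[i:i + win].std() < 1e-6]
print(flagged)  # start indices of the suspiciously quiet windows
```

On this synthetic trace, only the windows inside the pasted region come back flagged; every untouched window retains noise well above the threshold.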

15. Philip (flip) Kromer on September 9, 2013 2:54 PM writes...

There's a good precedent for this in how computer security issues are disclosed. By a self-imposed convention, "white-hat" security researchers inform the source of the vulnerability first, and after a stated delay make their discovery public. This gives the software company time to make a patch, but still holds their feet to the fire. Before this convention, software companies would sit on the information or downplay it -- as we see journals do. Immediate disclosure is similarly unwise (risk of error in the journal case, exposure in the hacker case) and unfair to the apprised, who deserve a chance to put things in order.

Permalink to Comment

16. Sisyphus on September 9, 2013 4:44 PM writes...

It is amazing to me that Journals still accept low resolution scanned images (complete with the random residue of a pickle slice from lunch). In this age, it seems more reasonable to just upload the FIDs for NMR spectra. More difficult (yet not impossible) to fake or manipulate an FID.

Permalink to Comment
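The point about FIDs is that the raw time-domain data determines the whole spectrum: anything masked in a processed image is still recoverable by re-transforming the FID. A minimal sketch with a synthetic two-resonance FID (NumPy only; the frequencies, amplitudes, and decay rate are made-up illustration values, not real acquisition parameters):

```python
import numpy as np

# Synthetic FID: two decaying complex exponentials, mimicking the raw
# time-domain signal an NMR spectrometer actually records.
n = 4096                       # number of acquired points
t = np.arange(n) / 4096.0      # time axis (arbitrary units)
fid = (np.exp(2j * np.pi * 400 * t) +            # resonance at "400 Hz"
       0.5 * np.exp(2j * np.pi * 900 * t)) * np.exp(-2 * t)  # decay -> linewidth

# Fourier transform turns the FID into the familiar frequency spectrum.
spectrum = np.abs(np.fft.fft(fid))
freqs = np.fft.fftfreq(n, d=t[1] - t[0])

# Both resonances fall straight out of the raw data; a white box pasted
# over a processed image cannot remove them from the FID itself.
peak_freqs = sorted(freqs[np.argsort(spectrum)[-2:]])
print([round(f) for f in peak_freqs])
```

Re-transforming recovers both synthetic resonances (400 and 900) from the time-domain data alone, which is why depositing FIDs would make the white-out trick pointless.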

17. Saul-Goodman on September 9, 2013 5:30 PM writes...

I'm interested to know whether there is a common way to prepare spectra for publication that most people follow.

I found that when doing this for the first time, my colleagues all had their own preferences, and no one could point me to a particularly satisfying method.

Permalink to Comment

18. z on September 9, 2013 8:27 PM writes...

My concern is primarily with collateral damage to co-authors who did not do anything wrong.

In the "just make it up" case, one person (presumably the PI) asked the first author to commit fraud. She did not commit fraud (so far as we know). But some of the comments about that story took a guilty until proven innocent approach and dug into everything she had written, highlighting typos and other potential discrepancies (which might have reasonable explanations). And who among us, even the most detail-oriented person, hasn't had some honest errors creep into things we've published?

So what bothers me is when this kind of coverage paints these potentially innocent parties with the same brushstrokes used for people like Bengu Sezen. When people really deserve it, then, yes, drag them through the mud, but otherwise I would call for these commenters to be careful, indeed, to be more scientific--objectively evaluate evidence without emotionally jumping to conclusions. Then and only then apply blame to the parties who actually deserve the blame.

Permalink to Comment

19. Organometallica on September 9, 2013 10:08 PM writes...

Why can't we have a unified spectral database? For crystallography, we have a generally accessible repository for raw cif files and it even checks the cifs for common problems BEFORE publication! Not only does this help the publication quality, but it means that I can search for and download raw data to inspect and manipulate on my own. Not only would having this for NMR help reduce fraud, but it would be an incredible research tool!

Who wants to help me build it?

Permalink to Comment


21. Teukka on September 9, 2013 10:27 PM writes...

Just my $.02 on this issue in a general sense. I think there has been an increase lately (I am not in a position to quantify it, as I am not that heavy a reader of scientific papers) in papers that don't seem to be right.
I'm not saying it's all fraud - I've seen everything from opinion pieces masquerading as scientific reports to outright fraud to the (in my experience common) sloppy job.
Nevertheless, it irritates me immensely when I find them, because I know that there are people who depend on things being at least in the ballpark of right, their time and money being limited, and because other research may hinge on it as well.
This is true of any field, not just chemistry (I'm into computers and electronics myself). Not to mention the safety aspect -- how long before an explosive (pardon the pun) error slips by? In my field the worst thing that can happen is the 'magic smoke' escaping from a circuit, but in chemistry, results can get very interesting given the wrong elements (N, for example).
For my part, I would try (in order): a letter to those who put the paper together in the first place, then the journal that published it, then PubPeer as pointed out above; all else failing, and the error being verified and grave, opening up a research project in the methodology of raising a stink.

FWIW, in electronics it is customary to speedily post errata and corrections to designs, sometimes with a catchy header and pic (say, blown out circuit board with caption "So that didn't work as advertised..."). And in computing... Well... Flame wars have started for very small things...

Permalink to Comment

22. Anonymous on September 10, 2013 2:51 AM writes...

I am personally aware of an instance of faked data and plagiarism which a reviewer picked up on. The issue was raised with the journal editor, and a formal complaint was made to the home institution(s) of the author(s). It turns out this was not the first paper to be faked by the(se) author(s), but it was the first to be flagged. What has eventuated is a lengthy court proceeding.

Having seen the amount of energy and time being diverted from productive science (and the attempts to discredit the whistle-blower), I can see why many academic reviewers wouldn't want to stick their heads up over dodgy looking papers - due process might actually be considered a disincentive.

It would be much easier to find other ways to get the papers rejected and move on (in this case, they weren't exactly quality studies to begin with). Or if that doesn't work, to only trust papers from certain known sources and dismiss the more dubious looking claims. There is an incentive problem here, but I'm not sure how this could be addressed.

That said, I do hope the recent trend of exposing misconduct continues and agree with much of Derek's views on the role of bloggers and informed readers. Hopefully increased scrutiny will lead to better science all-round.

Permalink to Comment

23. Anonymous on September 10, 2013 5:37 AM writes...

Previously I set up a biotech company to develop drugs for AD, and we used 2 competitor compounds (Apan of Praecis and Alzhemed of Neurochem) in Phase I and Phase III clinical trials as positive controls for our in vitro assays. But neither we, nor any other group, could see any activity of either compound in any assays, so we ended up using them as negative controls! Not surprisingly, both of these compounds eventually failed, but only after spending/wasting hundreds of millions of dollars.

Permalink to Comment

24. MikeB on September 12, 2013 6:14 AM writes...

In the Mnova software (used in the Cossy paper) there are a number of ways to visually manipulate data: this is for the users' convenience and not intended as a vehicle to make published data look better than they really are.

Referees are the "police", and I firmly believe their job would be easier if the raw data were routinely made available in the SI. This should be a standard editorial policy for ACS, RSC, Elsevier, etc. In the interim, I would encourage synthetic chemists to also make these data available in resources such as ChemSpider synthetic.

Permalink to Comment

