Corante

About this Author
College chemistry, 1983

Derek Lowe The 2002 Model

After 10 years of blogging. . .

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe

Chemistry and Drug Data: Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Monthly Archives

December 24, 2009

Holiday Time Off

Posted by Derek

Well, with the holidays and all, I'll be taking blog-time off until Monday. The pace of scientific discovery has slowed noticeably around here in the last few days (and from the traffic stats, I can tell it's slowing in lots of other places, too!). As usual, I've made notes about what I'm up to in the lab, so I can pick up the threads in January. I also have an interesting manuscript that I'm submitting for publication, which I hope to talk about here soon. I'd like to wish all my readers who are celebrating a Merry Christmas!

Comments (18) + TrackBacks (0) | Category: Blog Housekeeping

December 23, 2009

An Alzheimer's Compound Runs Into Big Trouble

Posted by Derek

Another interesting approach to Alzheimer's therapy has just taken a severe jolt in the clinic. Elan and Transition Therapeutics were investigating ELND005, also known as AZD-103, which was targeted at breaking down amyloid fibrils and allowing the protein to be cleared from the brain.

Unfortunately, the two highest-dose patient groups experienced a much greater number of severe events - including nine deaths, which is about as severe as things get - and those doses have been dropped from the study. I'm actually rather surprised that the trial is going on at all, but the safety data for the lowest dose (250mg twice daily) appear to justify continuing. The higher doses were 1g and 2g b.i.d., and the fact that they were going up that high makes me think that the chances of success at the lowest dose may not be very good.

So what is this drug? Oddly enough, it's one of the inositols, the scyllo isomer. Several animal studies had shown improvements with this compound, and there were promising results for Parkinson's as well. At the same time, scyllo-inositol has been implicated as a marker of CNS pathology when it's found naturally, so it's clearly hard to say just what's going on. As it always is with the brain. . .

Comments (18) + TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials | The Central Nervous System | Toxicology

December 22, 2009

GE Healthcare's Idiotic Libel Suit

Posted by Derek

Courtesy of Pharmalot (and my mail!), I note this alarming story from London. GE Healthcare makes a medical NMR contrast agent, a gadolinium complex marketed under the name of Omniscan. (They picked it up when they bought Amersham a few years ago.) Henrik Thomsen, a Danish physician, had noted what may be an association between its use and a serious kidney condition, nephrogenic systemic fibrosis, and he gave a short presentation on his findings two years ago at a conference in Oxford.

For which GE is suing him. For libel. Update: the documents of the case can be found here. They claim that his conference presentation was defamatory, and continue to insist on damages even though regulatory authorities in both the UK and in the rest of Europe have reviewed the evidence and issued warnings about Omniscan's use in patients with kidney trouble. Over here in the US, the FDA had issued general advisories about contrast agents, but an advisory panel recently recommended that Omniscan (and other chemically related gadolinium complexes) be singled out for special warnings. From what I can see, Thomsen should win his case - I hope he does, and I hope that he gets compensatory damages from GE for wasting his time when he could have been helping patients.

And this isn't the only case going on there right now. Author Simon Singh is being sued by the British Chiropractic Association for describing, in a published article, chiropractic claims of being able to treat things like asthma as "bogus". Good for him! But he's still in court, and the end is not in sight.

This whole business is partly a function of the way that GE and the chiropractors have chosen to conduct business, but largely one of England's libel laws. The way things are set up over there, the person who brings suit starts out with a decided edge, and over the years plenty of people have taken advantage of the tilted field. There's yet another movement underway to change the laws, but I can recall others that apparently have come to little. Let's hope this one succeeds, because I honestly can't think of a worse venue to settle a scientific dispute than a libel suit (especially one being tried in London).

So, General Electric: is it now your company policy to sue people over scientific presentations that you don't like? Anyone care to go on record with that one?

Comments (40) + TrackBacks (0) | Category: Analytical Chemistry | Current Events | The Dark Side | Toxicology

December 21, 2009

Faking X-Ray Structures. . .For Fun? Or Profit? Or What?

Posted by Derek

Well, this isn't good: an ex-researcher at the University of Alabama-Birmingham has been accused of faking several X-ray structures of useful proteins - dengue virus protease, Taq polymerase, complement proteins from immunology, etc. There have been questions surrounding H. M. Krishna Murthy's work for at least a couple of years now (here's the reply to that one). The university, after an investigation, has decided that 11 of the published structures seem to have been falsified in some way and has asked that the papers be retracted and the structures removed from the Protein Data Bank.

The first controversy with these structures was, I think, over the one deposited in the PDB as 2hr0. Here's a good roundup of what's wrong with it, for those of you into X-ray crystallography. And as that post makes clear, there were also signs that some other structures from this source had been suspiciously cleaned up a bit.

So how do you go about faking an X-ray, anyway? Here's some detail - basically, you could take something that's structurally related (from a protein standpoint) but crystallographically distinct, and use that as a starting point. As that post says, add some water and some noise, and "bingo". The official statement from UAB's investigation gives you the likely recipes for all eleven faked-up structures.

As for Dr. Murthy, he left UAB earlier this year, according to this article, and the university says that they have no current contact information for him. If these accusations are true, he's spent nearly ten years generating spurious analytical data. What, then, do you do with that skill set?

Comments (30) + TrackBacks (0) | Category: Analytical Chemistry | The Dark Side

December 18, 2009

Day Off!

Posted by Derek

I'll be out doing non-scientific things all day, so the march of progress is going to have to lurch along without me for a bit. I'll see everyone on Monday!

And if you're in need of something to waste a bit of time on, this site will make your sense of order shudder: There, I Fixed It. It's a favorite around the Lowe household; I'm trying to turn my kids into the sorts of people who will never do any of those things.

Comments (5) + TrackBacks (0) | Category: Blog Housekeeping

December 17, 2009

Why Don't Chemists Communicate? (Or Do We?)

Posted by Derek

There's a commentary in the December issue of Nature Chemistry asking why our field has been comparatively slow to adopt web-based technologies like arXiv and GenBank:

"New web-based models of scholarly communication have made a significant impact in some scientific disciplines, but chemistry is not one of them. . .why do similar initiatives in chemistry fail to gain critical mass and widespread usage?"

The article considers several possibilities - among others, that (a) other fields aren't actually quite as techno-webby as we think they are, or (b) there might be a mismatch between chemistry as a discipline and the current tools, one that isn't found in some other fields of science, or (c) that there could be just a few defined issues that need to be addressed, then things will take off, or (d) that chemists already have the communication tools that they need, anyway.

The authors point out that technical hurdles can probably be ruled out as an explanation, and in many cases they can also rule out "because no one's ever tried". Elsevier, for example, tried to get an arXiv-type preprint server going a few years ago, but that bombed pretty thoroughly (not least, I think, because people were naturally a bit suspicious of such an effort being launched under Elsevier's banner, and because the ACS journals refused to take manuscripts that had appeared there). Nature has been trying something similar in the last couple of years with Nature Precedings, but I'm not sure if it's taking off or not. I've never really used it myself, if that's a data point worth mentioning.

One key point that the authors make is that totally new means of communication don't just pop into existence in a scientific discipline. The ones that catch on tend to build on things that the scientists are already doing. I think that physicists, for example, were already more used to sharing preprints of articles, and that the arXiv server just helped them do that more easily. Chemists, on the other hand, Just Don't Do That, so announcing to them that Now They Can! isn't enough to bring in participants.

On the same chemistry-is-different front, the commentary also notes that our field has always had an emphasis on making stuff, although they don't put it quite that way. The computer is not usually the machine that produces our results; it's just the means by which we keep track of them. And we don't generate the piles of (sometimes) reusable data that physicists do, so much as we generate new substances and new ways of making and using them. The data are there to show that we did, in fact, make what we said we made. Those piles of data tend to hold their value much longer than in other fields, too - after all, a compound is a compound, and its NMR spectrum doesn't change. If you want to know how some class of compounds behaves, a paper from fifty years ago (or more) can be a perfectly good place to look.

Also in contrast to the physics community, chemistry is broken up into many smaller units. You'll never see a chemistry paper with as many co-authors as a high-energy physics paper, because we don't have to run our experiments on the One Big Machine In the Whole World. It may be that parts of the physics world have basically been forced to collaborate more widely, because that's the only way to get anything done. We also have a wide range of sub-disciplines, what with physics on one side of us and biology on another, and these all have their own idiosyncrasies. (And, of course, many of us work in areas where we basically can't share some information until we're good and ready to).

One thing that the whole article doesn't quite address, though, is: what would these wonderful new communication modes be, actually? And how would they improve my research life? Electronic literature searching certainly has, as has the availability of journals online. Electronic notebooks definitely have. What else would? I'm sure that there must be a few things, but I find that some of the Web 2.0 info-heaven visions that people outside the field talk about don't do much to excite me. It's like seeing some scientific abstract online, and then noting the little row of social-media icons below it, inviting me to submit the thing to Digg, Reddit, or what have you. Or to go visit the journal's page on Facebook, of all things. Why I'd do that is something I haven't quite figured out yet.

But hey, I'm not as much of a Luddite as that makes me sound. I also note this passage from the article (emphasis mine):

An increasing number of scientists have adopted blogging as a means of informal communication. Typically, the writing style of blogs is conversational, and humorous content gets mixed with posts of a more serious tone. Some blogs are dedicated to educating lay audiences, others aim at an academic discussion, and many are like personal diaries. At this point in time, many science bloggers are assumed to be less than 30 years old, and are primarily journalists, teachers, graduate students, or young researchers. Hardly any established scientists maintain a blog - after all, blogging regularly is very time-consuming. The question remains open whether these will remain fringe phenomena or become part of the mainstream communication in science.

Comments (44) + TrackBacks (0) | Category: The Scientific Literature

December 16, 2009

Pass the Popcorn

Posted by Derek

Year-end rushing around has left me little time for blogging last night or this morning. But a discussion with a colleague the other day leads me to ask a quick question of the readership: has there ever, in your view, been a realistic depiction of a research chemist in some sort of popular entertainment (TV, movie, reasonably-selling novel)? I'm hard-pressed to think of many examples myself, partly because what we do (a) isn't all that easy to explain in a dramatic setting, and (b) tends to operate with non-dramatic pacing, to put it mildly. But I'd be glad to hear some suggestions. . .

Comments (66) + TrackBacks (0) | Category: General Scientific News

December 15, 2009

Manfred Christl Rides Again (Bonus Idiotic Lab Accident, Too)

Posted by Derek

Readers may remember the incident a couple of years ago where a paper was published claiming the synthesis of some very odd-looking 12-membered ring compounds. Prof. Manfred Christl of the University of Würzburg noticed something odd about this reaction, though, namely that it had already been run over a hundred years ago and was known to give a completely different product. (As I pointed out here, though, you didn't need to unearth the ancient literature to know this; ten minutes of looking through the modern stuff would have done the trick, too).

Well, Christl's back with another takedown of some improperly assigned weirdo 12-membered rings. This time, it's Cheryl Stevenson of Illinois State who gets the treatment, with this paper from last year that claims several interesting ring structures from 1,5-hexadiyne and base. Christl had trouble believing the mechanism, and on closer inspection had trouble believing the NMR assignments. Then, on even closer inspection, he reassigned the product as the result of a simple isomerization of one of the triple bonds, and found that this exact reaction (and product) had first been reported in 1961 (and several times afterwards). Not good.

As it turns out, I almost certainly made some of the compound myself, by mistake, back in mid-1983. That was the summer before I started my first year of grad school, and I was doing work in Ned Porter's lab at Duke. One of the starting materials I needed was. . .1,5-hexadiyne, which you couldn't buy. So I made it, in real grad-school fashion. I homocoupled allyl Grignard to get the 1,5-hexadiene (which even if you could buy back then, we didn't). Then I reacted that with bromine and made the only six-carbon molecule with four bromines on it that I ever hope to make. Reacting that with freshly prepared sodium amide in ammonia gave the smelly di-yne, in crappy yield after distillation. I can still see it: me heating up a column full of glass beads, then turning to Steve, the postdoc in the next hood, and making a bad joke about Herman Hesse while David Bowie's "Modern Love" played on the radio. . .ah, those days, they will not come again.

At any rate, I went on to react the compound with bases under different conditions, trying (in vain) to alkylate both of those terminal alkynes, and thus passed the last of my summer, in exactly the way my two previous summers of research had passed: unsuccessfully. This latest paper, though, makes me think that I was probably turning my starting material instead into exactly the diene that Christl is talking about. I should have hit the library harder myself, although (to be fair) there are references that tell you that you can do that alkylation, and digging through the literature was a good deal more time-consuming back then than it is now.

That lab accident, you say? Well, that happened when I was making a big batch of sodium amide. You start that prep off like a Birch reduction - condense a bunch of liquid ammonia into a flask, and start chucking sodium metal into it. The big difference is that you add a bit of ferric chloride to the mix, which kicks things over at the end. After you've dissolved your sodium, to give you the bronzy purple-blue of solvated electrons, you take the flask out of the cold bath to let the ammonia reflux. At that point, the whole thing suddenly clears, dramatically revealing grey lumps of sodium amide rolling around on the bottom of a pond of plain ol' ammonia, without a solvated electron in sight. (I have, in years since, seen a couple of people refer to the blue stage of the reaction as sodium amide, which it ain't, and I can get quite cranky and pedantic about it).

One afternoon I was whipping up a batch of this stuff, when something started going on inside the flask. I don't recall what made me take a look at the ammonia solution, but since there was so much bronze gorp on the side of the one-liter three-neck, I had to lean in and look down near the central joint. Whereupon my hair wound itself immediately around the greased shaft of the overhead stirrer, pulling my head in toward the whole setup and jamming my nose into the side of the flask. I fumbled for the switch of the stir motor, feeling like George Jetson as I shouted for someone to give me a hand, and watched with interest as the dry ice bath bubbled along an inch away from my face.

Steve the postdoc came to my aid, shutting off the grinding motor which was doggedly trying to wind me headfirst around the stirrer shaft. We unreeled my hair from the whole contraption, with me cursing foully and Steve merrily making jokes of the "Hair today, gone tomorrow" kind, with side comments about me getting too wrapped up in my work. Those days, as I said, will not come again.

Comments (35) + TrackBacks (0) | Category: How Not to Do It | The Scientific Literature

December 14, 2009

The Cost of New Drugs

Posted by Derek

I'm continuing my look at Bernard Munos' paper on the drug industry, which definitely repays further study (previous posts here, here, and here). Now for some talk about money - specifically, how much of it you'll need to find a new drug. The Munos paper has some interesting figures on this question, and the most striking is that the cost of getting a drug all the way to the market has been increasing at an annual rate of 13.4% since the 1950s. That's a notoriously tough figure to pin down, but it is remarkable that the various best estimates of the cost make an almost perfectly linear log plot over the years. We may usefully contrast that with the figures from PhRMA that indicate that large-company R&D spending has been growing at just over 12% per year since 1970. Looking at things from that standpoint, we've apparently gotten somewhat more efficient at what we do, since NME output has been pretty much linear over that time.
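
Just to put that 13.4% figure in perspective (this is my own back-of-the-envelope arithmetic, not a calculation from the Munos paper), compounding it over the roughly six decades he covers gives you an absurd-sounding multiplier, which is exactly how the per-drug cost ends up in the billions:

    # Rough arithmetic sketch (mine, not Munos'): compound a 13.4% annual
    # increase in cost-per-drug over various time spans.
    growth = 0.134
    for years in (10, 20, 30, 40, 50, 59):
        multiplier = (1 + growth) ** years
        print(f"{years:2d} years: cost up roughly {multiplier:,.0f}-fold")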

But that linear rate of production allows Munos to take a crack at a $/NME figure for each company on his list, and he finds that less than one-third of the industry has a cost per NME of under $1 billion, and for some companies it's substantially more. Of course, not every NME is created equal, but you'd have to think that there's a large potential for mismatches in development cost versus revenues when you're up at these levels. Munos also calculates that the chance of a new drug achieving blockbuster status is about 20%, and that these odds have also remained steady over the years - this despite the way that many companies try to skew their drug portfolios toward drugs that could sell at this level.

How much of this cost is due to regulatory burden? A lot, but for all the complaining that we in the industry do about the FDA, they may, in the long run, be doing us a favor. Citing these three studies, Munos says that:

. . .countries with a more demanding regulatory apparatus, such as the United States and the UK, have fostered a more innovative and competitive pharmaceutical industry. This is because exacting regulatory requirements force companies to be more selective in the compounds that they aim to bring to market. Conversely, countries with more permissive systems tend to produce drugs that may be successful in their home market, but are generally not sufficiently innovative to gain widespread approval and market acceptance elsewhere. This is consistent with studies indicating that, by making research more risky, stringent regulatory requirements actually stimulate R&D investment and promote the emergence of an industry that is research intensive, innovative, dominated by few companies and profitable.

But this still leaves us with a number of important variables that we don't seem to be able to push much further - success rates in the clinic and in the marketplace, money spent per new drug, and so on. And that brings up the last part of the paper, which we'll go into next time: what is to be done about all this?

Comments (17) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

December 11, 2009

Another Take on the Munos Paper

Posted by Derek

Eric Milgram over at PharmaConduct has an excellent post up on the same paper I've been discussing this morning. As another guy who's been around the block a few times in this industry, he's struck by many of the same points I am (to the point of also linking to Wikipedia's page on Poisson distributions!)

And he has some interesting data of his own to present, too - well worth checking out.

Comments (10) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

Munos On Big Companies and Small Ones

Posted by Derek

So that roughly linear production of new drugs by Pfizer, as shown in yesterday's chart, is not an anomaly. As the Bernard Munos article I've been talking about says:

Surprisingly, nothing that companies have done in the past 60 years has affected their rates of new-drug production: whether large or small, focused on small molecules or biologics, operating in the twenty-first century or in the 1950s, companies have produced NMEs at steady rates, usually well below one per year. This characteristic raises questions about the sustainability of the industry's R&D model, as costs per NME have soared into billions of dollars.

What he's found, actually, is that NME generation at drug companies seems to follow a Poisson distribution, which makes sense. This behavior is found for systems (like nuclear decay in a radioactive sample) where there are a large number of possible events, but where individual ones are rare (and not dependent on the others). A Poisson process also implies that there's some sort of underlying average rate, and that the process is stochastic - that is, not deterministic, but rather with a lot of underlying randomness. And that fits drug development pretty damned well, in my experience.
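
To see what that kind of stochastic output looks like in practice, here's a minimal simulation sketch (the 0.5-per-year rate below is a made-up illustrative number, not a figure from the paper): even with a constant underlying rate, the year-to-year totals come out lumpy and streaky, which is just how real pipelines feel.

    # Toy Poisson sketch: a company with a fixed underlying productivity
    # still shows feast-and-famine yearly NME counts. The rate is hypothetical.
    import random
    random.seed(0)

    RATE = 0.5  # assumed average NMEs per year (illustrative only)

    def poisson(lam):
        # Knuth's method for drawing a Poisson-distributed count
        limit, k, p = 2.718281828459045 ** -lam, 0, 1.0
        while p > limit:
            k += 1
            p *= random.random()
        return k - 1

    yearly = [poisson(RATE) for _ in range(60)]
    print(yearly)
    print("total NMEs:", sum(yearly), "average per year:", sum(yearly) / 60)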

But that's just the sort of thing, as I've pointed out, that the business-trained side of the industry doesn't necessarily want to hear about. Modern management techniques are supposed to quantify and tame all that risky stuff, and give you a clear, rational path forward. Yeah, boy. The underlying business model of the drug industry, though, as with any fundamentally research-based industry, is much more like writing screenplays on spec or prospecting for gold. You can increase your chances of success, mostly by avoiding things that have been shown to actively decrease them, and you have to continually keep an eye out for new information that might help you out. But you most definitely need all the help you can get.

As that Pfizer chart helps make clear, Munos is particularly not a fan of the merge-your-way-to-success idea:

Another surprising finding is that companies that do essentially the same thing can have rates of NME output that differ widely. This suggests there are substantial differences in the ability of different companies to foster innovation. In this respect, the fact that the companies that have relied heavily on M&A tend to lag behind those that have not suggests that M&A are not an effective way to promote an innovation culture or remedy a deficit of innovation.

In fact, since the industry as a whole isn't producing noticeably more in the way of new drugs, he suggests that one possibility is that nothing we've done over the last 50 years has helped much. There's another explanation, though, that I'd like to throw out, and whether you think it's a more cheerful one is up to you: perhaps the rate of drug discovery would actually have declined otherwise, and we've managed to keep it steady? I can argue this one semi-plausibly both ways: you could say, very believably, that the progress in finding and understanding disease targets and mechanisms has been an underlying driver that should have kept drug discovery moving along. On the other hand, our understanding of toxicology and our increased emphasis on drug safety have kept a lot of things from coming to the market that certainly would have been approved thirty years ago. Is it just that these two tendencies have fought each other to a draw, leaving us with the straight lines Munos is seeing?

Another important point the paper brings up is that the output of new drugs correlates with the number of companies, better than with pretty much anything else. This fits my own opinions well (therefore I think highly of it): I've long held that the pharmaceutical business benefits from as many different approaches to problems as can be brought to bear. Since we most certainly haven't optimized our research and development processes, there are a lot of different ways to do things, and a lot of different ideas that might work. Twenty different competing companies are much more likely to explore this space than one company that's twenty times the size. Much of my loathing for the bigger-bigger-bigger business model comes from this conviction.

In fact, the Munos paper notes that the share of NMEs from smaller companies has been growing, partly because the ratio of big companies to smaller ones has changed (what with all the mergers on the big end and all the startups on the small end). He advances several other possible reasons for this:

It is too early to tell whether the trends of the past 10 years are artefacts or evidence of a more fundamental transformation of the drug innovation dynamics that have prevailed since 1950. Hypotheses to explain these trends, which could be tested in the future, include: first, that the NME output of small companies has increased as they have become more enmeshed in innovation networks; second, that large companies are making more detailed investigations into fundamental science, which stretch research and regulatory timelines; and third, that the heightened safety concerns of regulators affect large and small companies differently, perhaps because a substantial number of small firms are developing orphan drugs and/or drugs that are likely to gain priority review from the FDA owing to unmet medical needs.

He makes the point that each individual small company has a lower chance of delivering a drug, but as a group, they do a better job for the money than the equivalent large ones. In other words, economies of scale really don't seem to apply to the R&D part of the industry very well, despite what you might hear from people engaged in buying out other research organizations.

In other posts, I'll look at his detailed analysis of what mergers do, his take on the (escalating) costs of research, and other topics. This paper manages to hit a great number of topics that I cover here; I highly recommend it.

Comments (41) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

December 10, 2009

Pfizer's R&D Productivity

Posted by Derek

Courtesy of Bernard Munos, author of the Nature Reviews article that I began blogging about yesterday, comes this note about Pfizer's track record with new molecules. His list of Pfizer NMEs since 2000 is Geodon (ziprasidone, 2001), Vfend (voriconazole, 2002, from Vicuron - whoops, not so, this one's Pfizer's), Relpax (eletriptan, 2002), Somavert (pegvisomant, 2003, from Pharmacia & Upjohn), Lyrica (pregabalin, 2004, from Warner Lambert), Sutent (sunitinib, 2006, from Sugen/Pharmacia), Chantix (varenicline, 2006), Selzentry (maraviroc, 2007), and Toviaz (fesoterodine, 2008, from Schwarz Pharma). There are some good drugs on that list, but considering that even just five years ago, the company was claiming that it had 101 NMEs in development, and was going to file 20 NDAs by now, it might seem a bit thin.
[Graph: Pfizer R&D spending and cumulative NMEs since 2000, provided by Munos]

It might especially seem that way when you look over this graph, also provided by Munos (but not used in his recent article). You can see that Pfizer's R&D spending has nearly tripled since the year 2000, but that cumulative NME line doesn't seem to be bending much. And, as Munos points out, two (and now three) productive research organizations have been taken out along the way to produce these results. It is not, as they say, a pretty picture.

Comments (49) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

Selective Scaffolds

Posted by Derek

We spend a lot of time in this business talking about molecular scaffolds - separate chemical cores that we elaborate into more advanced compounds. And there's no doubt that such things exist, but is part of the reason they exist just an outcome of the way chemical research is done? Some analysis in the past has suggested that chemical types get explored in a success-breeds-success fashion, so that the (over)representation of some scaffold might not mean that it has unique properties. It's just that it's done what's been asked of it, so people have stuck with it.

A new paper in J. Med. Chem. from a group in Bonn takes another look at this question. They're trying to see if the so-called "privileged substructures" really exist: chemotypes that have special selectivity for certain target classes. Digging through a public-domain database (BindingDB), they found about six thousand compounds with activity toward some 259 targets. Many of these compounds hit more than one target, as you'd expect, so there were about 18,000 interactions to work with.

Isolating structural scaffolds from the compound set and analyzing them for their selectivity showed some interesting trends. They divide the targets up into communities (kinases, serine proteases, and so on), and they definitely find community-selective scaffolds, which is certainly the experience of medicinal chemists. Inside these sets, various scaffolds also show tendencies for selectivity against individual members of the community. Digging through their supporting information, though, it appears that a good number of the most-selective scaffolds tend to come from the serine protease community (their number 3), with another big chunk coming from kinases (their number 1a). Strip those (and some adenosine receptor ligands and DPP inhibitors, numbers 11 and 8) out, and you've taken all the really eye-catching selectivity numbers out of their supplementary table S5. So I'm not sure that they've identified as many hot structures as one might think.

Another problem I have, when I look at these structures, is that a great number of them look too large for any useful further development. That's just a function of the data this team had to start with, but this gets back to the question of "drug-like" versus "lead-like" structures. I have a feeling that too many of the compounds in the BindingDB set are in the former category, or even beyond, which skews things a bit. Looking at a publication on it from 2007, I get the impression that a majority of compounds in it have a molecular weight greater than 400, with a definite long tail toward the higher weights. What medicinal chemists would like, of course, is a set of smaller scaffolds that will give them a greater chance of landing in a selective chemical space that can be developed. Some of the structures in this paper qualify, but definitely not all of them. . .

Comments (6) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico

December 9, 2009

Drug Companies Since 1950

Posted by Derek

There's a data-rich paper out in Nature Reviews Drug Discovery on the history of drug innovation in the industry. I'll get to its real conclusions in another upcoming post, but some of the underlying data are worth a post of their own.

The author (Bernard Munos of Lilly) looks at new drug approvals (NMEs) since 1950, and finds:

At present, there are more than 4,300 companies that are engaged in drug innovation, yet only 261 organizations (6%) have registered at least one NME since 1950. Of these, only 32 (12%) have been in existence for the entire 59-year period. The remaining 229 (88%) organizations have failed, merged, been acquired, or were created by such M&A deals, resulting in substantial turnover in the industry. . .Of the 261 organizations, only 105 exist today, whereas 137 have disappeared through M&A and 19 were liquidated.

At the high end of the innovation scale, 21 companies have produced half of all the NMEs that have been approved since 1950, but half of these companies no longer exist. . .Merck has been the most productive firm, with 56 approvals, closely followed by Lilly and Roche, with 51 and 50 approvals, respectively. Given that many large pharmaceutical companies estimate they need to produce an average of 2–3 NMEs per year to meet their growth objectives, the fact that none of them has ever approached this level of output is concerning.

Indeed it is - either those growth targets are unrealistic, or the number of new drugs thought to be needed to support them has been overestimated, or we're all in some trouble. Speculation welcomed - I lean toward the growth targets being hyped up to please investors, but I'm willing to be persuaded.

And the fact that most of the new drugs come from a much smaller list of companies should be no surprise - that looks like a perfect example of a power law (aka "long tail") effect. Given the way research works, I'd actually be surprised if it were any other way.

Now about that figure of 4,300 companies, though: what could possibly be on it? All sorts of startups that I've never heard of, of course - but how can that account for such a large number?

It seems to come from this PDF, where it appears on slide 114 (whew). There's no listing, just a breakdown of 1450 companies in the US, 450 in Canada, 1600 in Europe, and 740 in the Asia/Pacific region. Anyone want to hazard any guesses about how real those numbers are?

Comments (18) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

Water and Proteins Inside Cells: Sloshing Around, Or Not?

Posted by Derek

Back in September, talking about the insides of cells, I said:

There's not a lot of bulk water sloshing around in there. It's all stuck to and sliding around with enzymes, structural proteins, carbohydrates, and the like. . ."

But is that right? I was reading this new paper in JACS, where a group at UNC is looking at the NMR of fluorine-labeled proteins inside E. coli bacteria. (It's pretty interesting, not least because they found that they can't reproduce some earlier work in the field, for reasons that seem to have them throwing their hands up in the air). But one reference caught my eye - this paper from PNAS last year, from researchers in Sweden.

That wasn't one that I'd read when it came out - the title may have caught my eye, but the text rapidly gets too physics-laden for me to follow very well. The UNC folks appear to have waded through it, though, and picked up some key insights which otherwise I'd have missed. The PNAS paper is a painstaking NMR analysis of the states of water molecules inside bacterial cells. They looked at both good ol' E. coli and at an extreme halophile species, figuring that that one might handle its water differently.

But in both cases, they found that about 85% of the water molecules had rotational states similar to bulk water. That surprises me (as you'd figure, given the views I expressed above). I guess my question is "how similar?", but the answer seems to be "as similar as we can detect, and that's pretty good". It looks like all the water molecules past the first layer on the proteins are more or less indistinguishable from plain water by their method. (No difference between the two types of bacteria, by the way). And given that the concentration of proteins, carbohydrates, salts, etc. inside a cell is rather different than bulk water, I have to say I'm at a loss. I wonder how different the rotational states of water are (as measured by NMR relaxation times) for samples that are, say, 1M in sodium chloride, guanidine, or phosphate?

The other thing that struck me was the Swedish group's estimate of protein dynamics. They found that roughly half of the proteins in these cells were rotationally immobile, presumably bound up in membranes or in multi-protein assemblies. It's been clear for a long time that there has to be a lot of structural order in the way proteins are arranged inside a living cell, but that might be even more orderly than I'd been picturing. At any rate, I may have to adjust my thinking about what those environments look like. . .

Comments (8) + TrackBacks (0) | Category: Analytical Chemistry | Biological News

December 8, 2009

What The Hey? (Abstract Abstracts, Part II)

Posted by Derek

This one gets my Least Informative Abstract award - it crossed my RSS feed recently, and while it may be a very interesting paper, there's no way to tell from this illustration. The text below it, which shows up on the actual ACS abstract page (but not on the RSS feed) is somewhat more informative, and Zewail's name alone is enough to make me take a look at the full paper. But that illustration. . .to think that I was complaining about a bunch of colored dots and the way Nicolaou colors the inside of his aromatic rings. . .ah, those were simpler times.

Comments (8) + TrackBacks (0) | Category: The Scientific Literature

Another Blogroll Update

Posted by Derek

Time for some more useful sites! In the blog category, welcome to Intermolecular, Natural Product Man, and Experimental Error. And in the chemistry data section, I've added Synthetic Pages, Not Voodoo, and the Organic Chemistry Portal. And finally, I'd inexplicably left Ben Goldacre's Bad Science off the blogroll, even though I've linked to it several times, so I've fixed that. Enjoy!

Comments (2) + TrackBacks (0) | Category: Blog Housekeeping

Pfizer's Pearl River Layoffs

Posted by Derek

Pfizer's rounds of layoffs after the Wyeth merger are continuing, and look to go on for some time. A reader in New York state sends along word that there's been some controversy over the cuts at the Pearl River site. New York law requires a company to give both the state and its employees 90 days' notice if it lays off more than a set number or percentage of its staff. Pfizer's definitely over both limits, but according to the local newspaper (the Times Herald-Record), employees were told that the law didn't apply to them.

One Pearl River employee, whose identity was confirmed by the Times Herald-Record and who was granted anonymity, said company representatives told the laid-off employees the WARN law didn't apply to them. That source expressed concern that Pfizer was intentionally laying off small pockets of people to skirt WARN.

Now the paper (taking credit for the change) reports that Pfizer has indeed filed with the state that 200 employees will be let go in March. The paper has heard that a total of about 600 people will be laid off, although there are no state papers filed to cover that number yet.

Comments (9) + TrackBacks (0) | Category: Business and Markets | The Dark Side

December 7, 2009

Why Don't We Have More Protein-Protein Drug Molecules?

Posted by Derek

Almost all of the drugs on the market target one or more small-molecule binding sites on proteins. But there's a lot more to the world than small-molecule binding sites. Proteins spend a vast amount of time interacting with other proteins, in vital ways that we'd like to be able to affect. But those binding events tend to be across broader surfaces, rather than in well-defined binding pockets, and we medicinal chemists haven't had great success in targeting them.

There are some successful examples, with a trend towards more of them in the recent literature. Inhibitors of interactions of the oncology target Bcl are probably the best known, with Abbott's ABT-737 being the poster child of the whole group.

But even though things seem to be picking up in this area, there's still a very long way to go, considering the number of possible useful interactions we could be targeting. And for every successful molecule that gets published, there is surely an iceberg of failed attempts that never make the literature. What's holding us back?

A new article in Drug Discovery Today suggests, as others have, that our compound libraries aren't optimized for finding hits in such assays. Given that the molecular weights of the compounds that are known to work tend toward the high side, that may well be true - but, of course, since the amount of chemical diversity up in those weight ranges is ridiculously huge, we're not going to be able to fix the situation through brute-force expansion of our screening libraries. (We'll table, for now, the topic of the later success rate of such whopper molecules).

Some recent work has suggested that there might be overall molecular shapes that are found more often in protein-protein inhibitors, but I'm not sure if everyone buys into this theory or not. This latest paper does a similar analysis, using 66 structurally diverse protein-protein inhibitors (PPIs) from the literature compared to a larger set (557 compounds) of traditional drug molecules. The PPIs tend to be larger and greasier, as feared. They tried some decision-tree analysis to see what discriminated the two data sets, and found a shape descriptor and another one that correlated more with aromatic ring/multiple-bond count. Overall, the decision tree stuff didn't shake things down as well as it does with data sets for more traditional target classes, which doesn't come as a surprise, either.
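
For what it's worth, that kind of descriptor-plus-decision-tree exercise is simple to sketch out. The toy example below uses invented numbers (not the paper's data set or its actual descriptors) just to show the shape of the analysis: compute a couple of crude properties per compound, train a tree, and see which splits it picks.

    # Toy decision-tree sketch with invented data (not the paper's):
    # can molecular weight and aromatic ring count separate "PPI-like"
    # compounds from traditional drug-like ones?
    from sklearn.tree import DecisionTreeClassifier, export_text

    # [molecular weight, aromatic ring count] -- all values made up
    X = [[650, 5], [720, 6], [580, 4], [690, 5],   # PPI-like examples
         [320, 2], [410, 3], [280, 1], [360, 2]]   # traditional drug examples
    y = ["PPI", "PPI", "PPI", "PPI", "drug", "drug", "drug", "drug"]

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["MW", "aromatic_rings"]))
    print(tree.predict([[600, 5]]))  # classify a hypothetical new compound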

So the big questions are still out there: can we go after protein-protein targets with reasonably-sized molecules, or are they going to have to be big and ugly? And in either case, are there structures that have a better chance of giving us a lead series? If that's true, is part of the problem that we don't tend to have such things around already? If I knew the answers to these questions, I'd be out there making the drugs, to be honest. . .

Comments (14) + TrackBacks (0) | Category: Drug Assays | Drug Industry History | In Silico

Once You Have Paid Him the Danegeld. . .

Posted by Derek

. . .you never get rid of the Dane. (The rest of the poem, if you haven't come across it.)

Comments (11) + TrackBacks (0) | Category: Current Events

December 4, 2009

Caloric Restriction and Lifespan - Without the Caloric Restriction?

Posted by Derek

If there's one thing that study-of-aging researchers can agree on, it's that caloric restriction seems to prolong lifespan in a number of different organisms. The jury is still out on whether this extends all the way up to humans - people are giving it a try, with varying degrees of dedication and experimental rigor, but it takes quite a while for the results to come in.

One thing that stands out from experiments in small organisms is that cutting back on food intake seems to increase lifespan at the expense of fertility. That makes sense in a sort of three-laws-of-robotics way: the first task is to survive. The second task is to reproduce, as long as that doesn't interfere too much with survival. . .so under very tight energy restrictions, the organism doesn't have enough overhead to move on to the reproduction side of things. (On the other hand, under abundant food conditions, it may be that for some organisms reproduction moves up into first place, depending on what kind of ecological niche they're trying to fill).

The usual thinking here has been that it's total availability of food that throws these switches, through pathways that are sensitive to metabolic flux. There's now a paper out in Nature that makes this model harder to stick with. The researchers look at fruit flies, Drosophila, the very pest that I'm trying to evict from my kitchen at home (thanks to a recent contaminated package of plantains). As it turns out, it's known that these flies don't eat fruit so much as they eat yeast, which accounts for their attraction to bread, vinegar, beer, and overripe produce. This paper tries to pin down which nutrients, exactly, in yeast have effects on fecundity and lifespan, and whether they really are mutually exclusive.

A good way to search for those effects is to take a population of calorically-restricted fruit flies and add nutrients back to their diet to see if anything shows up in lifespan or egg-laying behavior. Vitamins, lipids, and carbohydrates were soon ruled out as entire classes - none of the ones found in yeast seemed to have much of an effect either way when they were added back to the diet. That's an interesting result right there - the flies were now getting more food, but their lifespans did not decrease, suggesting that it's not just calories per se that have the effect.

That leaves proteins, and their constituent amino acids. And there things started to get interesting. Adding an amino acid mixture recapitulated the effect of full feeding: lifespans went down, and reproduction went back up. After looking for possible general non-nutritional effects of amino acids (effects on pH, osmotic strength of the food solution, and so on - nothing meaningful found), the team then narrowed things down, trying mixtures of the ten amino acids that are known to be essential for Drosophila versus the ten that aren't. (It's pretty much the same list as for humans, actually).

Adding back the non-essential ones slightly decreased lifespan, with no effect on reproduction. Adding back the essential amino acids (EAAs), though, had substantial effects on both. Now things are getting close to the payoff: amino acids seem to be behind basically all of the caloric restriction effect, and the ten essential ones account for almost all of that. What about looking at them one by one? (I really love science, I have to tell you).

I'll take you right to the end, although plenty of experimentation was needed to get there: it comes down to methionine. Tryptophan has some effect, but methionine alone is sufficient to bring reproduction back to the levels seen in full feeding when you start with calorically restricted flies that are getting the other essential amino acids. It works in a dose-dependent manner, too: if you take restricted-nutrient flies and start putting methionine back into their diet, the fecundity comes up in tandem, eventually plateauing out to a level that you can only raise by giving them more of the other essential amino acids (which are presumably now the things in short supply). That makes it seem as if methionine isn't some signal that it's time to lay eggs - its effects depend on the concentration of the other nine essential amino acids.

Now here's the really neat part: adding methionine back to the diet did not decrease lifespan. So lifespan and reproduction are not always coupled. I'll let the authors lay it out (I've stripped out the references to other papers and to figures that are found in the original text):

Adding back each EAA individually did not decrease lifespan, although, again, methionine alone increased fecundity. Adding back all EAAs except methionine restored lifespan to the level corresponding to dietary restriction, whereas omission of tryptophan had no effect. Notably, restriction of methionine alone also increases lifespan in rodents. Methionine thus acts in combination with one or more other EAAs to shorten lifespan with full feeding. Full feeding thus increases fecundity and decreases lifespan through the effects of different nutrients in Drosophila, the fecundity increase through methionine alone and the lifespan decrease through a combination of methionine and other EAAs. There is thus an imbalance in the ratio of amino acids in yeast relative to the ratio the fly requires for the high fecundity from full feeding, and some consequence of this imbalance decreases lifespan. . .

. . .The mechanisms that influence lifespan are conserved over the large evolutionary distances between invertebrates and mammals, and our results hence imply that in mammals also the benefits of dietary restriction for health and lifespan may be obtained without impaired fecundity and without dietary restriction itself, by a suitable balance of nutrients in the diet.

Now that's going to set off the nutritional supplement industry, for sure, although the lack of effect of vitamins and various lipids will put a crimp into some sections of it. But I find this a fascinating result, and believe that it's probably only the beginning of a long, interesting, and important field of study.

Comments (30) + TrackBacks (0) | Category: Aging and Lifespan

December 3, 2009

All Of You Industrial Scientists: Out Of the Room

Posted by Derek

Continuing Education (CE) is a big issue in many medical fields and those associated with them. Licensing boards and professional societies often require proof that people are keeping up with current developments and best practices, which is a worthy goal even if arguments develop over how well these systems work.

And it's also been a battleground for fights over commercial conflicts of interest. On the one hand, no one needs a situation where a room full of practitioners sits down to a blatant sales pitch that nonetheless counts as continuing education. But on the other hand, you have the problem that's now developing thanks to new policies by the Accreditation Council for Continuing Medical Education (ACCME) and the Accreditation Council for Pharmacy Education (ACPE). Thanks to a reader, I'm reproducing below some key parts of a letter that one professional organization, the American Society for Clinical Pharmacology and Therapeutics, has recently sent out to its members:

In 2006, ACCME and ACPE adopted new accreditation policies that went into effect in January 2009. Most concerning of these new policies is the requirement that CE providers develop activities/education interventions independent of any commercial interest, including presentation by industry scientists. This requirement greatly impacts the Society as industry scientists constitute nearly 50% of our membership and contribute significantly to the scientific programming of the ASCPT Annual Meeting. . .

ASCPT has been left with two options: 1) stop providing CE credit and continue to involve scientists from industry in the scientific program of the Annual Meeting; or 2) continue providing CE credit and remove all industry scientists from the program and planning process. . .

They go on to say that this year's meeting, having already been planned in the presence of Evil Industry Contaminators (well, they don't quite say it like that), will have no CE component, and that they don't see how they'll be able to have any such in the future, since they can't very well keep half the membership from presenting their work. This is definitely a problem for a number of professional organizations, particularly the ones that deal with clinical research. They intersect with the professions that tend to have continuing education requirements, but a significant part of the expertise in their fields is found in industry. The ASCPT is not the only society facing this same dilemma.

It looks as if the accreditation groups decided that they were faced with a choice: commit themselves to judging what sorts of presentations should count for CE credit (which you might think was their job), or just toss out anything that has any connection with industry. That way you can look virtuous and save time, too. My apologies if I'm descending into ridicule here, but as an industrial scientist I find myself resenting the implication that my hands (and those of every single one of my colleagues) are automatically considered too dirty to educate any practicing professionals.

To be fair, this could well be one of those situations that the industry has helped bring on itself. I've no doubt that the CME process has probably been abused in the past. (Update: see the comments section. Am I being too delicate in this phrasing? Probably comes from never having dealt much with the marketing side of the business. . .) But there has to be some way to distinguish the old-fashioned "golf-resort meeting" from a clinical pharmacologist delivering a paper on new protocols for trial designs. The last thing we need is to split the scientific community even more than it's split already.

Comments (14) + TrackBacks (0) | Category: Academia (vs. Industry) | Clinical Trials | Drug Development

December 2, 2009

Copyright 1671: I Like the Sound of That

Posted by Derek

Thanks to the Royal Society, here's the sort of scientific paper that they just don't make any more: "A Letter of Mr. Isaac Newton, Professor of the Mathematicks at the University of Cambridge, Containing His New Theory About Light and Colors". Along the way, in between making fundamental observations about refraction, rainbows, white light, complementary colors, and human perception, he invents the reflecting telescope that I take out into my yard on clear nights.

Newton was the Real Deal if anyone ever was. Like Bernoulli, you may recognize the lion by his paw.

Comments (20) + TrackBacks (0) | Category: General Scientific News

Data, Raw and Otherwise

Email This Entry

Posted by Derek

Perhaps I should talk a bit about this phrase "raw data" that I and others have been throwing around. For people who don't do research for a living, it may be useful to see just what's meant by that term.

As an example, I'll use some work that I was doing a few years ago. I had a reaction that was being run under a variety of conditions (about a dozen different ways, actually), but in each case was expected to either do nothing or produce the same product molecule. (This was, as you can see, a screen to see which conditions did the best job at getting the reaction to work). I set this up in a series of vials, taking care to run everything at the same concentration and to start all the reactions off at as close to the same time as I could manage.

After a set period, the reaction vials were all analyzed by LC/MS, a common (and extremely useful) analytical technique. I'd already given the folks running that machine a sample of the known product I was looking for, and they'd worked up conditions that reproducibly detected it with high sensitivity. They ran all my samples through the machine, and each one gave a response at the other end.

And those numbers were my raw data - but it's useful to think about what they represented. The machine was set to monitor a particular combination of ions, which would be produced by my desired product. As the sample was pumped through a purification column, the material coming out the far end was continuously monitored for those specific ions, and when they showed up, the machine would count the response it detected and display this as a function of time: a flat line, then a curvy, pointed peak which went up and then came back down as the material of interest emerged from the column and dwindled away again.

So the numbers the machine gave me were the area under the curve of that peak, and that means, technically, that we're one step away from raw numbers right there. After all, area-under-the-curve is something that's subject to the judgment of a program or a person - where, exactly, does this curve start, and where does it end? Modern analytical machines are quite good at judging this sort of thing, but it's always good to look over their little electronic shoulders to make sure that their calls look correct to you. If you want to be hard-core about it, the raw data would be the detector response for each individual reading, at whatever frame rate the machine was sampling at. That's even more raw than most people need - actually, while writing this, I had to think for a moment to picture the data at that level, because it's not something I'd usually see or worry about. For my purposes, I took the areas that were calculated, since the peak shapes looked good, and the machine's software was able to evaluate them consistently and didn't have to apply any sort of correction to them to meet its own quality standards.
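
If it helps to picture that "raw raw" layer, here's a minimal sketch (in Python, with an entirely invented detector trace and integration window, and nothing to do with any instrument vendor's actual software) of how a peak area comes from the individual detector readings:

```python
import numpy as np

def peak_area(times, responses, t_start, t_end):
    """Integrate the detector response between two retention times with the
    trapezoid rule. Where the peak 'starts' and 'ends' is exactly the sort
    of judgment call the instrument software normally makes for you."""
    mask = (times >= t_start) & (times <= t_end)
    t, y = times[mask], responses[mask]
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t) / 2.0))

# Fake trace: a sharp peak at 2.3 minutes sitting on a flat baseline,
# sampled at roughly ten readings per second.
times = np.linspace(0.0, 5.0, 3000)
responses = 1e5 * np.exp(-((times - 2.3) ** 2) / 0.005) + 50.0

print(peak_area(times, responses, t_start=2.0, t_end=2.6))
```

A real integration would also subtract the baseline and worry about overlapping peaks, but the point stands: the area is already a processed number, one level up from the readings themselves.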

So there's one set of numbers. But the person running the machine had taken the trouble (as they should have) to run a standard curve using my supplied reference compound. That is, they'd dissolved it up to a series of ever-more-dilute solutions, and run those through the machine beforehand. This, plotted as peak area versus the real concentration, gave pretty much a straight line (as it should), and the machine's software was set up to use this information to also calculate a concentration for every one of my product peaks. So the data set that I got had the standard plot, followed by the experiments themselves, with both the peak areas and the resulting calculated amounts. Since these were related by what was very nearly a straight line, I probably could have used either one. But it's important to realize the difference: by using the calculated concentrations, I could either be correcting for a defect in the machine (if its detector response really wasn't quite linear), or I could be introducing error (if the standard solutions hadn't been made up quite right). It's up to you, as a scientist, to decide which way to go. In my case, I worked up the data both ways, and found that the resulting differences were far too small to worry about. So far, so good.
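
Here's a minimal sketch of that calibration step, with invented numbers standing in for the dilution series and the samples. The point is just that the "calculated concentration" column is the peak area pushed back through a fitted straight line:

```python
import numpy as np

# Standard curve: known concentrations (arbitrary units) vs. measured peak areas.
std_conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0])
std_area = np.array([1.1e4, 2.0e4, 5.2e4, 1.0e5, 2.1e5])

# Assume the response is linear and fit area = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_area, deg=1)

def area_to_conc(area):
    """Invert the calibration line to turn a peak area into a concentration."""
    return (area - intercept) / slope

sample_areas = np.array([3.3e4, 7.8e4, 1.5e5])
print(area_to_conc(sample_areas))
```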

But there's another layer: I had done these experiments in triplicate. There were actually thirty-six vials for the twelve different conditions, because I wanted to see how reproducible the experiments were. For my final plots, then, I used the averages of the three runs for each reaction, and plotted the error bars thus generated to show how tight or loose these values really were. That's what I meant about the area numbers versus the concentration numbers question not meaning much in this case. Not only did they agree very well, but the variations between them were far smaller than the variations between different runs of the same experiments, and thus could safely be put in the "don't worry about it" category while interpreting the data.
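
A minimal sketch of that reduction step, again with made-up numbers: twelve conditions, three replicates each, collapsed to a mean and a spread per condition.

```python
import numpy as np

rng = np.random.default_rng(0)
# Rows are the twelve conditions, columns are the three replicate runs.
concentrations = rng.normal(loc=5.0, scale=0.4, size=(12, 3))

means = concentrations.mean(axis=1)
spreads = concentrations.std(axis=1, ddof=1)  # sample standard deviation, used for the error bars

for i, (m, s) in enumerate(zip(means, spreads), start=1):
    print(f"Condition {i:2d}: {m:5.2f} +/- {s:4.2f}")
```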

What I did notice while doing this, though, was something else that was significant. My mass spec colleague had done something else which was very good practice: including a standard injection every so often during the runs of experimental determinations. Looking these over, I found that this same exact sample, of known concentration, was coming out as having less and less product in it as the process went on. That's certainly not unheard of - it usually means that the detector was getting less sensitive as time went on due to some sort of gradually accumulating gunk from my samples. Those numbers really should have been the same - after all, they were from the same vial - so I plotted out a curve to see how they declined with time. I then produced another column of numbers where I used that as a correction factor to adjust the data I'd actually obtained. The first runs needed little or no correction, as you'd figure, but by the end of the run, there were some noticeable changes.
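
The drift correction itself is simple enough to sketch. Assuming the repeated standard injections fall off roughly linearly with injection number, you fit that decline and divide each measured value by the fitted relative response at that point in the run. The numbers below are invented:

```python
import numpy as np

# Repeated injections of the same standard, expressed relative to the first one.
std_injection_no = np.array([0, 10, 20, 30])
std_rel_response = np.array([1.00, 0.94, 0.87, 0.80])

slope, intercept = np.polyfit(std_injection_no, std_rel_response, deg=1)

def drift_corrected(value, injection_no):
    """Scale a measurement back up to what a fully sensitive detector
    would have reported at that point in the run."""
    return value / (slope * injection_no + intercept)

print(drift_corrected(4.2, injection_no=3))   # early run: barely changes
print(drift_corrected(4.2, injection_no=28))  # late run: noticeably adjusted upward
```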

So now I had several iterations of data for the same experiment. There was the raw raw data set, which I never really saw, and which would have been quite a pile if printed out. This was stored on the mass spec machine itself, in its own data format. Then I had numbers that I could use, the calculated areas of all those peaks. After that I had the corresponding concentrations, calculated from the standard concentration curve that was run before the samples were injected. Then I had the values that I'd corrected for the detector response over time. And finally, once all this was done, I had the averages of the three replicate runs for each set of conditions.

When I saved my file of data for this experiment, I took care to label everything I'd done. (I was sometimes lazier about such things earlier in my career, but I've learned that you can save ten minutes now only to spend hours eventually trying to figure out what you've done.) The spreadsheet included all those iterations, each in a labeled column ("Area", "Concentration", "Corrected for response"), and both the standard curves and my response-versus-injection-number plots were included.

So how did my experiments look? Pretty good, actually. The error bars were small enough to see differences in the various conditions, which is what I'd hoped for, and some of those conditions were definitely much better than others. In fact, I thought I saw a useful trend in which ones worked best, and (as it turned out), this trend was even clearer after applying the correction for the detector response. I was glad to have the data; I've had far, far worse.

When presenting these results to my colleagues, I showed them a bar chart of the averages for the twelve different conditions, with the associated error bars plotted, which was good enough for everyone in my audience. If someone had asked to see my raw data, I would have sent them the file I mentioned above, with a note about how the numbers had been worked up. It's important to remember that the raw data are the numbers that come right out of the machine - the answers the universe gave you when you asked it a series of questions. The averages and the corrections are all useful (in fact, they can be essential), but it's important to have the source from which they came, and it's essential to show how that source material has been refined.
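
For completeness, here's what that last presentation step looks like as code - a bar chart of per-condition means with the standard deviations as error bars. Every value and label below is invented:

```python
import numpy as np
import matplotlib.pyplot as plt

means = np.array([3.1, 4.8, 2.2, 5.5, 4.0, 3.7, 5.9, 2.8, 4.4, 3.3, 5.1, 4.6])
spreads = np.array([0.3, 0.4, 0.2, 0.5, 0.3, 0.4, 0.6, 0.3, 0.4, 0.2, 0.5, 0.3])
conditions = [f"Cond {i + 1}" for i in range(12)]

plt.bar(conditions, means, yerr=spreads, capsize=4)
plt.ylabel("Calculated product concentration (arbitrary units)")
plt.xticks(rotation=45, ha="right")
plt.tight_layout()
plt.show()
```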

Comments (43) + TrackBacks (0) | Category: Life in the Drug Labs

December 1, 2009

Climategate and Scientific Conduct

Email This Entry

Posted by Derek

Everyone has heard about the "Climategate" scandal by now. Someone leaked hundreds of megabytes of information from the University of East Anglia's Climatic Research Unit, and the material (which appears to be authentic) is most interesting. I'm not actually going to comment on the climate-change aspect of all this, though. I have my own opinions, and God knows everyone else has theirs, too, but what I feel needs to be looked at is the scientific conduct. I'm no climatologist, but I am an experienced working scientist - so, is there a problem here?

I'll give you the short answer: yes. I have to say that there appear to be several, as shown by many troubling features in the documents that have come out. The first one is the apparent attempt to evade the UK's Freedom of Information Act. I don't see how these messages can be interpreted in any other way than as an attempt to break the law, and I don't see how they can be defended:

Can you delete any emails you may have had with Keith re AR4?
Keith will do likewise. He's not in at the moment - minor family crisis. Can you also email Gene and get him to do the same? I don't have his new email address. We will be getting Caspar to do likewise.

A second issue is a concerted effort to shape what sorts of papers get into the scientific literature. Again, this does not seem to be a matter of interpretation; such messages as this, this, and this spell out exactly what's going on. You have talk of getting journal editors fired:

This is truly awful. GRL has gone downhill rapidly in recent years.
I think the decline began before Saiers. I have had some unhelpful dealings with him recently with regard to a paper Sarah and I have on glaciers -- it was well received by the referees, and so is in the publication pipeline. However, I got the impression that Saiers was trying to keep it from being published.

Proving bad behavior here is very difficult. If you think that Saiers is in the greenhouse skeptics camp, then, if we can find documentary evidence of this, we could go through official AGU channels to get him ousted. Even this would be difficult.

And of trying to get papers blocked from being referenced:

I can't see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow - even if we have to redefine what the peer-review literature is !

Two questions arise: is this defensible, and does such behavior take place in other scientific disciplines? Personally, I find this sort of thing repugnant. Readers of this site will know that I tend to err on the side of "Publish and be damned", preferring to let the scientific literature sort itself out as ideas are evaluated and experiments are reproduced. I support the idea of peer review, and I don't think that every single crazy idea should be thrown out to waste everyone's time. But I set the "crazy idea" barrier pretty low, myself, remembering that a lot of really big ideas have seemed crazy at first. If a proposal has some connection with reality, and can be tested, I say put it out there, and the more important the consequences, the lower the barrier should be. (The flip side, of course, is that when some oddball idea has been tried and found wanting, its proponents should go away, to return only when they have something sturdier. That part definitely doesn't work as well as it should.)

So this "I won't send my work to a journal that publishes papers that disagree with me" business is, in my view, wrong. The East Anglia people went even farther, though, working to get journal editors and editorial boards changed so that they would be more to their liking, and I think that that's even more wrong. But does this sort of thing go on elsewhere?

It wouldn't surprise me. I hate to say that, and I have to add up front that I've never witnessed anything like this personally, but it still wouldn't surprise me. Scientists often have very easily inflamed egos, and divide into warring camps all too easily. But while it may have happened somewhere else, that does not make it normal (and especially not desirable) scientific behavior. This is not a standard technique by which our sausage is made over here.

What I've seen in organic chemistry are various attempts to steer papers to particular reviewers (or evade other ones). And I've seen people fire off angry letters to journal editors about why some particular paper was published (and why the letter writer's manuscript in response had not been accepted in turn, likely as not). The biggest brawl of them all was still going early in my career (having started before I was born): the fight over the nonclassical norbornyl cation, the very mention of which is still enough to make some older chemists put their hands over their ears and start to hum loudly. That one involved (among many others) two future Nobel Prize winners (H. C. Brown and George Olah), and got very heated indeed - but I still don't recall either one of them trying to get journal editors fired after publishing rival manuscripts. You don't do that sort of thing.

And that brings up an additional problem with all this journal curating: the CRU people have replied to their critics in the past by saying that more of their own studies have been published in the peer-reviewed literature. This is disingenuous when you're working at the same time to shape the peer-reviewed literature into what you think it should look like.

The third issue I want to comment on concerns the problems with the data and its analysis. I have deep sympathy for the fellow who tried to reconcile the various poorly documented and conflicting data sets and buggy, unannotated code that the CRU has apparently depended on. And I can easily see how this happens. I've been on long-running projects, especially some years ago, where people start to lose track of which numbers came from where (and when), where the underlying raw data are stored, and the history of various assumptions and corrections that were made along the way. That much is normal human behavior. But this goes beyond that.

Those of us who work in the drug industry know that we have to keep track of such things, because we're making decisions that could eventually run into the tens and hundreds of millions of dollars of our own money. And eventually we're going to be reviewed by regulatory agencies that are not staffed with our friends, and who are perfectly capable of telling us that they don't like our numbers and want us to go spend another couple of years (and another fifty or hundred million dollars) generating better ones for them. The regulatory-level lab and manufacturing protocols (GLP and GMP) generate a blizzard of paperwork for just these reasons.

But the stakes for climate research are even higher. The economic decisions involved make drug research programs look like roundoff errors. The data involved have to be very damned good and convincing, given the potential impact on the world economy, through both the possible effects of global warming itself and the effects of trying to ameliorate it. Looking inside the CRU does not make me confident that their data come anywhere close to that standard:

I am very sorry to report that the rest of the databases seem to be in nearly as poor a state as Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO and one with, usually overlapping and with the same station name and very similar coordinates. I know it could be old and new stations, but why such large overlaps if that's the case? Aarrggghhh! There truly is no end in sight... So, we can have a proper result, but only by including a load of garbage!

I do not want the future of the world economy riding on this. And what's more, the CRU apparently no longer has much of its original raw data; it seems to have been tossed over twenty years ago. What we have left, as far as I can see, is a large data set of partially unknown origin, which has been adjusted by various people over the years in undocumented ways. If this is not the case, I would very much like the CRU to explain why not, and in great detail. And I do not wish to hear from people who wish to pretend that everything's just fine.

The commentator closest to my views is Clive Crook at The Atlantic, whose dismay at all this is unhidden. I'm not hiding mine, either. No matter what you think about climate change, if you respect the scientific endeavor, this is very bad news. Respect has to be earned. And it can be lost.

Comments (170) + TrackBacks (0) | Category: Current Events | General Scientific News | The Dark Side | The Scientific Literature