About this Author
[Author photos: college chemistry, 1983; the 2002 model; after 10 years of blogging]

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs:
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business:
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events:
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres:
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Monthly Archives

November 26, 2008

How Slow is Research Today? Here's a Recipe!

Posted by Derek

The pace of research has noticeably slowed today here in the US. Most industrial labs will be empty tomorrow, Friday, and through the weekend, and even the academic labs will have fewer grad students and post-docs hanging out in them. I'll be cleaning up some previously run reactions, setting up anything that can comfortably go for a few days, and otherwise getting ready for Monday myself. This is not a day to try any tricky chemistry.

I also have a manuscript that I'm working on, and it would be a good use of my time to try to finish up its experimental section. The paper will likely be of interest to the readership here, so I'll be sure to note when it makes it into print. It'll be good to hit the scientific literature again; everything that's gone onto my list for the last year or two has been residual stuff from the Wonder Drug Factory, and there's not much of that left, naturally.

And I'll be observing a blog holiday until Monday as well, unless of course, something big happens. (I rather doubt that anything will, and considering what "something big" usually means, I rather hope nothing does). I'd like to wish all the US readers a happy Thanksgiving, and if anyone in the rest of the readership wants to try cooking a turkey, well, it's not as hard as it's cracked up to be. If you soak it in some salt water beforehand, it's quite tasty (my wife and I usually buy a kosher turkey, since they've already been salted). Allow me to finish up by furnishing the details of last night's synthetic work, at home in the kitchen with my two children:

Melt 3 tablespoons (43 grams) of butter and two squares of unsweetened baking chocolate (I used a coffee cup set in a pan of boiling water). Beat 3 eggs in a good-sized bowl. Then, in a small saucepan, combine 1 cup (240 mL) of corn syrup and 1/2 cup table sugar (100 g), and bring the mixture to a boil for about two minutes. (It doesn't look at first as if the sugar will go into solution, but it will - you naturally don't want this to cool down, though, once it has). Add the butter/chocolate mixture to the sugar syrup (they're not all that miscible, but do what you can), and add this gemisch slowly to the beaten eggs, stirring vigorously. (As I explained to my kids, if you were to dump these together with no stirring, you'd end up with chocolate-covered scrambled eggs; I try to teach them some technique along the way). Stir in a teaspoon (5 mL) of vanilla extract, 1 1/4 cups of pecan pieces (about 130 grams, I think), and pour the resulting slurry into a pie crust, your own or the store's. Bake about 45 minutes at 375F (190C, gas mark 5 for you subjects of the Queen). Yield: one chocolate pecan pie.

Comments (11) + TrackBacks (0) | Category: Blog Housekeeping

November 25, 2008

Avandia: Trouble, Run Head to Head

Posted by Derek

Avandia (rosiglitazone) has been under suspicion for the last couple of years, after data appeared suggesting a higher rate of cardiovascular problems with its use. GlaxoSmithKline has been disputing this association all the way, as well they might, but today there’s yet more information to dispute.

A retrospective study in the Archives of Internal Medicine looked at about 14,000 patients on Medicare (older than 65) who were prescribed Avandia between 2000 and 2005. Now, looking backwards at the data is always a tricky business. For example, comparing these patients to another group that didn't get the drug could easily lead you astray – the obvious mistake there is that if someone has been prescribed Avandia, they're likely getting it because they've got Type II diabetes (or metabolic syndrome at least). Comparing that cohort to a group that isn't showing such symptoms would be wildly misleading.

But this study compared the Avandia patients to 14,000 who were getting its direct competitor, Actos (pioglitazone). Now that's more like it. The two drugs are indicated for the same patient population, for the same reasons. Their mechanism of action is supposed to be the same, too, as much as anyone can tell with the PPAR-gamma compounds. I wrote about that here – the problem with these drugs is that they affect the transcription of hundreds of genes, making their effects very hard to work out. Rosi and pio overlap quite a bit, but there are definitely (PDF) genes that each of them affects alone, and many others that they affect to different degrees. Clinically, though, they are in theory doing the exact same thing.

But are they? This study found that the patients who started on Avandia had a fifteen per cent higher deaths-from-all-causes rate than the Actos group. To me, that's a startlingly high number, and it really calls for an explanation. The Avandia group had a 13 per cent higher rate of heart failure, but no difference in strokes and heart attacks, oddly. The authors believe that these latter two causes of death are likely to be undercounted in this population, though – there's a significant no-cause-reported group in the data.

The authors also claim that the two populations were “surprisingly similar”, strengthening their conclusions. I think that that’s likely to be the case, given the similarities between the two drugs. GlaxoSmithKline, for their part, is saying that these numbers don’t match the safety data they’ve collected, and that a randomized clinical trial is the best way to settle such issues.

Well, yeah: a randomized clinical trial is the best way to settle a lot of medical questions. But neither GSK nor Takeda and Lilly (the makers of Actos) have seen fit to go head-to-head in one, have they? My guess is that both companies felt that the chances of showing a major clinical difference between the two were small, and that the size, length, and expense of such a trial would likely not justify its results. And if we're talking about the beneficial mechanisms of action here, that's probably true. You'd have quite a time showing daylight between the two drugs on things like insulin sensitivity, glycosylated hemoglobin, and other measures of diabetes. Individual patients may well show differences, and that's useful in practice - but that's a hard thing to show in a large averaged set of data. But how about nasty side effects? Maybe there's some room there - but in a murky field like PPAR-gamma, you'd have to have a lot of nerve to run a trial hoping to see something bad in your competitor's compound, while still being sure enough of your own. No, it's disingenuous to talk about how these questions need to be answered by a clinical trial, when you haven't done one, haven't planned one, and have (what seemed to be) good reasons not to.

This kind of study is the best rosi-to-pio comparison we're likely to get. And it does not look good for Avandia. GSK is going to have to live with that - and in fact, they already are.

Comments (4) + TrackBacks (0) | Category: Clinical Trials | Diabetes and Obesity | Toxicology

November 24, 2008

Two Drugs in One? Maybe Not.

Posted by Derek

Since I was talking about Nitromed on Friday, let me mention another attempt to combine two known drugs into a new therapy. Another Cambridge company whose front doors I walk by once in a while is CombinatoRx. If they'd had that name back in the early 1990s, you'd have assumed that they did combinatorial chemistry, but their plan is to take approved drugs and find greater-than-the-sum-of-their-parts combinations to get approved as a single pill.

That's not easy. It's hard enough figuring out just how single drugs behave in the real world, and any physician will tell you all about what fun it is to deal with drug interactions. Finding beneficial drug interactions, especially unknown ones, is a real uphill climb. But CombinatoRx thought they had one in the mixture of low-dose prednisolone and dipyridamole.

Prednisolone is a well-known corticosteroid which is used to suppress inflammation and the immune response. Dipyridamole is a multi-mechanism drug that increases the free concentration of adenosine, and it's been used to inhibit clotting and lower pulmonary hypertension. Blood pressure problems are common with prednisolone, and the company believed that the prednisolone dose could be taken down to non-side-effect levels in the presence of the other drug. So they formulated a combination pill (Synavive, CRx-102) to test this out in osteoarthritis patients. The stakes were high - here's a writeup from before the results came out last month.

Things did not work out. The Phase IIb study definitively missed its endpoints. Not only did Synavive fail to beat prednisolone alone, it didn't reach statistical significance versus the placebo group, either. The stock dropped 72% the next day, and the company has now announced layoffs that total 65% of its workforce.

What I have to wonder, though, is how things would have worked out in the long run even if the trial had succeeded. As Nitromed's experience shows, it's a hard business convincing insurers to pay a premium for two generic drugs just because they're now available in one pill. I know that CombinatoRx was making much out of their proprietary formulation, no doubt anticipating such objections. But I wonder if a company in this space would have to actually run a head-to-head against the two-generic-pill dosing regimen to really convince people that it had something to offer. And that would take nerves of steel, for sure. . .

Comments (12) + TrackBacks (0) | Category: Business and Markets | Clinical Trials

November 21, 2008

The Back Door to the Stock Market

Posted by Derek

Nitromed has been in trouble for several years now. They're a perfect example of a dog that caught a car: a company that was demolished by actually getting its drug on the market. No one wanted to pay for it, though, and after all that expense the company was left worse off than before.

Their remnants, though, are still listed on the NASDAQ, and that's more than a lot of more promising companies can say. One of those is Archemix, a company that I see every day on my way to work in Cambridge. They're working on aptamer-based therapies, and are at the stage in their life where (under normal conditions) they'd be thinking about an IPO. Well, they were thinking about that, but. . .these aren't exactly normal conditions, and the company recently announced that it was shelving that plan for now.

Enter Nitromed! The two companies have now announced a merger (with ownership split 70:30) which will keep the Archemix name and Nitromed's NASDAQ listing. Just add water, and you have an IPO, albeit with some dilution compared to the traditional way. Interestingly, the CEO of Nitromed will be the CEO of the new company, which, as you might imagine, has come as a bit of a surprise to the folks in the trenches at Archemix.

But good luck to them all - in this environment, we all need it. Aptamers are an interesting and risky business, but not as risky as developing cardiovascular drugs that no one wants. . .

Comments (8) + TrackBacks (0) | Category: Business and Markets

November 20, 2008

Noisy Numbers

Posted by Derek

A colleague e-mailed me last night with an observation that he'd heard recently: "Have you noticed," he said, "that the numbers we use get less and less precise, the farther away they get from the chemists?"

Thinking about it, I’d have to say that’s right, although I don’t think that we can claim any particular credit. After all, we have our feet planted in physics. Our molecular weights are based on the weights of the elementary particles, which are known. . .pretty exactly. And we’ve got a pretty good handle on molecular formulae, too, so we can go around getting mass spectra out to four decimal places and learning all kinds of things from them.

But then when these compounds get run through the primary assays, purified enzymes or the like, the numbers start getting fuzzier. Protein preps are all subtly different – ideally, they should be different in ways that make no difference, but then there's the actual running of the assay to consider. Reproducibility varies, but no one gets worked up about a compound that shows, say, a three nanomolar inhibition in one assay and a six nanomolar in another. "Single digit nanomolar" is all we need to know, and it's good odds that the next one will split the difference and come in at four or five, anyway.
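
(A minimal sketch of that intuition, for anyone who wants it - potency numbers like these behave more log-normally than normally, so replicate IC50s are usually averaged as a geometric mean. This Python snippet just uses the 3 and 6 nanomolar figures from the example above.)

import math

def geometric_mean(values):
    # Average on a log scale, which is where assay noise actually lives
    return math.exp(sum(math.log(v) for v in values) / len(values))

replicates_nM = [3.0, 6.0]  # the two hypothetical assay runs above
print(f"geometric mean: {geometric_mean(replicates_nM):.1f} nM")  # ~4.2 nM - "four or five"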

But then you go to cellular assays, and things get more complicated. Cells are ridiculously more complex than enzymes, and there are so many more things that can kick around your data. Where did this batch of cells come from? How many times have they divided? What stage of their life cycles are they in, on average? What are they growing on, and in? Are they clean (no nasty mycoplasma?) Even if you've got all those things under control, your compounds are going to be exposed to untold numbers of other proteins now, all with potential binding sites and activities of their own. And that's if they can even get past the cell membrane at all – many don't, for reasons that are not always clear. No, your cellular numbers are always going to have a pretty good spread in them.

But then you go to whole animals, which have all those problems and more. Absorption from the gut and later metabolism are tricky and poorly understood processes, and they’re affected by a bewildering number of variables. Is your compound crystalline? Same way each time? What’s the particle size? How much water does that powder have in it? What are you taking the compound up in to dose it? Have the rats eaten recently? What time of day are they getting the compound? Male rats, or female? Nothing bothering them, no loud noises or change in lighting? Every single one of these things can throw your data around all over the place.

But now you’re up to clinical trials, and animal data is as orderly as a brick wall compared to human data. All those variables listed above still obtain, although you've presumably controlled for several of them by the time you're in the clinic. But that's more than made up for by the heterogeneity of your human volunteers and that of your all-too-human clinical staff. (Ask anyone who's worked up close with clinical data, and you'll hear all about it).

So we start from chemistry, where if we make a compound once we assume that we can always make it again - not always a warranted assumption, mind you, but mostly true. Then we move to in vitro assays, where you really need to have n-of-3, at least, so you can get error bars on your numbers. And we end up in human trials with hundreds (or thousands) of people taking the resulting drug, desperately hoping all the while that we'll be able to pick out an interpretable signal in all the noise. That's the business, all right.

Comments (16) + TrackBacks (0) | Category: Drug Development

November 19, 2008

Novartis and Reality

Posted by Derek

I know that it’s not necessarily fair to drag out old press releases, but let’s do it anyway. Many readers will remember a few years back when Novartis was making its big research move into Cambridge, renovating the old Necco candy building and hiring like mad. (We’ll pause for a bit of somber nostalgia at the memory of a large drug company actually hiring hordes of scientists).

While that was going on, there was a lot of talk about the way their research site was going to be run. Under its new research head, Mark Fishman, Novartis would "reinvent the way drugs are discovered" (I quote from an August 2003 article from the Boston Globe, behind their subscriber wall now, which irritated me quite a bit at the time). There was a lot of talk about Gleevec, and how this was going to be some sort of model for the future of drug discovery in the organization. (I could never quite follow that one, but I was willing to give them the benefit of the doubt). The whole thing would be a "research operation vastly different from traditional pharmaceutical research", to quote another old Globe article (May 2002).

Well, some years on now, the obvious question is: did any of this happen? Novartis as a company is doing fairly well, particularly in comparison to some of its peers. And they haven’t had any massive layoffs, to my knowledge, which puts them ahead of the game these days. So overall, the company has been successful: but is the Cambridge site the sort of place it was supposed to be, according to the original PR?

My impression is that it isn’t, at least not to the extent that we were all hearing about back then. I know a number of people who work there, and from the outside, at least, it seems to be pretty much like any other large drug research operation, albeit with less elbow room than usual in some of the labs and offices (a deliberate decision, apparently). I hear the usual talk and the usual complaints. Nothing that goes on over there strikes me as very different from other outfits of that size.

And there’s nothing wrong with that. This isn't a slap at Novartis, at Mark Fishman, or at anyone over there - it's a very good research organization. But I do wonder where all that transformational talk went. Is it still a work in progress (which seems to be the official viewpoint)? Did the organization try to change things, and fail? Was there even a clear idea of what this change was to consist of? Was there a decision made at some point that since things seemed to be going reasonably well, that the company should just leave the site to develop as it was? Or was all that talk at the beginning nothing more than, well, talk? I wondered about this at the time, and I suppose I'm still wondering now. . .

Comments (24) + TrackBacks (0) | Category: Drug Industry History

November 18, 2008

Cheese Dip and Hydrochloric Acid

Posted by Derek

One of the more wide-ranging entries on my "Lowe's Laws of the Lab" list is this: The secret of success in synthetic chemistry is knowing what you can afford not to worry about.

That’s because you have to have a cutoff somewhere. There are so many potential things that can affect an experiment, and if you have to sweat every one of them out every time, you’re never going to get anything done. So you need to understand enough to know which parts are crucial and which parts aren’t. I think the beginnings of this law came from my days as a teaching assistant, watching undergraduates carefully weigh out a fivefold excess of reagent. Hmm. Did it matter if they were throwing in 4.75 equivalents or 5.25? Well, no, probably not. So why measure it out drop by drop?

Tom Goodwin, the professor responsible for teaching me in my first organic chemistry course, once advanced his own solution to this problem. Growing weary of the seemingly endless stream of lab students asking him "Dr. Goodwin, I added X by mistake instead of Y. . .will that make a difference?", he proposed creating "Goodwin's Book of Tolerances." I think he envisioned this as a thick volume like one of those old unabridged dictionaries, something that would live on its own special stand down the hall. "That way," he told me, "when some student comes up and says 'Dr. Goodwin, I added cheese dip instead of HCl – will that make a difference?', I can walk over, flip to page thousand-and-whatever, and say 'No. Cheese dip is fine.'"

According to him, a solid majority of these questions ended with the ritual phrase “Will that make a difference?” And that’s just what a working chemist needs to know: what will, and what won’t. The challenge comes when you’re not sure what the key features of your system are, which is the case in a lot of medicinal chemistry. Then you have to feel your way along, and be prepared to do some things (and make some compounds) that in retrospect will look ridiculous. (As I’ve said before, though, if you’re not willing to look like a fool, you’re probably never going to discover anything interesting at all).

Another challenge is when the parts of the system you thought were secure start to turn on you. We see that all the time in drug discovery projects – that methyl group is just what you need, until you make some change at the other end of the molecule. Suddenly it's suboptimal – and you really should run some checks on these things as you go, rather than assuming that all your structure-activity relationships make sense. Most of them don't, at some point. An extreme example of having a feature that should have been solid turn into a variable would be that business I wrote about the other week, where active substances turned out to be leaching out of plastic labware.

But if you spend all your time wondering if your vials are messing up your reactions, you'll freeze up completely. Everything could cause your reaction to go wrong, and your idea to keel over. Realize it, be ready for it - but find a way not to worry about it until you have to.

Comments (18) + TrackBacks (0) | Category: Lowe's Laws of the Lab | Who Discovers and Why

November 17, 2008

Liable For Generics? You Are Now!

Posted by Derek

There was a legal ruling last week in California that we're going to hear a lot more of in this business. Conte v. Wyeth. This case involved metoclopramide, which was sold by Wyeth as Reglan before going off-patent in 1982. The plaintiff had been prescribed the generic version of the drug, was affected by a rare and serious neurological side effect (tardive dyskinesia, familiar to people who've worked with CNS drugs) and sued.

But as you can see from the name of the case, this wasn’t a suit against her physician, or against the generic manufacturer. It was a suit against Wyeth, the original producer of the drug, and that’s where things have gotten innovative. As Beck and Herrmann put it at the Drug and Device Law Blog:

The prescribing doctor denied reading any of the generic manufacturer's warnings but was wishy-washy about whether he might have read the pioneer manufacturer's labeling at some point in the more distant past.

Well, since the dawn of product liability, we thought we knew the answer to that question. You can only sue the manufacturer of the product that injured you. Only the manufacturer made a profit from selling the product, and only the manufacturer controls the safety of the product it makes, so only the manufacturer can be liable.

Not any more, it seems. The First District Court of Appeals in San Francisco ruled that Wyeth (and other drug companies) are also liable for harm caused by the generic versions of their drugs. At first glance, you might think “Well, sure – it’s the same drug, and if it causes harm, it causes harm, and the people who put it on the market should bear responsibility”. But these are generic drugs we’re talking about here – they’ve already been on the market for years. Their behavior, their benefits, and their risks are pretty well worked out by the time the patents expire, so we’re not talking about something new or unexpected popping up. (And in this case, we're talking about a drug that has been generic for twenty-six years).

The prescribing information and labeling have been settled for a long time, too, you'd think. At any rate, that's worked out between the generic manufacturers and the FDA. How Wyeth can be held liable for the use of a product that it did not manufacture, did not label, and did not sell is a mystery to me.

Over at Law and More, a parallel is drawn between this ruling and the history of public nuisance law during the controversy over lead paint; the implication is that this ruling will stand up and be with us for a good long while. But at Cal Biz Lit, the betting is that “this all goes away at the California Supreme Court”. We’ll see, because that’s exactly where it’s headed and maybe beyond that, eventually.

And if this holds up? Well, Beck and Herrmann lay it out in their extensive follow-up post on the issue, which I recommend to those with a legal interest:

Conte-style liability can only drive up the cost of new drugs – all of them. Generic drugs are cheaper precisely because their manufacturers did not incur the cost of drug development – costs which run into the hundreds of millions of dollars for each successful FDA approval. Because they are cheap, generics typically drive the pioneer manufacturer’s drug off the market (or into a very small market share) within a few years, if not sooner. Generic drugs will stay cheap under Conte. But imposing liability in perpetuity upon pioneer manufacturers for products they no longer sell or get any profit from means that the pioneer manufacturers (being for-profit entities) have to recoup that liability expense somewhere. There’s only one place it can come from. That’s as an add-on to the costs of new drugs that still enjoy patent protection.

Exactly right. This decision establishes a fishing license for people to go after the deepest-pocketed defendants. Let's hope it's reversed.

Comments (31) + TrackBacks (0) | Category: Regulatory Affairs | The Central Nervous System | Toxicology

November 14, 2008

Sticking It to Proteins

Posted by Derek

So, you’re making an enzyme inhibitor drug, some compound that’s going to go into the protein’s active site and gum up the works. You usually want these things to be potent, so you can be sure that you’ve knocked down the enzyme, so you can give people a tiny, convenient pill, and so you don’t have to make heaps of the compound to sell. How potent is potent? And how potent can you get?

Well, we'd like nanomolar. For the non-chemists in the crowd, that's a concentration measure based on the molecular weight of the compound. If the molecular weight of the drug is 400, which is more typical than perhaps it should be, then 400 grams of the stuff is one mole. And 400 grams dissolved in enough solvent to make a liter of solution would then give you a one molar (1 M) solution. (The original version of this post didn't make that important distinction, which I'll chalk up to my not being completely awake on the train ride first thing in the morning. The final volume you get on taking large amounts of things up in a given amount of solvent can vary quite a bit, but concentration is based, naturally, on what you end up with. And it's a pretty flippin' unusual drug substance that can be dissolved in water to that concentration, let me tell you right up front). So, four grams in a liter would be 0.01 M, or 10 millimolar, and four hundred milligrams per liter would be a 1 millimolar solution. A one micromolar solution would be 400 micrograms (0.0004 grams) per liter, and a one nanomolar solution would be 400 nanograms (400 billionths of a gram) per liter. And that's the concentration that we'd like to get to show good enzyme inhibition. Pretty potent, eh?
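
(For anyone who'd like to check that arithmetic, here's a minimal Python sketch; the 400 g/mol molecular weight is just the round number assumed above.)

def grams_per_liter(conc_molar, mw_g_per_mol=400.0):
    # Mass of compound per liter of final solution at a given molar concentration
    return conc_molar * mw_g_per_mol

for label, conc in [("1 M", 1.0), ("10 mM", 1e-2), ("1 mM", 1e-3),
                    ("1 uM", 1e-6), ("1 nM", 1e-9)]:
    print(f"{label:>5}: {grams_per_liter(conc):g} g/L")
# Runs from 400 g/L at 1 M down to 4e-07 g/L (400 nanograms per liter) at 1 nM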

But you can do better – if you want to, which is a real question. Taking it all the way, your drug can go in and attach itself to the active site of its target by a real chemical bond. Some of those bond-forming reactions are reversible, and some of them aren’t. Even the reversible ones are a lot tighter than your usual run of inhibitor.

You can often recognize them by their time-dependent inhibition. With a normal drug, it doesn’t take all that long for things to equilibrate. If you leave the compound on for ten, twenty, thirty minutes, it usually doesn’t make a huge difference in the binding constant, because it’s already done what it can do and reached the balance it’s going to reach. But a covalent inhibitor, that’ll appear to get more and more potent the longer it stays in there, since more and more of the binding sites are being wiped out. (One test for reversibility after seeing that behavior is to let the protein equilibrate with fresh blank buffer solution for a while, to see if its activity ever comes back). You can get into hair-splitting arguments if your compound binds so tightly that it might as well be covalent; at some point they're functionally equivalent.
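
(To put a toy model behind that time dependence: in the standard two-step covalent scheme, the observed inactivation rate is kobs = kinact*[I]/(KI + [I]), and the active enzyme fraction decays as exp(-kobs*t). The rate constants in this Python sketch are invented for illustration, not taken from any real inhibitor.)

import math

def active_fraction(inhibitor_nM, t_min, kinact_per_min=0.1, KI_nM=100.0):
    # Standard two-step covalent inhibition: kobs = kinact * [I] / (KI + [I])
    kobs = kinact_per_min * inhibitor_nM / (KI_nM + inhibitor_nM)
    return math.exp(-kobs * t_min)  # fraction of enzyme still active after t minutes

for t in (10, 20, 30):  # the incubation times mentioned above
    print(f"{t} min: {active_fraction(50.0, t):.2f} of the enzyme left")
# The apparent potency keeps climbing with incubation time - the signature of covalency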

There are several drugs that do this kind of thing, but they’re an interesting lot. You have the penicillins and their kin – that’s what that weirdo four-membered lactam ring is doing, spring-loaded for trouble once it gets into the enzyme. The exact same trick is used in Alli (orlistat), the pancreatic lipase inhibitor. And there are some oncology drugs that covalently attach to their targets (and, in some cases, to everything else they hit, too). But you’ll notice that there’s a bias toward compounds that hit bacterial enzymes (instead of circulating human ones), don’t get out of the gut, or are toxic and used as a last resort.

Those classes don’t cover all the covalent drugs, but there’s enough of that sort of thing to make people nervous. If your compound has some sort of red-hot functional group on it, like some of those nasty older cancer compounds, you’re surely going to mess up a lot of other proteins that you would rather have left alone. And what happens to the target protein after you’ve stapled your drug to it, anyway? One fear has been that it might present enough of a different appearance to set off an immune response, and you don’t want that, either.

But covalent inhibition is actually a part of normal biochemistry. If you had a compound with a not-so-lively group, one that only reacted with the protein when it got right into the right spot – well, that might be selective, and worth a look. The Cravatt lab at Scripps has been looking into what kinds of functional groups react with various proteins, and as we get a better handle on this sort of thing, covalency could make a comeback. Some people maintain that it never left!

Comments (22) + TrackBacks (0) | Category: Drug Assays | Toxicology

November 13, 2008

The Yield Monster - And Its Friend, The Model Monster

Posted by Derek

Organic chemistry can be a real high-wire act. If you're taking a compound along over a multistep sequence, everything has to work, at least to some extent: a twelve-step route to a compound whose last step can't be made to work isn't a route to the compound at all. To get the overall yield you multiply all the individual ones, and a zero will naturally take care of everything that came before it.

Even very respectable yields will creep up on you if you have the misfortune to be doing a long enough synthesis. It's just math – if you have an average 90% yield, which shouldn't usually be cause for distress, that means that you're only going to get about 35% of what you theoretically could have after ten steps (0.9 to the tenth). An average 95% yield will run that up to 60% over the same sequence, and there you have one of the biggest reasons for the importance of process chemistry groups. Their whole reason to live is to change those numbers, to make sure that they stay that way every time, and without having to do anything crazier than necessary along the way.
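
(The compounding is trivial to play with yourself; a quick Python sketch:)

def overall_yield(step_yield, n_steps):
    # Overall fractional yield of a linear sequence with a uniform per-step yield
    return step_yield ** n_steps

print(f"90% over 10 steps: {overall_yield(0.90, 10):.0%}")  # ~35%
print(f"95% over 10 steps: {overall_yield(0.95, 10):.0%}")  # ~60%
print(f"90% over 28 steps: {overall_yield(0.90, 28):.0%}")  # ~5% - the long-synthesis trap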

When you’re involved in something like this and you know you’re going to be approaching a tricky step, the natural temptation is to try it out on something else first. Model systems, though, can be the road to heartbreak. In the end, there are no perfect models, of anything. If you’re lucky, the conditions you’ve worked out by using your more-easily-available model compound will translate to your precious one. But as was explained to me years ago in grad school, the problem is that if you run your model and it works, you go on to the real system. And if you run your model and it doesn’t work, well. . .you might just go on to the real system anyway, because you’re not sure if your model is a fair one or not. So what’s the point?

This gets to be a real problem in some labs. While ten steps is medium to long for a commercial drug synthesis, it’s just the warmup for a lot of academic ones. Making natural products by total synthesis can take you on up into the twenty- and thirty-step levels, and some go beyond that, most horribly for everyone concerned. In such cases, you’d much rather have several segments of the big honking molecule built separately and then hooked together, rather than run everything in a row.

But what if you spend all that time on the segments, but you can’t put the things together? The most famous example of that I know happened in Nicolaou’s synthesis of Brevetoxin B. The initial disconnection of this terrible molecule into two nearly-as-awful pieces turned out to have been a mistake. Despite repeated attempts, no way could be found to couple the two laboriously prepared pieces to make the whole molecule, and untold man-hours of grad-student and post-doc slave labor had to be ditched for a new approach. If you want to see the approach that worked, here’s a PDF of a talk about it.

But if you go linear, you’re taking the same risk, and the math will absolutely eat you alive. A 90% average yield will ensure that you throw away 95% of your material if you keep going for 28 steps. And keeping a 90% average over twenty-eight steps is just not possible with real-world chemistry, either – and yes, I’ve seen those papers where they do, but I don’t believe them. Do you? Make it 25 steps of average 90%, and three 60% losers, and now you’re down between one and two percent of your material left. Which is no way to live.

I note that the above summary of the Brevetoxin synthesis counts 123 synthetic steps. It calculates an average yield of 91%. A 2004 synthesis from Japan comes to 90 steps with an average yield of 93%.

Comments (17) + TrackBacks (0) | Category: Life in the Drug Labs

November 12, 2008

Crestor: Would It Save Any Lives?

Posted by Derek

Should millions more people be taking Crestor? That's a real balancing act. You have a decrease in heart attacks, but from a fairly small incidence rate. So at a minimum, you'll need to balance the costs of those coronary events versus the cost of paying for all that Crestor. And statins are not without side effects themselves, so you'll need to adjust your figures for the incidence of rhabdomyolysis, among other things. (For example, is the increased incidence of high blood sugar in the Crestor treatment group a real effect, or not? If so, you'll need to add a bit of diabetes cost to the spreadsheet). In any case, the cost of getting all these people screened for C-reactive protein levels in the first place needs to be added in as well.

Naturally, as in any of these calculations, you’re going to have to figure how much should be spent to prevent each excess death, once you’ve decided that these deaths can indeed be considered excess. (Unfortunately, the answer cannot always be “as much as it takes”, since there is not enough money in the world to treat everyone for everything, forever). And that brings up another key question: would putting high-CRP patients on Crestor save lives at all?

Well, you’d think so, what with lowering the incidence of those coronary events. But mortality figures are tricky. In all the graphs presented in the NEJM paper, the “deaths from all causes” one is the least compelling. That shouldn’t be a real surprise, since cutting something down in the 1% range isn’t going to bend the curve very much on its own. But if you look closer at the data, things are even fuzzier.

As pointed out to me by a correspondent, the Crestor-treated group for some reason showed a lower death rate from cancer (35 deaths versus 58). It doesn’t seem particularly likely that this is a real effect – I’ve never heard of statins showing a protective effect like this, although if someone knows differently, I’d be glad to hear about it. The paper makes nothing of this comparison, at any rate. Minus this effect, though, the death rate between the two groups might well be within the error bars. The argument for Crestor would then have to be made purely on treatment costs, as in the first paragraph, because you’d be saving few, if any, lives at all.

And maybe there’s a case to be made. I’m not a public health expert, so I don’t know what numbers to put into those calculations. But it’s important to realize, contrary to some of the headlines out there, that it’s actually a hard call to make. I note that AstraZeneca is being cautious about what all this means for sales of Crestor. They’re wise to be.

Comments (20) + TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials

November 11, 2008

Wash Your Tubes; Mess Up Your Data

Posted by Derek

I wrote a while back about the problem of compounds sticking to labware. That sort of thing happens more often than you’d think, and it can really hose up your assay data in ways that will send you running around in circles. Now there’s a report in Science of something that’s arguably even worse. (Here's a good report on it from Bloomberg, one of the few to appear in the popular press).

The authors were getting odd results in an assay with monoamine oxidase B enzyme, and tracked it down to two compounds leaching out of the disposable plasticware (pipette tips, assay plates, Eppendorf vials, and so on). Oleamide is used as a “slip agent” to keep the plastic units from sticking to each other, but it’s also a MAO-B inhibitor. Another problem was an ammonium salt called DiHEMDA, which is put in as a general biocide – and it appears to be another MAO-B inhibitor.

Neither of them is incredibly potent, but if you're doing careful kinetic experiments or the like, it's certainly enough to throw things off. The authors found that just rinsing water through various plastic vessels was enough to turn the solution into an enzyme inhibitor. Adding organic solvents (10% DMSO, methanol) made the problem much worse; presumably these extract more contaminants.

And it’s not just this one enzyme. They also saw effects on a radioligand binding assay to the GABA-A receptor, and they point out that the biocides used are known to show substantial protein and DNA binding. These things could be throwing assay data around all over the place – and as we work in smaller and smaller volumes, with more complex protocols, the chances of running into trouble increase.

What to do about all this? Well, at a minimum, people should be sure to run blank controls for all their assays. That’s good practice, but sometimes it gets skipped over. This effect has probably been noted many times before as some sort of background noise in such controls, and many times you should be able to just subtract it out. But there are still many experiments where you can’t get away from the problem so easily, and it’s going to make your error bars wider no matter what you do about it. There are glass inserts for 96-well plates, and there are different plastics from different manufacturers. But working your way through all that is no fun at all.

As an aside, this sort of thing might still make it into the newspapers, since there have been a lot of concerns about bisphenol A and other plastic contaminants. In this case, I think the problem is far greater for lab assays than it is for human exposures. I’m not so worried about things like oleamide, since these are found in the body anyway, and can easily be metabolized. The biocides might be a different case, but I assume that we’re loaded with all kinds of substances, almost all of them endogenous, that are better inhibitors of enzymes like MAO-B. And at any rate, we’re exposed to all kinds of wild stuff at low levels, just from the natural components of our diet. Our livers are there to deal with just that sort of thing, but that said, it’s always worth checking to make sure that they’re up to the job.

Comments (9) + TrackBacks (0) | Category: Biological News | Drug Assays

November 10, 2008

Crestor: Risks Up, Risks Down

Posted by Derek

AstraZeneca took a pretty big risk in running a trial as big as the JUPITER one, but it seems to have paid off for them. As everyone has been reading, it appears that their Crestor (rosuvastatin) lowers the risk of cardiovascular events in patients with elevated C-reactive protein, even those with reasonable cholesterol numbers. (NEJM paper here).

These patients don't have an awful lot of heart attacks, but they did have fewer while on the drug. That's going to be enough, all by itself, to expand the market for Crestor (and probably the other statins as well). The question is whether the others will have the same effect. You'd think so, especially a similarly strong one like Lipitor, but AstraZeneca is the only company with numbers for its own product.

The question will be whether it's worth treating this much wider patient population at these intent-to-treat numbers, a point made in an accompanying editorial in the New England Journal of Medicine:

The relative risk reductions achieved with the use of statin therapy in JUPITER were clearly significant. However, absolute differences in risk are more clinically important than relative reductions in risk in deciding whether to recommend drug therapy, since the absolute benefits of treatment must be large enough to justify the associated risks and costs. The proportion of participants with hard cardiac events in JUPITER was reduced from 1.8% (157 of 8901 subjects) in the placebo group to 0.9% (83 of the 8901 subjects) in the rosuvastatin group; thus, 120 participants were treated for 1.9 years to prevent one event.

It’s interesting to imagine these numbers flipped over, though – if a drug caused heart attacks at these same statistical levels in these same patients, it would be taken off the market immediately. Look, for example, at the risks of cardiovascular problems with Vioxx. The VIGOR trial showed 17 heart attacks in a group of over 4,000 patients, a rate (at the highest dose) of about four times the naproxen-treated control group. In relative risk terms, that’s a serious alarm bell – but in absolute risk, not so much.
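
(Here's a minimal Python sketch of that absolute-versus-relative distinction, using the JUPITER event counts from the editorial quoted above; the VIGOR side is left out, since the post only gives approximate figures for it.)

def risk(events, n):
    return events / n

# JUPITER hard cardiac events, from the NEJM editorial quoted above
placebo_events, rosuva_events, n_per_arm = 157, 83, 8901

arr = risk(placebo_events, n_per_arm) - risk(rosuva_events, n_per_arm)  # absolute risk reduction
rr = risk(rosuva_events, n_per_arm) / risk(placebo_events, n_per_arm)   # relative risk

print(f"absolute risk reduction: {arr:.2%}")     # ~0.83 percentage points
print(f"number needed to treat: {1 / arr:.0f}")  # ~120, matching the editorial
print(f"relative risk: {rr:.2f}")                # ~0.53, i.e. a ~47% relative reduction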

This isn’t a completely fair comparison, of course – in the case of statins, cardiovascular events are what you’re trying to treat for in the first place, as opposed to having them as a totally unrelated side effect in a pain medication. And there were other options than a Cox-2 inhibitor for many (although not for all) of the people taking Vioxx. And there’s the general primum non nocere principle: when we find that a drug is causing actual harm (as opposed to doing nothing), it’s likely to be withdrawn, even if the harm is at very low statistical levels.

But at the same time, not giving people something that could prevent these heart attacks is still rather equivalent to causing said heart attacks – isn't it? We have to make the call of whether the cost, and the statin side effects, are worth it. That's not an easy one (for one thing, there was a statistically significant difference in the number of Crestor-treated patients showing diabetic symptoms in this trial). And when a drug shows harmful side effects, we should make the call in the same way. I just don't see the two situations treated in a similar manner much of the time, though.

Comments (17) + TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials

November 7, 2008

Systems Biology: Ready, or Not?

Posted by Derek

Systems biology – depending on your orientation, this may be a term that you haven’t heard yet, or one from the cutting edge of research, or something that’s already making you roll your eyes at its unfulfilled promise. There’s a good spread of possible reactions.

Broadly, I'd say that the field is concerned with trying to model the interactions of whole biological systems, in an attempt to come up with some explanatory power. It's the sort of thing that you could only imagine trying to do with modern biological and computational techniques, but whether these are up to the job is still an open question. This gets back to a common theme that I stress around here, that biochemical networks are hideously, inhumanly complex. There's really no everyday analogy that works to describe what they're like, and if you think you really understand them, then you're in the same position as all those financial people who thought they understood their exposure to mortgage-backed security risks.

You'll have this enzyme, you see, that phosphorylates another enzyme, which increases its activity. But the product of that second enzyme inhibits another enzyme that acts to activate the first one, and each of them also interacts with fourteen (or forty-three) others, some of which are only expressed under certain conditions that we don't quite understand, or are localized in the cell in patterns that aren't yet clear, and then someone discovers a completely new enzyme in the middle of the pathway that makes hash out of what we thought we knew about the whole thing.
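
(To give a flavor of why even small versions of this are hard to reason about, here's a toy Python sketch of a two-node negative-feedback loop; the equations and rate constants are invented for illustration and carry no biological meaning.)

from scipy.integrate import solve_ivp

def feedback_loop(t, y, k1=1.0, k2=1.0, d1=0.5, d2=0.5, n=4):
    # A drives production of B; B represses the step that produces active A
    A, B = y
    dA = k1 / (1.0 + B**n) - d1 * A
    dB = k2 * A - d2 * B
    return [dA, dB]

sol = solve_ivp(feedback_loop, (0.0, 50.0), [0.1, 0.0])
print(f"long-time levels: A = {sol.y[0, -1]:.2f}, B = {sol.y[1, -1]:.2f}")
# Two equations already behave non-obviously; real networks have hundreds of nodes,
# most with unknown parameters - hence the plea for humility below.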

So my first test for listening to systems biology people is whether they approach things with the proper humility. There's a good article in Nature on the state of the field, which does point out that some of the early big-deal-big-noise articles alienated many potential supporters through just this effect. But work continues, and a lot of drug companies are putting money into it, under the inarguable "we need all the help we can get" heading.

One of the biggest investors has been Merck, a big part of that being their purchase a few years ago of Rosetta Inpharmatics. That group published an interesting paper earlier this year (also in Nature) on some of the genetic underpinnings of metabolic disease. A phrase from the article's abstract emphasizes the difficulties of doing this work: "Our analysis provides direct experimental support that complex traits such as obesity are emergent properties of molecular networks that are modulated by complex genetic loci and environmental factors." Yes, indeed.

But here's a worrisome thing that didn't make the article: Merck recently closed the Seattle base of the Rosetta team, in its latest round of restructuring and layoffs. One assumes that many of them are being transitioned to the Merck mothership, and that the company is still putting money into this approach, but there is room to wonder. (Update: here's an article on this very subject.) There is this quote from the recent overview:

Stephen Friend, Merck's vice-president for oncology, thinks that any hesitancy will be overcome when the modelling becomes so predictive that the toxicity and efficacy of a potential drug can be forecast very accurately even before an experimental animal is brought out of its cage. "The next three to five years will provide a couple such landmark predictions and wake everyone up," he says.

Well, we’ll see if he’s right about that timeframe, and I hope he is. I fear that the problem is one of those that appears large, and as you get closer to it, does nothing but get even larger. My opinion, for what it’s worth, is that it’s very likely too early to be able to come up with any big insights from the systems approach. But I can’t estimate the chances that I’m wrong about that, and the potential payoffs are large. For now, I think the best odds are in the smaller studies, narrowing down on single targets or signaling networks. That cuts down on the possibility that you’re going to find something revolutionary, but it increases the chance that anything you find is actually real. Talk of “virtual cells” and “virtual genomes” is, to my mind, way premature, and anyone who sells the technology in those terms should, I think, be regarded with caution.

But that said, any improvement is a big one. Our failure rates due to tox and efficacy problems are so horrendous that just taking some of these things down 10% (in real terms) would be a startling breakthrough. And we’re definitely not going to get this approach to work if we don’t plow money and effort into it; it’s not going to discover itself. So press on, systems people, and good luck. You’re going to need it; we all do.

Comments (33) + TrackBacks (0) | Category: Biological News

November 6, 2008

CB-1 Obesity Drugs: Farewell to the Whole Lot

Posted by Derek

The painful saga of Acomplia (rimonabant) has finally come to an end. Sanofi-Aventis has announced that they're completely giving up on the drug. There was really no other option - the compound was never approved in the US, and was never going to be, and late in October the EU ordered it to be withdrawn from Europe. The psychiatric side effects which sank the drug's chances here were showing up in real-world use, and the risk/benefit ratio could no longer be seen as anything but negative.

And Pfizer has just announced that they're giving up work on their own Phase III compound in the area, CP-945,598. They're not citing safety concerns - and as Jim Edwards over at Bnet notes, that puts them in the odd position of saying that they have a safe, effective drug for a huge market that they're not going to do anything with. My guess is that the company is worried that the drug would indeed show an unfavorable safety profile, especially under the sort of scrutiny that any drug in this class would have by now, and that they decided to stop before things got to that point. Otherwise, you'd think that a big, safe, effective first-in-class obesity therapy would be just what Pfizer needs - wouldn't you?

So, goodbye to the CB-1 antagonists. I don't see much work going on in this area for some time to come, unless the pharmacology gets untangled to the point that someone can see a safe way through. There may well not be one.

And before we all try to forget that this all happened, let's spare a thought for the huge amounts of time, effort, brainpower and money that went into this area over the last eight or ten years. Three of the biggest research organizations in the industry have now flamed out trying to develop these drugs, and plenty of smaller players were trying, too, as a glance at the patent literature will make clear. The end result is that we have paid a gigantic amount of money to learn that the biology is more complicated than we thought, and it needed no ghost come from the grave to tell us this. If you think that drug development is a sure road to riches - if anyone still thinks that - then come survey this wreckage and think again.

And to finish, let's hop in the time machine and go back. . .well, not all that far. Just to mid-2006. There we find a world in which rimonabant was poised to become one of the biggest selling drugs in all the world, part of a wave of drugs which would transform the industry and spew profits in all directions. Billions of dollars in revenues are mentioned. Oh, dear.

Comments (26) + TrackBacks (0) | Category: Diabetes and Obesity | Drug Development

November 5, 2008

We Now Return to Our Regularly Scheduled Program

Posted by Derek

About a year ago I wrote a post on flow chemistry. That, broadly speaking, is the practice of doing reactions by pumping them through some sort of reaction zone, instead of putting everything into a flask and letting it rip.

There are refinements. In batch mode, you can of course add reagents in sequence, or trickle them in by slow addition. And there are several variations to flow chemistry - in my mind, I have three categories. Type I flow reactions, in my numbering, are the ones that don't depend on any reagents in the tubes themselves. Everything you need is in solution, and you're just using temperature and/or pressure to make them do what you want. Nucleophilic displacements and cycloadditions are in this category: mix up your starting materials, pump 'em down the hollow tube, and get your product out the other end. Ideally.

Type II flow reactions, then, are the ones that need some sort of solid-supported catalyst. Palladium couplings (or other metal-catalyzed processes) are a perfect example of this, as is the H-Cube hydrogenator. Now you have some solid matrix inside your tubing, and you're pumping material over that. Heat and pressure are still very much a part of things, but the catalyst is, too - and the advantage here is that it doesn't end up in your reaction mixture. Starting materials should go in, and product should come out, and you should be able to use the catalyst again. Ideally.

And Type III flow reactions, in my scheme, are the ones that need full equivalents (or more) of solid supported reagent. I think that the companies getting into flow apparatus should keep these in mind. That's because you're going to use these things up, eventually, and the companies involved will be able to sell you more. ("Give 'em the razor and sell 'em the blades", as King Gillette said). All sorts of chemistry might fall into this category - reductive aminations are the first thing that come to mind from a med-chem perspective. All sorts of reactions with nasty workups are candidates for this sort of approach.

But there's a catch, the dirty secret of flow chemistry from my experience so far: you know how we medicinal chemists sometimes have trouble making soluble compounds? Well, brace yourselves when you go with the flow reactors, because you're going to be clogging things up left and right. Any flow apparatus that does not take this into account should be regarded with suspicion: "easy to clean out" is a very desirable quality. Things have to be run more dilute than you think they do, and in stronger solvents. That can mean trouble on the back end, with more (and more difficult) solvents to get rid of in the isolation.

If anyone out there is also involved in the flow world and can talk about it, I'd be glad to hear some experiences. For bench-scale medicinal chemistry, the field is still in its early days, and there are a lot of things that haven't been tried yet.

Comments (16) + TrackBacks (0) | Category: Life in the Drug Labs

November 4, 2008

We Interrupt This Science. . .For Some Politics

Posted by Derek

Election day. I’ve had a lot of requests from people who want to know how I’m going to vote. Before I started blogging, my reply to that was usually a variation on “None of your business”, but then I got into the sideline of telling people my opinions on things every working day. So that answer won’t do.

But what answer will, this year? My political leanings are, I think, fairly clear to anyone who’s read this blog for a while. Economically, I’m a capitalist, for sure. I believe that wealth can most certainly be created, most effectively through human creativity. I would prefer that people be allowed to keep as much of the fruits of their labors as possible, to do with as they wish. I’m a free-trader as well. Tariffs and subsidies do not make me happy. I believe in Adam Smith’s invisible hand, and in comparative advantage, which is why I continue to defend outsourcing even as it takes away jobs in my own industry, in my own country. I think that Schumpeter was right about creative destruction, but he never said it was fun.

In public and social policy, I believe that there should be strong, enforced laws at the limits of behavior – but I try to set those limits fairly wide, out at the “as long as you’re not harming anyone else” line. I think that inside that boundary people should be allowed to do as they damned well please, even if the results don’t please me much. Often, they don’t – but my tastes are not a matter of law. I’m not religious at all, so I feel no need to enforce what I might see as God’s will on anyone. My first (but not sole) requirement for my tolerance of someone else’s religious beliefs is whether they can stand me not sharing them. Not everyone can.

And as for elections, well, I have a low opinion of politicians in general. I realize that this is unfair to the elected officials who are genuinely hard-working public servants, but those people are rather thin on the ground. Ah, politics: watching the game played while growing up in Arkansas did me a lot of good. And studying history has given me no reason to think the game has ever changed. Why should it? Human nature hasn’t. (Any political scheme that proposes to change that should cause you to flee at all speed). No, people are what they are, and the best of them simply don’t go into politics, as a rule.

So, in a President, I’m not looking for charisma or charm – in fact, I rather fear both qualities. I’d like to see enough eloquence to keep someone out of the laughingstock category, but no more, if possible. In general, I’m not looking for someone whose appeal is based on looking good on TV. (Unfortunately for my opinions, our current system for picking presidents largely values the opposites of all these). As for intelligence, I’m looking for someone smart enough to pick advisors who are smarter and more capable than they are themselves. But feeling so smart that you think you’re actually on top of what’s going on is a recipe for disaster. No one at that level is master of events, or really even of their own fate.

All this said, I can’t say that I’m very thrilled with the prospect of either presidential candidate this year – nor is this the first election during which I’ve had that feeling. My economic preferences would tend to make me more Republican – but our current Republican president has spent money like pouring water on the ground, so what does that avail me? I agree with McCain more than Obama on foreign policy, but his statements on the current economic mess have been, to my mind, disgraceful. But then again, Obama’s have been disgraceful, too, as far as I’m concerned. Of course, one has to get elected, and to get elected one has to run around spouting all kinds of nonsense. I learned from my father to watch their hands, not their mouths: actions over words. But McCain’s actions are hard to predict, and as for Obama, someone who came up through Chicago politics is probably capable of things that would even raise the eyebrows of a guy from Arkansas.

I find no comfort further down the ticket. Sarah Palin has not shown herself, to my mind, as qualified to be president. I appreciate the fact that many didn’t think that Harry Truman was, either, and I realize that we’ve gone through several periods where the VP would probably have been disastrous if called on to serve (think Spiro Agnew, John Nance Garner). But no, while I understand the political reasons why McCain chose Palin, I think the choice reflects poorly on him in a larger sense. But on the other side of the ballot, Joe Biden often seems to me like the worst sort of blowhard hack, the walking embodiment of almost everything I can’t stand about national politicians. (Charles Schumer narrowly takes my prize in that category, in case you’re wondering). No, choosing Biden tells me nothing good about Obama.

I think it would do the Republicans a lot of good to be thoroughly out of power for a while, although the thought of Sarah Palin as a rising star in the party is not encouraging. But I think that having the Presidency and both houses of Congress will tend to bring out the worst in the Democrats. What to do? Whatever I do, it’s mostly going to be an exercise for my own conscience. I now live in Massachusetts; Obama will take this state even if an asteroid hits. Back in 1992, I spent so long in the polling booth that people were rattling the door as if it were a public restroom. Bush (Sr.), Bill Clinton, Ross Perot – I kept looking at the names, and finally realized that I couldn’t vote for any of them (admittedly, it took the least time for me to eliminate Perot). I finally voted Libertarian, in the serene hope and confidence that they would not win. But I'm not sure I can run that trick on myself again this year. . .

Update: this is why I generally don't write about politics - this post was no fun to write, and it's probably not much fun to read, either. Be assured that I'm not planning to take the blog in this direction more than once every few years - the internet is full of political opinions, and doesn't need any more from me. Back to science tomorrow, I promise!

Comments (41) + TrackBacks (0) | Category: Current Events

November 3, 2008

Pfizer: Strategy, Layoffs, and Money

Email This Entry

Posted by Derek

I shouldn't pick on Jeff Kindler, because I wouldn't want to be CEO of Pfizer, not the least little bit. But he gave an interview recently to the Financial Times, who asked him (naturally) about the Lipitor patent expiration. His answer:

We're facing a very significant loss of exclusivity in Lipitor at the end of 2011. We have a clear plan for positioning the company for strong, profitable growth after that. That plan consists of pursuing significant new opportunities for increased revenues starting with our internal pipeline, getting further growth out of our existing products, growing in the emerging markets, growing our business on off-patent products. We sell billions of dollars of off-patent products and in many parts of the world that's the most important opportunity to meet unmet medical needs and looking for other potential sources of revenues.

I realize that this is the only sort of answer that he could have given, and the only sort that the FT could have expected. But, still. Distill it down, and you have, basically, "We're going to get around losing all that Lipitor revenue by making more money on all our other stuff". Good to hear that, but I'm still not running out to buy any Pfizer stock.

And while we're on the subject of Pfizer, the layoffs there continue to grind on, from what I'm hearing. I don't think that people have quite heard yet whether they're staying or going, but I assume that word will come in time to give everyone a festive Thanksgiving season. In general, it sounds like the company is heading even further down the path toward the higher associate/PhD ratio it announced a couple of years ago, with a lot of outsourcing in the mix as well.

But here's a question: how many of the people who will be laid off are people that Pfizer, at great expense, paid to move to Groton from Ann Arbor? Surely there will be a good number in that category, and they've just barely settled down in Connecticut by now. Pfizer's relocation seems to have been pretty generous - picking up property value differences on house sales and the like - and all for this?

Comments (23) + TrackBacks (0) | Category: Business and Markets