
About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter: Dereklowe

Chemistry and Drug Data: Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Monthly Archives

July 31, 2008

Rember for Alzheimer's: Methylene Blue's Comeback

Posted by Derek

Today we take up the extremely interesting story of Rember, hailed in this week’s press as a potential wonder drug for Alzheimer’s. There are a lot of unusual features to this one.

To take the most obvious first, the Phase II data seem to have been impressive. It’s hard to show decent efficacy in an Alzheimer’s trial – you can ask Wyeth and Elan about that, although it’s a sore subject with them. But Rember, according to reports (this is the best I've seen), was significantly more effective than the current standard of care (Aricept/donepezil, a cholinesterase inhibitor). In light of some of the more breathless news stories, though, it’s worth keeping in mind that this was efficacy in slowing the rate of decline – not stopping it, and certainly not reversing it. Especially in the later stages of the disease, it’s extremely hard to imagine reversing the sort of damage that Alzheimer’s does to the brain (and yes, I know about the TNF-alpha reports – that subject is coming in a post next week). If Rember is twice as effective as Aricept, that's great - except Aricept's efficacy has never been all that impressive.

But that's still something, considering how the drug is supposed to work. Its target is different from that of the usual Alzheimer’s therapies. Accumulation of amyloid protein has long been suspected as the cause of the disease, but there have always been partisans for another pathology, the neurofibrillary tangles associated with tau protein. Arguments have been going on for years – decades – about which of these has more to do with the underlying cause(s) of Alzheimer’s. Rember is the first clinical shot (that I’m aware of) at targeting tau. If the first attempt manages to show such interesting results, it’s a strong argument that tau must be important. (Other people are working in this area, too, of course, but my impression is that nowhere near as many people work on tau as on amyloid).

That’s food for thought, considering the amount of time and effort that’s been expended on amyloid. It may be that both pathologies are worth targeting, or it may even be that these results with Rember are a fluke. But it’s also possible that tau is really the place to be, in which case the amyloid hypothesis will take its place in the medical histories as a gigantic dead end. I’m not quite ready to bet that way myself, but it’s definitely not something that can be ruled out. I wouldn’t put all my money on amyloid either, at this point. (Boy, am I glad I'm not still working in Alzheimer's: this sort of stuff is wonderful to watch from the outside, but from the inside it's hard to deal with).

Now, what about the drug itself? It’s coming from a small company called TauRx, whose unimpressive web site just went up recently. The underlying science (and the clinical data) all come from Dr. Claude Wischik of the University of Aberdeen, who has so far not published anything on the drug. The presentation this week has, by far, been the most that anyone’s seen of it (papers are said to be in the works).

And Rember itself is. . .well, it’s methylene blue. Now there’s an interesting development. Methylene blue has been around forever, used for urinary tract infections, malaria, and all sorts of things, up to treating protozoal infections in fish tanks. (For that matter, it’s turned up over the years as a surreptitious additive to blueberry pies and the like, turning the unsuspecting consumer’s urine greenish/blue, generally to their great alarm: a storied med school prank from the old days). What on earth is it doing for tau protein?

According to TauRx, the problem is that the aggregation of tau protein is autocatalytic: once it gets going, it's a cascade. They believe that methylene blue disrupts the aggregation, and even helps to dissociate existing aggregates. Once they're out in their monomeric forms, the helical tau fragments are degraded normally again, and the whole tau backup starts to clear out.

Now for another issue: there's been some commentary to the effect that Rember can't possibly make anyone any money, because it's a known compound. Au contraire. While we evil pharmaceutical folks would much rather have proprietary chemical matter, there are plenty of other inventive steps worth a patent. For one thing, I suspect that formulation will be a challenge here (and that Medpage story seems to bear this out). I doubt if methylene blue crosses the blood-brain barrier so wonderfully, and I also believe that it's cleared pretty well (thus that green urine). So TauRx had to dose three times a day, and their highest dose didn't seem to work, probably because of absorption issues. (That's also going to lead to gastrointestinal trouble). So formulating this ancient stuff so it'll actually work well could be a real challenge: t.i.d with diarrhea is not the ideal dosing profile for an Alzheimer's therapy, to put it mildly.

And for another, there's always mechanism of action. I deeply dislike patent claims that try to grab hold of an entire area, but there's so much prior art in tau that no one could try it. But use of a specific compound (or group of compounds) for a specific therapy: oh, yes indeed. It's a complicated area, and the law varies between Europe and the US, but it definitely can be done. The people who say that this can't be patented should check out the issued patents US7335505 or US6953794. Or patent applications US20070191352, WO2007110627, WO2007110629, and WO2007110630. There you go; that wasn't hard. Mind you, there might be some prior art for using such compounds as cognition-improving agents: I'd start here if I were in the business of looking into that sort of thing.

Finally, is methylene blue (or some derivative thereof) actually going to be a reasonable drug? There's that dosing problem, for one thing, but the long history in humans is encouraging (and is a key part of TauRx's hopes not to spend so much money on toxicity testing in the clinic - talks with the FDA should be starting soon). There have been contradictory reports (plus, minus) on the effects of the compound on the brain in general, though, so they may have to do more work than they're planning on. All in all, a fascinating story.

Comments (116) + TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials | Patents and IP | Regulatory Affairs

July 30, 2008

Bapineuzumab: Good For Anything or Not?

Posted by Derek

Note: I'm still working my way through the information on the much-hyped TauRx drug, Rember - a post on that is coming. Here's more from the same Alzheimer's meeting, though:

Elan and Wyeth unveiled the data on their widely anticipated Alzheimer’s drug bapineuzumab yesterday. This is another antibody from Elan’s shop, part of a long-running effort to induce an immune response to the amyloid protein which is thought to be a key player in the development of disease. And. . .well, this is an Alzheimer’s drug. That means it comes with all the standard baggage: it’s trying to treat an extremely difficult disease that we don’t understand very well, by a mechanism that no one can be sure will work or is even relevant. (Cue up this discussion from last week around here!)

This drug was always expected to have its best chance of working in patients without the APOE4 allele, a variant of apolipoprotein E that was identified in the 1990s as a significant risk factor for Alzheimer’s. Update: I shouldn't have used "always" there, since this was picked up during Phase II. But that shows that Wyeth and Elan did have it in mind as something to look for. The Phase III trials will, in fact, be stratified according to APOE4 status. And so it turned out – but not as dramatically as everyone had been hoping. About one-third of Alzheimer’s patients lack the APOE4 allele, and this cohort showed slower decline in their brain functions with bapineuzumab treatment. But how much slower? The trial used a standard survey scale (ADAS-COG) – on that one, the existing Alzheimer’s drugs (Aricept, e.g.) show at most a three-point effect, while bapineuzumab showed a five-point change.

That’s probably real, but I’m not sure how much that’s going to mean in the real world, and it’s certainly less than one would want. On top of that, the drug showed little or no benefit (and more side effects) in the two-thirds of the patients who have the APOE4 alleles, which meant that when all patients in the trial were taken together, improvement over placebo didn’t reach significance. And since this trial doesn’t seem to have been designed from the start to distinguish between those different patient groups, that’s the only number that you can take away with any certainty. All the other analyses are ex post facto, and thus carry less weight.
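
Just to make that last point concrete, here's a quick simulation of why post-hoc subgroup findings deserve extra skepticism: even for a drug with no effect at all, slicing a trial into enough after-the-fact subgroups will regularly turn up at least one "significant" result. The patient numbers and subgroup count below are arbitrary illustrations, not a model of the actual bapineuzumab study.

```python
# Simulate a two-arm trial where the drug does nothing, then slice the patients
# into post-hoc subgroups and count how often at least one subgroup reaches
# p < 0.05. The sizes and subgroup count are made-up illustrative numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_per_arm, n_subgroups = 2000, 240, 8

false_alarms = 0
for _ in range(n_trials):
    drug = rng.normal(0.0, 1.0, n_per_arm)       # no true treatment effect
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    subgroup = rng.integers(0, n_subgroups, n_per_arm)   # labels assigned after the fact
    pvalues = [stats.ttest_ind(drug[subgroup == g], placebo[subgroup == g]).pvalue
               for g in range(n_subgroups)]
    false_alarms += any(p < 0.05 for p in pvalues)

print(f"Trials with a 'significant' subgroup despite zero effect: {false_alarms / n_trials:.0%}")
# Expect something like a third of the trials, which is why pre-specified
# stratification (as in the planned Phase III) counts for so much more.
```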

Investors, some of whom were clearly expecting a lot more than this, have not reacted well to the news: Elan’s drop has been taking the whole Irish stock exchange down along with it today. They have several other Alzheimer’s therapies in development, but worries are starting to develop about the effectiveness of all the approaches that target amyloid. You can see some of those concerns being aired out in the latter half of the Forbes article. Some of the stronger statements are from people who are backing alternate hypotheses, which you should keep in mind, but there’s no doubt that the amyloid hypothesis for Alzheimer’s is still very much unproven. (Perhaps Lilly can shed some light today, but I doubt it, to tell you the truth). It’s going to be a long time before we can stop using that disclaimer that I had in the first paragraph.

Comments (3) + TrackBacks (0) | Category: Alzheimer's Disease

July 29, 2008

Iloperidone: A Schizophrenia Drug Goes Down For the Last Time

Posted by Derek

I've talked about a lot of difficult therapeutic areas, but here's another boulevard of broken dreams: schizophrenia drugs. I was working on follow-ups to a promising clinical candidate, which has since been promising a number of times without ever delivering. It certainly missed its endpoints in schizophrenia by a mile in Phase II. That was actually my introduction to the drug industry back in 1989 - I followed that up with several years working on Alzheimer's, another notorious graveyard of good ideas, which makes me wonder why I didn't just quit at some point and open that chain of all-you-can-eat catfish restaurants that the Northeast so desperately needs.

Of course, once in a while a drug in one of these areas actually works a bit, and since there's a huge underserved market out there, it's a prize worth seeking (ask Lilly or J&J). But clinical success rates are absolutely horrific in the whole CNS area, and the latest company to demonstrate this is Vanda Pharmaceuticals in Maryland (I've always wondered if they're named after a spectacular, and spectacularly finicky, genus of orchid).

Vanda's drug iloperidone has been kicking around for years now. Hoechst Marion Roussel (now part of Sanofi-Aventis) seems to have discovered it in the early 1990s, and they, Novartis, and Titan have all handed it off to someone else over the years. Vanda was the last in line, but they got the dreaded "Not Approvable" letter from the FDA yesterday, and the company's stock was blitzed, down 73 per cent at the close. And the thing is, this drug got a lot closer than anything I used to work on. Vanda did hit their endpoints against placebo and against haloperidol, but the problem is, these are not necessarily the standard of care in schizophrenia:

" The FDA stated that Vanda had demonstrated the effectiveness of iloperidone at 24 mg/day in the 3101 study for which the company reported results in December, 2006, and that the efficacy was similar to the active comparator, ziprasidone (Geodon(R), Pfizer Inc.). In addition, the FDA also stated that iloperidone was superior to placebo in patients with schizophrenia at doses of 12-16 mg/day and 20-24 mg/day in a prior study. However, the FDA expressed concern about the efficacy of iloperidone in patients with schizophrenia relative to the active comparator, risperidone (Risperdal(R), Johnson & Johnson), used in prior studies. The FDA indicated that it would require an additional trial comparing iloperidone to placebo and including an active comparator such as olanzapine (Zyprexa(R), Eli Lilly & Company) or risperidone in patients with schizophrenia to demonstrate the compound's efficacy further. The FDA also stated that it would require Vanda to obtain additional safety data for patients at a dose range of 20 to 24 mg/day."

So iloperidone works, but quite possibly not well enough compared to what's already on the market. That alone won't quite sink your drug - you can always hunt for a patient cohort that benefits from a new compound, and you'll quite likely be able to find one if you have the resources. But as that last line mentions, there are additional safety concerns.

Reading between the lines, it would appear that iloperidone had the best chance of distinguishing itself in efficacy at the higher doses, but that the FDA wanted to make sure that side effects didn't start kicking in up there. This paper makes you wonder if one problem is the (dreaded) QT interval prolongation. Many other factors have looked relatively clean in some of the reported trials.

I greatly doubt if we'll see iloperidone surface again. Vanda wouldn't seem to have the resources, and too many other organizations have passed on it. At this point, it's hard to see why more money would be put into the compound. . .

Comments (12) + TrackBacks (0) | Category: Business and Markets | Clinical Trials | The Central Nervous System

July 28, 2008

Questions You Don't Necessarily Want the Answers To . . .

Posted by Derek

1. “Hey, who dropped that condenser out on the floor in front of my hood? That looks just like the one I had on my reaction flask. . .”

2. “How come the toxicology people haven’t called me about our lead compound yet? Two-week tox finished a while ago, and usually they’re a lot faster than this. . . “

3. “Is there any active aluminum compound left in this reaction or what? I keep dripping methanol into it to quench it, and nothing’s going on at all so far. . .”

4. “Who’s going to scale up our candidate compound, anyway? We need 300 grams of the stuff, and the scale-up group is booked solid. . .”

5. “So, is this the high-pressure hydrogen line or the low-pressure one that I’m opening?”

6. “I wonder what the error bars are on that behavioral assay. . .”

Comments (30) + TrackBacks (0) | Category: Life in the Drug Labs

July 25, 2008

Should Genentech Be a Part of Roche?

Posted by Derek

Now for some belated Roche/Genentech comments: the first thing that I found surprising about this was that there was some surprise involved. Even though a move to buy the rest of Genentech has always been a possibility, the actual timing of the announcement seems to have been unexpected out in South San Francisco. But it probably had to be that way.

What alternative was there? Roche wouldn’t have made some announcement along the lines of “You know, we’re thinking about buying the rest of Genentech sometime pretty soon”, because that would have made the deal even more expensive as everyone piled into the stock. Regulation FD would mean that they really couldn’t give a heads-up to Genentech’s management without telling the world – after all, these are two separate companies, so it’s not an internal matter. So this had to be done just like any company making a bid for any other – with the difference being that Roche already had a head start on a majority of Genentech.

The second thing that occurred to me, though, was “why, and why now?” The first half of that question is answered, as most “I wonder how come. . .” queries are, with the word “money”. Genentech has been coining the stuff, and Roche would like to have all that revenue instead of just part of it. “Why now” comes down to money, too. The two companies were due to renegotiate their revenue sharing in 2015, and Roche apparently decided (among other factors) that the US dollar was about as cheap as it was going to get. You could turn the question around and ask why Roche took the whole don't-own-it-all approach in the first place. (They did own it all for a while, but put Genentech back out into the market in 1999).

I always assumed that they were worried about messing up whatever it was that had Genentech doing so well in the first place. If true, that showed an admirable level of self-knowledge on Roche’s part. Too many other companies seem to assume that the outfits that they buy will be just fine under the new letterhead – even better, probably, now that they’ve been bought by such a fine bunch of people! But it certainly doesn’t always work out that way. The challenge is to keep the acquired company, and its culture, from dissolving into the larger one like a sugar cube. (The alternative is to just buy companies for their physical or IP assets, not giving a hoot for who might be working there, and we’ve seen plenty of that, too).

But Genentech is a mighty big sugar cube, and it’s a long way from the rest of Roche’s operations. I’d guess that the folks in Basel are planning (hoping) that Genentech will go on just like it has, just with a few accounting adjustments (like all the money ending up on Roche’s books). There are probably a lot of reassuring messages going out to the Genentech people about how gosh, we already had a majority interest in you, so this is just sort of a formality, and it’ll probably save lots of money besides, you know, so just keep right on doing what you’re doing. . .

We’ll see. The Swiss are not known for their delicate managerial touch. One solution that's been talked about would be (once Roche has Avastin et al. safely booked) for them to spin out a new version of Genentech as a publicly traded company - 1999 all over again. And we'll see if Genentech even goes for the offer - there's a lot of doubt about that, at least at the price Roche is offering. They've apparently opted out of the provisions in the 1999 agreement about how Roche could buy them, so all sorts of things are now possible. . .

Comments (27) + TrackBacks (0) | Category: Business and Markets

July 24, 2008

Confident

Posted by Derek

I’m going to expand on one of the points brought up yesterday, about the reported drug industry executive who was confident that his company’s Alzheimer’s therapy was ready to go out and make billions of dollars. It was that word “confident” that set me off, I think.

Because that’s not a word that you hear much of in this industry. The strongest form that you’ll come across is something like “fairly confident”, which is how you feel when you send in a compound that’s a minor change off something that’s already active, or how you feel about screening a target that’s a close homologue of something you already have plenty of ligands for. You can be pretty sure in those cases that something’s going to hit – but you’ll note that both of those are pretty far upstream in the drug discovery process. As you move toward animals, that confidence begins to look pretty ragged, and depending on the disease, it can just flat-out evaporate.

Despite all our efforts to avoid the expensive little beasts, there is still no way to be sure about how your compound is going to act in an animal until you’ve put it into an animal. That goes for predicting its peak blood levels, its half-life, its metabolites, and the duration and degree of its efficacy. You can have your compounds all ranked in order of how you think they’ll perform, and that list will, every time, be reordered after a first round of animal testing.

And when you go further, you really have no idea. As I’ve said here before, if you don’t cross your fingers when you take a compound into two-week toxicity testing, you haven’t been doing this stuff very long. Despite all efforts to avoid this expensive step, two-week (and four-week and longer) tox testing in animals will always, always tell you things you didn’t know. (Most of the time it’ll tell you things you didn’t particularly want to hear). No one worth their salary will ever use the adjective “confident” before the first multiweek tox data come in.

So much for animals: how about people? Well, despite all our efforts, there are still surprises in Phase I dosing, the tip-toe clinical stage where you look for blood levels in healthy volunteers. The animal pharmacokinetic data tell you where to start the doses in humans, but you can still get ambushed. I worked on a receptor agonist project once where the human blood levels came back at just about 10% of what we’d predicted, so back to the drawing board we went. No, I’ve never heard anyone describe themselves as “confident” before Phase I.

And that’s an easy step compared to Phase II, where for the first time you put your drug into sick patients. The failure rate in Phase II is just abominable, and stands as an indictment of just how little we understand about the biochemistry of human disease and how to modify it. When you consider a central nervous system disease like Alzheimer's, the source of the "confident" quote that started this digression, the failure rate is over 90%. Our understanding of the causes and progression of Alzheimer's is very poor. That's as opposed to a more well-worked-out condition like, say, hypertension, where our understanding is merely quite inadequate.

But if you make it through that fine sieve, you move on to Phase III, a larger and more real-world look at the patient population. If your Phase II trial was designed to provide a robust test, rather than just to make you and your investors feel good, you can hope that your Phase III will work out. But the whole time it's going on, the prudent drug developer will remember that the biggest, most well-funded, and most competent research organizations in the world have all taken huge cratering dives in Phase III. You know a lot more about your compound by this stage, so these disasters don't happen as often - but that means that when they do, they rise right up out of the floor in front of you. No, you can feel better by Phase III, but "confident" is pushing it.

How about when your drug goes to the FDA? Try asking any drug company executive if they'd like to go on record as being "confident" of regulatory approval. And when your drug actually goes to market? Is anyone really confident about those projections from the people in marketing? Pfizer sure talked a good game about Exubera, remember. Don't forget, too, that nasty side effects can always be waiting out there in the larger patient population. Even after your drug goes out and starts earning a living, it can be completely torpedoed at any time. Baycol, Vioxx, Avandia - you can name more.

So that's the story: you can never kick back and relax in this business. For all the perception that some people have of the drug industry as a sure-fire money machine, it sure doesn't look that way from inside. Anyone who describes themselves as "confident" about their new experimental medication is trying to fool their listeners. Or themselves. Maybe both.

Comments (11) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Patents and IP

July 23, 2008

Patents Stopping an Alzheimer's Wonder Drug?

Posted by Derek

A longtime reader sent along a very interesting example that’s being used in a new book. The Gridlock Economy by Columbia law professor Michael Heller is getting some good press, including this interview over at the Wall Street Journal’s Law Blog. Heller’s thesis is:

“When too many owners control a single resource, cooperation breaks down, wealth disappears and everybody loses.” That is, the gridlock created by too much private ownership is wreaking havoc on our economy and lives. It’s keeping badly needed runways from being built, stifling high-tech innovation, and “costing lives” by keeping groundbreaking drugs from hitting the market.

It’s that last example that caught the eye of my correspondent, and I wanted more details. Fortunately, Heller went on in the interview to talk about that very case, and I’m going to just quote him on it:

“Here’s a life or death example that’s happening right now: A drug company executive tells me he may have a better Alzheimer’s treatment. But to get FDA approval and bring it to market, he has to license dozens and dozens of patents relevant to testing for safety and side effects. So negotiations fail and the Alzheimer’s drug sits on a shelf, even though my informant is confident it could save countless lives and earn billions of dollars.”

Now, here’s the problem: I’ve actually worked on Alzheimer’s disease myself, and this story does not ring true. I don’t know if Heller’s “informant” is talking about animal testing or clinical trials in humans, but the same points hold in both cases. For one thing, I’m not aware of any patents that have to be licensed to do the standard testing for safety and side effects. There could conceivably be a couple for faster or more convenient tests, but I don’t even know of those. Otherwise, safety testing, in both animals and humans, is (to the best of my knowledge) done pretty much outside the realm of patent considerations. That “dozens and dozens of patents” line seems wildly off to me. I have never heard of a drug (for any disease) that failed to advance because of patent considerations related to safety testing.

Update - and that's partly for a very good legal reason: the safe harbor provisions of the 1984 Hatch-Waxman Act, as reaffirmed in the 2005 Merck v. Integra decision by the Supreme Court. There is specific protection from infringement in the use of a patented compound for purposes of submitting regulatory filings. And the language of the ruling makes it look like it's intended to cover all sorts of patented technologies as well.

Second, it’s important to remember that efficacy testing comes after safety, at least when you get to humans. So this contact of Heller’s is talking about a drug that has not been evaluated in humans for either safety or efficacy, but he’s still “confident it could save countless lives and earn billions of dollars”. Right – for Alzheimer’s, where you have to worry about human brain levels, where we’re still arguing about what even causes the whole disease, where the clinical trials take years because the deterioration is so slow. Professor Heller is being had.

And let’s stipulate that there are, somehow, enough convincing data to make a reasonable observer confident that said drug would go on to earn billions of dollars. (There is never enough information to completely convince anyone of that in this industry before a drug hits the market, but let’s pretend that there is). In that case, those mysterious patent negotiations would not fail. Some sort of agreement would be reached, with money like that on the table.

The problem with Heller using this example is that there are indeed a lot of problems and potential problems with intellectual property in the drug industry. (I’ve talked about a few of them here). It’s a big, important, complicated topic – and for all I know, it gets a good treatment in Heller’s book. (I’ll read it and find out). But this cartoon of an example is going to confuse anyone outside the field, and irritate anyone inside it.

Comments (24) + TrackBacks (0) | Category: Alzheimer's Disease | Patents and IP

July 22, 2008

Vytorin: Another Round of Nasty Results

Posted by Derek

Merck took the unusual step of delaying its earnings release yesterday until after the close of the market. A report on another clinical study of Vytorin (ezetimibe plus simvastatin), their drug with Schering-Plough, was coming out, so they put the numbers on hold until after the press release yesterday afternoon. Naturally, this led to a lot of speculation about what was going on. A conspiracy-minded website vastly unfriendly to Schering-Plough suspected some sort of elaborate ruse to drum up publicity.

But that sort of thinking doesn't take you very far, unless you count the distance you rack up going around in circles. As it turned out, the SEAS trial (Simvastatin and Ezetimibe in Aortic Stenosis) was, in fact, very bad publicity indeed for the drug and for both companies. In fact, a real conspiracy would have made sure that these numbers never saw the light of day, or were at least released at 6 PM on a Friday. But no, the spotlight was on them good and proper.

This trial studied patients with chronic aortic stenosis, which is a different condition from classic atherosclerosis. The two have enough similarities, though, that there has been much interest in whether statin treatment could be effective. The primary endpoint, a composite of aortic valve and general cardiovascular events, was missed. Vytorin was no better than placebo. It reached significance against one secondary endpoint, reducing the risk of various ischemic events, but not in any dramatic fashion.

That's not necessarily a surprise, since there's not a well-established therapy for aortic stenosis (thus the trial design versus placebo). As several commenters to the conference call after the press conference pointed out, this shouldn't change clinical practice much at all. But it's not what Merck and Schering-Plough needed to hear, that's for sure, because the sound bite will be "Vytorin Fails Again".

Actually, the sound bite will be even worse than that. There are a lot of headlines this morning about another observation from the SEAS trial: that significantly more patients in the treatment arm of the study were diagnosed with cancer. That's a red warning light, for sure, but in this case we have at least some data to decide how much of one.

For one thing, as far as I know there have been no reports of increased cancer among the patients taking Vytorin out in the marketplace - of course, one could argue that this might have been missed, but if the effect were as large as seen in the SEAS study, I don't think it would have been. Analyses of the earlier Vytorin trials and the ongoing IMPROVE-IT trial versus Zocor have also shown no cancer risk, and the latter trial is continuing. So for now, it would appear that either this was a nasty result by chance, or (a longer shot) that there's something different about the aortic stenosis patients that leads to major trouble with Vytorin.

None of these scientific and statistical arguments, and I mean none of them, will avail Schering-Plough and Merck. Among people who've heard of Vytorin at all, the first thing that will come to mind is "doesn't work", and after today's headlines, the second thing that will come to mind is "cancer". Just what you want, to put out press releases that your compound, even though it failed to work again, isn't actually a cancer risk. You really couldn't do worse; a gang of saboteurs couldn't have done worse. Of course, there's no such gang: the companies themselves authorized these trials, thinking that there were home runs to be hit. But all these sidelines - familial hypercholesterolemia, aortic stenosis - have only sown fear, confusion, and doubt. The only thing that I can see rescuing Vytorin as a useful drug is for the IMPROVE-IT results to show really robust efficacy in its real-world patients. And I wonder if even that could be enough.

Comments (19) + TrackBacks (0) | Category: Business and Markets | Cancer | Cardiovascular Disease | Clinical Trials | Toxicology

July 21, 2008

Backtracking, Necessary and Unnecessary

Posted by Derek

One of the things that no one realizes about research (until they’ve done some) is how much time can be spent going back over things. Right now I’m fighting some experiments that should be working, have worked in the past, but have (for some reason) decided not to work at the moment. Irritating stuff. There’s a reason buried in there somewhere, and when I find it things will be that much more robust in the future, but I’d hoped that they were that solid already.

And across the hall, a check is going on of some screening hits. When you get a pile of fresh high-throughput screening data, including some fine-looking starting compounds for a new project, what do you do with it? Well, if you have some experience, the first thing you do is order up fresh samples of all the things you could possibly be interested in, and check every single one of them to make sure that they actually are what they say on the label. Don’t start any major efforts until this is finished.

In fact, you should order up solid samples from the archives along with some of the DMSO stock solution that they used in the screening assay. They might not be the same, not any more. False negatives and false positives are waiting in your data set, depend on it: compounds that should have hit, but didn’t because they decomposed in solution, and compounds that (sad to say) did hit only because they decomposed in solution. You’ll probably never know about the first group, and you can waste large amounts of time on the second unless you check them now.

Getting a project going, then, can seem like trying to get a dozen nine-year-olds into a van for a long trip. Someone’s always popping out again, having forgotten something, which reminds someone else, and your scheduled departure time arrives with everyone running in circles around the driveway.

But nine-year-olds can eventually be corralled, as can the variables in most scientific projects. But not always. Where you don’t want to be is the situation people had with the early vacuum-tube computers. Vacuum tubes have not-insignificant failure rates. So if you have, say, twenty thousand of the little gizmos in your ENIAC or whatever, doing the math on mean-time-between-failures shows you that the thing can run for maybe forty-five minutes before blowing a tube (unless you take heroic measures). And the more vacuum tubes you have, the worse the problem gets: make your computer big enough, and it’ll blow right after you throw the switch, every time.
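
To put rough numbers on that, here's the back-of-the-envelope version. The per-tube figure of 15,000 hours is an assumed round number chosen to make the arithmetic land near the forty-five minutes mentioned above, not a real ENIAC specification.

```python
# Back-of-the-envelope system MTBF for a machine built from many identical parts.
# Assumes independent, exponentially distributed failures; the per-tube MTBF is
# an illustrative assumption, not a measured value.

TUBE_MTBF_HOURS = 15_000   # assumed mean time between failures for one tube
N_TUBES = 20_000           # tubes in the hypothetical machine

# Failure rates add, so the whole machine's MTBF is the single-tube MTBF divided
# by the number of tubes.
system_mtbf_minutes = (TUBE_MTBF_HOURS / N_TUBES) * 60
print(f"Expected run time between failures: {system_mtbf_minutes:.0f} minutes")  # ~45
```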

So that’s the other thing you have to watch when troubleshooting: try to make sure that your problems aren’t built into the very structure of what you’re trying to do. In med-chem projects, look out for statements like “we have all the activity we need, now we just need to get past the blood-brain barrier”. Sometimes there’s a way out of those tight spots, but too often the properties that (for example) could get your compound into the brain are just flat incompatible with the ones that gave you that activity in your assay. You’d have been better off approaching that combination the other way around, and better off realizing that months ago. . .

Comments (8) + TrackBacks (0) | Category: Life in the Drug Labs

July 18, 2008

Lowe's Law of Diurnal Distribution

Posted by Derek

Here’s an appropriate topic for a Friday, although at first many of you may think I’ve lost my mind. What would happen if you combed the full text of the experimental sections of the chemistry journals, looking for how long people ran their reactions?

I’m pretty sure that I know what you’d see: there would be a lot of scatter in the short time periods, with some peaks at the various half-hour and hour marks just for convenience. But as you went out into the multiple-hour procedures, I feel sure that you’d see pronounced spikes in the data at around sixteen to twenty hours and again at around 72 hours.

Some readers have doubtless started nodding their heads, having done the math. Those times correspond to "overnight" and "over the weekend", and I'm willing to bet that they're over-represented (and how) in the data set. I'll go on to predict scarce examples in, say, the 14-hour or 38-hour ranges - there's not much way to run a reaction for those intervals and not be in the lab too early in the morning or too late at night.

A second-order prediction is that when such reactions are found, their origins will skew heavily toward academia rather than industry. And I'm also willing to bet that patent procedures will tend to follow the working-day timelines more than the general literature, for the same reasons. My last higher-order prediction is that the reaction times would not, in fact, obey Benford's Law, as many other data sets of this kind do.

As far as I know, no one's ever done this sort of analysis, but I suppose it would be possible, especially for someone at Chemical Abstracts or at one of the scientific publishers. If someone wants to try it, please let me know what comes out. And if the results follow my predictions, please feel free to refer to the title of this post or something similar. I won't object.
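
For anyone tempted to try, a minimal sketch of the text-mining step might look like the following. The directory of plain-text experimental sections and the regular expression are invented for illustration; the real obstacle, of course, is getting the full text in a usable form in the first place.

```python
# Sketch: pull "stirred for 16 h"-style reaction times out of plain-text
# experimental sections and print a crude histogram. The corpus location and
# the regex are illustrative assumptions, not any journal's actual format.
import glob
import re
from collections import Counter

TIME_RE = re.compile(r"(\d+(?:\.\d+)?)\s*(?:h|hr|hrs|hours?)\b", re.IGNORECASE)

counts = Counter()
for path in glob.glob("experimental_sections/*.txt"):   # hypothetical corpus
    with open(path, encoding="utf-8") as fh:
        for value in TIME_RE.findall(fh.read()):
            counts[round(float(value))] += 1

# The prediction: spikes around 16-20 h ("overnight") and 72 h ("over the
# weekend"), and a desert at awkward intervals like 14 h or 38 h.
for hours in sorted(counts):
    print(f"{hours:>4} h  {'#' * counts[hours]}")
```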

Comments (31) + TrackBacks (0) | Category: Academia (vs. Industry) | Life in the Drug Labs | The Scientific Literature

July 16, 2008

Receptors: Can't Live With 'Em, Can't Understand 'Em

Posted by Derek

At various points in my drug discovery career, I’ve worked on G-protein-coupled receptor (GPCR) targets. Most everyone in the drug industry has at some point – a significant fraction of the known drugs work through them, even though we have a heck of a time knowing what their structures are like.

For those outside the field, GPCRs are a ubiquitous mode of signaling between the interior of a cell and what’s going on outside it, which accounts for the hundreds of different types of the things. They’re all large proteins that sit in the cell membrane, looped around so that some of their surfaces are on the outside and some poke through to the inside. The outside folds have a defined binding site for some particular ligand - a small molecule or protein – and the inside surfaces interact with a variety of other signaling proteins, first among them being the G-proteins of the name. When a receptor’s ligand binds from the outside, that sets off some sort of big shape change. The protein’s coils slide and shift around in response, which changes its exposed surfaces and binding patterns on the inside face. Suddenly different proteins are bound and released there, which sets off the various chemical signaling cascades inside the cell.

The reason we like GPCRs is that many of them have binding sites for small molecules, like the neurotransmitters. Dopamine, serotonin, acetylcholine – these are molecules that medicinal chemists can really get their hands around. The receptors that bind whole other proteins as external ligands are definitely a tougher bunch to work with, but we’ve still found many small molecules that will interact with some of them.

Naturally, there are at least two modes of signaling a GPCR can engage in: on and off. A ligand that comes in and sets off the intracellular signaling is called an agonist, and one that binds but doesn’t set off those signals is called an antagonist. Antagonist molecules will also gum up the works and block agonists from doing their thing. We have an easier time making those, naturally, since there are dozens of ways to mess up a process for every way there is of running it correctly!

Now, when I was first working in the GPCR field almost twenty years ago, it was reasonably straightforward. You had your agonists and you had your antagonists – well, OK, there were those irritating partial agonists, true. Those things set off the desired cellular signal, but never at the levels that a full agonist would, for some reason. And there were a lot of odd behaviors that no one quite knew how to explain, but we tried to not let those bother us.

These days, it’s become clear that GPCRs are not so simple. There appear to be some, for example, whose default setting is “on”, with no agonist needed. People are still arguing about how many receptors do this in the wild, but there seems little doubt that it does go on. These constitutively active receptors can be turned off, though, by the binding of some ligands, which are known as inverse agonists, and there are others, good old antagonists, that can block the action of the inverse agonists. Figuring out which receptors do this sort of thing - and which drugs - is a full-time job for a lot of people.
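
One standard way to keep all of this straight is the two-state receptor model: picture the receptor flipping between an inactive and an active conformation even with nothing bound, with each ligand shifting that equilibrium according to which state it binds more tightly. Here's a toy version of that model; the equilibrium constant and the binding numbers are arbitrary illustrations, not measured parameters for any real receptor.

```python
# Toy two-state receptor model. The receptor sits in an equilibrium between an
# inactive state R and an active state R*, with L = [R*]/[R] setting the level
# of constitutive activity. A ligand that binds R* more tightly acts as an
# agonist, one that prefers R acts as an inverse agonist, and one with no
# preference is a neutral antagonist. All numbers are arbitrary illustrations.

def fraction_active(conc, k_inactive, k_active, L=0.1):
    """Fraction of receptors in the active state at a given ligand concentration."""
    active = L * (1 + conc / k_active)        # R* plus ligand-bound R*
    inactive = 1 + conc / k_inactive          # R plus ligand-bound R
    return active / (active + inactive)

baseline = fraction_active(0.0, 1.0, 1.0)                        # constitutive signal
agonist = fraction_active(10.0, k_inactive=1.0, k_active=0.01)   # prefers the active state
inverse = fraction_active(10.0, k_inactive=0.01, k_active=1.0)   # prefers the inactive state
neutral = fraction_active(10.0, k_inactive=0.1, k_active=0.1)    # no preference

print(f"no ligand:          {baseline:.2f}")
print(f"agonist:            {agonist:.2f}")   # well above baseline
print(f"inverse agonist:    {inverse:.2f}")   # below baseline
print(f"neutral antagonist: {neutral:.2f}")   # about the same as baseline
```

A neutral antagonist doesn't move the needle by itself in this picture, but since it occupies the binding site, it blocks both agonists and inverse agonists from doing so, which matches the behavior described above.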

It’s also been appreciated in recent years that GPCRs don’t just float around by themselves on the cell surface. Many of them interact with other nearby receptors, binding side-by-side with them, and their activities can vary depending on the environment they’re in. The search is on for compounds that will recognize receptor dimers over the good ol’ monomeric forms, and the search is also on for figuring out what those will do once we have them. To add to the fun, these various dimers can be with other receptors of their own kind (homodimers) or with totally different ones, some from different families entirely (heterodimers). This area of research is definitely heating up.

And recently, I came across a paper which looked at how a standard GPCR can respond differently to an agonist depending on where it's located in the membrane. We're starting to understand how heterogeneous the lipids in that membrane are, and that receptors can move from one domain to another depending on what's binding to them (either on their outside or inside faces). The techniques to study this kind of thing are not trivial, to put it mildly, and we're only just getting started on figuring out what's going on out there in the real world in real time. Doubtless many bizarre surprises await.

So, once again, the "nothing is simple" rule prevails. This kind of thing is why I can't completely succumb to the gloom that sometimes spreads over the industry. There's just so much that we don't know, and so much to work on, and so many people that need what we're trying to discover, that I can't believe that the whole enterprise is in as much trouble as (sometimes) it seems. . .

Comments (20) + TrackBacks (0) | Category: Biological News | Drug Assays

July 15, 2008

Metabolic Hope Springs Eternal

Posted by Derek

Now, if I were still doing metabolic disease work, I'd be all over this target: CAMKK2, which is mercifully short for "Ca2+/calmodulin-dependent protein kinase kinase 2". (Kinase nomenclature has been out of hand for years, in case you're wondering).

CAMKK2 is right in the middle of a lot of pathways that are known to be important for regulation of appetite and glucose levels, namely ghrelin, AMPK, and NPY. These have been rather hard to approach directly with small molecules, or (in the case of NPY) hitting them hasn't been enough by itself. That's the problem with a lot of potential therapies for obesity, as I've mentioned here before. As a behavior, eating is full of overlapping backup redundant pathways, since we're all descendants of creatures that ate whatever they could, whenever they could. The ones whose feeding could be easily shut down or interrupted didn't make it this far.

So even though the field is littered with things that haven't worked out, perhaps a target like this, which seems to be more upstream, might have a better chance of success. We're definitely going to find out. Given the number of companies interested in this area, and the number with kinase expertise, someone's going to be able to take a good swing at this one. The benefits might go beyond weight loss - animals given a known inhibitor (STO-609, a Sumitomo compound) were also resistant to the bad effects of a high-fat diet, putting on less weight than controls and showing better glucose control.

Of course, the fact that Sumitomo had a compound years ago that hits this target so well makes you wonder what ever happened to it. I can't find much about why it didn't progress, but you can be sure that other people are asking that same question right now. . . Update: see this comment for more on this topic. . .

Comments (11) + TrackBacks (0) | Category: Diabetes and Obesity

July 14, 2008

Things I Won't Work With: Cyanogen Azide

Posted by Derek

Cyanogen bromide is not a nice reagent. It’s not quite on my list of things that I refuse to use, but it’s definitely well up on the list of the ones I’d rather find an alternative to. The stuff is very toxic and very volatile, and reactive as can be.

But it’s not the worst thing in its family. A good candidate for that would be cyanogen azide, which you get by reacting the bromide with good old sodium azide. Good old sodium azide, which is no mean poison itself, will do that with just about any bromide that’s capable of being displaced at all. Azide is one of the Nucleophiles of the Gods, like thiolate anions – if your leaving group doesn’t leave when those things barge in, you need to adjust your thoughts about it. Cyanogen bromide (or chloride) doesn't stand a chance.

Cyanogen azide is trouble right from its empirical formula: CN4, not one hydrogen atom to its name. A molecular weight of 68 means that you’re dealing with a small, lively compound, but when the stuff is 82 per cent nitrogen, you can be sure that it’s yearning to be smaller and livelier still. That’s a common theme in explosives, this longing to return to the gaseous state, and nitrogen-nitrogen bonds are especially known for that spiritual tendency.
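
(For anyone who wants to check those two figures, the arithmetic is quick, using standard atomic weights:)

```python
# Quick check of the numbers quoted for cyanogen azide, CN4.
C, N = 12.011, 14.007          # standard atomic weights
mw = C + 4 * N
print(f"Molecular weight: {mw:.1f}")          # ~68
print(f"Nitrogen content: {4 * N / mw:.1%}")  # ~82%
```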

There were scattered reports of the compound in the older German and French literature, but since these referred to the isolation of crystalline compounds which did not necessarily blow the lab windows out, they were clearly mistaken. F. D. Marsh at DuPont made the real thing in the 1960s (first report here, follow-up after eight no-doubt-exciting years here). It's a clear oil, though not that many people have seen it in that state, or at least not for long. Marsh's papers are, most appropriately, well marbled with warnings about how to handle the stuff. It's described as "a colorless oil which detonates with great violence when subjected to mild mechanical, thermal, or electrical shock", and apologies are made for the fact that most of its properties have been determined in dilute solution. For example, its boiling point, the 1972 paper notes dryly, has not been determined. (The person who determined it would have to communicate the data from the afterworld, for one thing).

The experimental section notes several things that the careless researcher might not have thought about. For one thing, you don't want to make more than a 5% solution in nonpolar solvents. Anything higher and you run the risk of having the pure stuff suddenly come out of solution and oil out on the bottom of the flask, and you certainly don't want that. You also don't want to make a solution in anything that's significantly more volatile than the azide, because then the solvent can evaporate on you, making a more concentrated stock below, and you don't want that, either. Finally, you don't want to put any of these solutions in the freezer - a particularly timely warning, since that's one of the first things many people might be tempted to do - because that'll also concentrate the azide as the solvent freezes. And you don't want that. Do you?

Actually, the careless researcher shouldn't even work with cyanogen azide, or anything like it, but you never can tell what fools will get up to. The compound has around a hundred references in the literature, a good percentage of which are theoretical and computational. Most of the others are physical chemistry, studying its decomposition and reactive properties. You do run into a few papers that actually use it as a reagent in synthesis, but I believe that those can be counted on the fingers, which is a good opportunity to remind oneself why they're all still attached.
[Structure image: tetrazine derivative]
In fact, the reason I got to thinking about this wonderful little reagent was a recent paper in Angewandte Chemie, which details the preparation of horrible compounds like the one shown. But what does the experimental section spend the most time warning you about? The cyanogen azide used to make them. Enough said.


Comments (31) + TrackBacks (0) | Category: Things I Won't Work With

July 11, 2008

Sharing the Enlightenment

Email This Entry

Posted by Derek

Here's an interesting idea: Merck, Lilly, and Pfizer are bankrolling a startup company to look for new technologies for drug development. Enlight Biosciences will focus on the biggest bottlenecks and risk points in the process, including new imaging techniques for preclinical and clinical evaluation of drug candidates, predictive toxicology and pharmacokinetics, clinical biomarkers, new models of disease, delivery methods for protein- and nucleic acid-based therapies, and so on.

It's safe to say that if any real advances are made in any of these, the venture will have to be classed as a success. These are hard problems, and it's not like there's been no financial incentive to solve any of them. (On the contrary - billions of dollars are out there waiting for anyone who can truly do a better job at these things). I wish these people a lot of luck, and I'm glad to see them doing what they're doing, but I do wish that there were more details available on how they plan to go about things. The opening press release leaves a lot of things unspoken, no doubt by design. (For instance, where are the labs going to be? What's the hoped-for balance of industry types to academics? How many people do they plan to have working on these things, and how will the companies involved plan to share the resulting technologies?)

Enlight is a creation of Puretech Ventures, a Boston VC firm that's been targeting early-stage ideas in these areas. Getting buy-in from the three companies above will definitely help, but their commitment isn't too clear at present. For now, it looks like they're getting to take a fresh look at some areas of great interest, without necessarily having to spend a lot of their own money. The press release says that Enlight will "direct up to $39 million" toward the areas listed on their web site, but those problems will eat thirty-nine million dollars without even reaching for the salt. Further funding is no doubt in the works, with the Merck/Pfizer/Lilly names as a guarantee of seriousness, and if any of these projects pan out, the money will arrive with alacrity.

Comments (11) + TrackBacks (0) | Category: Business and Markets | Drug Assays | Drug Development

July 10, 2008

More on Outsourcing

Email This Entry

Posted by Derek

In the wake of continued expansion of medicinal chemistry efforts in China, a discussion between me and some of my colleagues at work had me sticking to my positions: (1) Scientific outsourcing is not going to go away, although it may move from country to country as costs change. (2) If you’re going to stay employed as a medicinal chemist in a high-wage area like the US, you have to bring something that can’t be purchased so easily overseas.

We got to discussing what that something is. One position was that it could be fast in-house turnaround time, which is true enough, but that one makes me uneasy. It is easier to run a fast-moving project with in-house chemistry, because you can react more quickly to changes. The cycle time for stuff that’s being done in India and China is always going to be longer. But I expect that the outsourcing outfits are working on that problem, too, in order to bring in more business. So if you’re going to compete with them just on the basis of turnaround, you’re saying that you’ll always be able to make the compounds quickly enough to justify your higher salary. Not, I think, necessarily a safe bet.

I’d rather not try to outdo the low-margin people at their own game. I held out for the high-wage advantages being things like idea generation, the ability to take on harder chemistry that doesn’t lend itself as well to making libraries of compounds, and the advantages of real-time interaction with the biologists, PK, and formulations people. You’ll note that all of these are harder than cranking out methyl-ethyl-butyl-futile analog lists. That’s outsourcing in a nutshell: the easy stuff can be done more cheaply somewhere else, so the hard stuff is going to be left for us. We’d better get used to it, and fast. (Some of that hard stuff will eventually be done offshore as well, but it’ll be more expensive to do, intrinsically, and offshore wages in general will have risen by then. The big cost savings will be at the margin, for the routine work, and I expect other countries to rise up and take business away from India and China as their economies improve).

A few more points: I get a fairly constant stream of complaints about the whole business of outsourcing, but I have to say that I don’t see the point of many of them. I mean, I understand why people are upset, but I don’t see what complaining about it is supposed to lead to. What are we going to do, lobby for a law that forbids any aspect of drug discovery to take place outside our borders? Whether you think that’s a good idea or not, it’s not going to happen, any more than we’re going to do the same thing for clothing, cars, or candy bars. If it’s feasible and effective to do something more cheaply, companies will do it more cheaply. ‘Twas ever thus.

It’s true that there’s room to argue about how appropriate all the chemistry outsourcing is. Some of it is surely being misused, and there are surely some companies that are (or will try) outsourcing too much of their expertise, then ending up less effective than they would have been. Trends are taken to extremes, before things settle back. But things are never going to settle back to the pre-outsourcing employment situation for chemists. For better or worse – and I still think that overall, it’s for better – industrial science can now be found (and contracted for) around the globe.

Comments (38) + TrackBacks (0) | Category: Business and Markets

July 9, 2008

How's The New Boss Doing?

Email This Entry

Posted by Derek

Here’s a question that came up in a discussion at work the other day: when a new head of research comes in, how long should you give them before judging how they’re doing?

That’s a tough one to answer, I think, because there are a lot of variables. First is the size of the outfit, coupled with the scope of the position. A really big organization is a very, very hard thing to change, no matter how powerful the new person might be. I’m not at all sure how possible it is to change a company’s culture, but I’m pretty sure that it requires major shock therapy to do it. (If any of you have read C. N. Parkinson on what he calls “injelititis”, you’ll know the sort of thing I have in mind).

And different levels of authority affect processes with different timelines. A head of chemistry will be able to show results in less time than a head of research, who will need less time than a head of total R&D, because that person has to wait for the clinical results. As I’ve mentioned before, that seems to me to be one of the biggest challenges in this industry – the way that big changes can take years to work their way through to the results stage. It’s hard to steer intelligently if the front tires respond ten miles after you cut the wheel over hard.

You also have to ask what the new person is being asked to do. Steer the course on something that already seems to be working? Or shake the place up and make things happen (for once)? Expand the workforce, contract it, spend money or save it, stick with the existing therapeutic areas or branch into new ones? The job descriptions on these things are pretty wide-ranging, so the evaluations have to be, too. Without a clear idea of what the new boss is trying to do, it’s impossible to say how well it’s being done. You could wind up giving bozos credit for something that had nothing to do with them, or blaming excellent managers for things that were completely beyond their ability to control. (I know, I know, that kind of thing happens all the time, but you don’t have to add to it if you can help it).

So, how long for an evaluation, then? One to three years for head of chemistry, five or six for head of research, up to ten for head of R&D (if they last that long)? I'd be interested in hearing other estimates. . .

Comments (15) + TrackBacks (0) | Category: Life in the Drug Labs

July 8, 2008

Glaxo Asks the Eurocrats

Email This Entry

Posted by Derek

There was a story yesterday about GlaxoSmithKline taking what’s being called an unusual step to prioritize their clinical candidates. According to the Wall Street Journal, they invited officials from the national health care plans of several European countries to a presentation on the company’s pipeline and asked them which ones they’d be more likely to pay for (and what they’d need to see in the clinic to convince them to do that).

Actually, I think the unusual thing here is that they made a formal meeting out of the whole process. I believe that this sort of thing goes on already – after all, drug companies spend a lot of time trying to figure out the size of potential markets and what the eventual purchasers will be willing to pay. In Europe, those are the national health care systems, and if they’re not willing to pay, your drug will go nowhere. In the US, you’re going to want to sound out the big health insurance companies for the same kind of reality check.

And I don’t see how GSK showed these officials anything that you wouldn’t see (or haven’t seen) at an investor’s conference – otherwise, we’d have seen some Regulation FD disclosures, since the company’s stock is listed on the NYSE. This seems to have been a one-stop rundown of what’s already been disclosed about the whole pipeline, but with opinions specifically solicited along the way – and the company’s not obliged to say what those opinions were or what they’re doing in response to them. GSK got a lot more previously unavailable information out of this process than the health care officials did.

How much, though, will this help? For one thing, I suspect that the officials didn’t say much that GSK didn’t know about what everyone wants for a new drug. They want it to work better than anything that’s currently on the market, with fewer side effects, and for less money. (There, that was easy). And predicting the future doesn’t always work too well. The medical landscape could always change by the time the drugs make it up to the regulatory stage. There will also be a lot more information (good and bad) about the compounds themselves by that time, much of which could make these earlier discussions moot. “Remember that oncology drug we were developing? Well, turns out that it doesn’t work against quite as many different tumors as we were hoping, but. . .” or “Remember that CNS drug we were telling you about back in ’08? Well, turns out that it also has this little cardiovascular thing going, too, and. . .” In the end, the drugs will do what they will in the clinic, and the company will have to bring what it has, not what the regulators asked for.

And even though companies are already supposed to be doing this kind of legwork, there are still some spectacular disconnects. Sanofi-Aventis, for example, did manage to get Acomplia (rimonabant) on the market in Europe (which is more than they ever managed in the US), but they didn’t get the national health care systems to pay for it. More recently, as in "yesterday", the UK's health care system just told Glaxo itself that they're not going to pay for Tykerb/Tyverb (lapatinib), because they don't see the benefit for the price. And when we’re talking about totally mistaken ideas about market size and acceptance, how can we not mention Pfizer’s Exubera?

Comments (10) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Regulatory Affairs

July 7, 2008

Pfizer's Prospects: Just Ducky

Email This Entry

Posted by Derek

I thought I’d start out the week by opening the mailbag for a recent reply to my posts about Pfizer’s research cutbacks. Here’s a perspective that you won’t get from me, at any rate:

You never surprise me of your uncanny ability to cast good news in a negative light. Pfizer has been a bloated company following its acquisitions of Warner Lambert and Pharmacia & Upjohn. The company should have rationalized its workforce, including sales, marketing, and most especially R&D, a long time ago. So, hopefully, you are correct and there will be massive layoffs in R&D soon. Why should Pfizer spend all that money on high risk, low probability of success R&D projects? Pfizer's belated cost-cutting will make it a leaner and more focused company. All the bad news is out there. Pfizer generates over $7 billion in free cash flow annually and pays a 7.4% dividend. Projected 2012 earnings per share (without Lipitor) are $2.05. So the stock is trading today fully discounting Lipitor and any possible good news the next 5 years. Does that really make sense to you? So keep up your trash talk, so to speak. Pfizer today is money in the bank. The lower you can drive the stock price, the greater the future return. I just love folks like you who help to create great buying opportunities. Are you certain you're not buying Pfizer as you trash talk??

My response? Well, I can reply on several levels. I’m actually going to skip the outraged how-dare-you stuff about what a great thing it is that all those research people are losing their jobs, though. Let’s just take that as having been delivered, because I think a lot of good invective would just be wasted, anyway. We’ll keep this on a strictly business level, since my correspondent is nothing if not all business.

And from a business perspective, he has the beginning of a point. As many readers can attest, Pfizer’s in-house research productivity has not been good – at least, nowhere near as good as it’s had to be to sustain a company as huge as Pfizer. (There’s the problem, actually – as I’ve said before, the one thing that certainly doesn’t scale when a company gets larger is research productivity). So from my correspondent’s perspective, what do you do with the underperforming units of a company? You lop ‘em off, like pruning a shrub to get rid of unsightly branches.

Of course, one branch of a bush is pretty much like another as far as the survival of the whole plant goes, but cutting the R&D out of an R&D organization is not without risks. A Pfizer investor might be excused for forgetting that, since most of the company’s money has been made off the research of other labs, but the Lipitors do have to come from somewhere, eventually. And try as I might, I just can’t see Pfizer buying its way out of its current troubles. So, why should Pfizer spend its money on those "high risk, low probability of success R&D projects"? Because that's the only kind of R&D project there is.

Now, as to whether all the bad news is already out there, I won't speculate. But I do know that if I had a dollar for every time someone proclaimed that all the bad news was already in some company's stock, I wouldn't have to work for a living. I invite my correspondent, though, to take a look at the company's history before sitting back and trusting those EPS numbers from the past. Let's take a trip down memory lane, back to the days of 2002, when the analysts said that it was going to earn about $1.60 per share for that year, $1.84 in 2003, and $2.14 per share in 2004. Watch it go! And after that, hey, who knew. . .well, reality intervened on those forecasts, but by 2005, now, double-digit growth was on the way.

Let's take a look at the company's actual financials and stock price over that period. It isn't inspiring. Click around on that chart: if you'd bought Pfizer ten years ago, you would have been flat with the index until early 2004, but since then it's been a disaster. Now, like my correspondent, you may be able to look at this and figure that hey, what could go wrong, and that all the bad news just has to be in by now, and that those earnings forecasts will finally start working out. Or. . .

So let's file that statement away for future reference: "Pfizer today is money in the bank". That's July of 2008, folks, and if you'd like to put some of your cash down on that statement, PFE is available during normal trading hours. I'll sit this one out.

Comments (38) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

July 4, 2008

Happy Fourth of July

Email This Entry

Posted by Derek

This, at least, I have observed in forty-five years: that there are men who search for it [truth], whatever it is, wherever it may lie, patiently, honestly, with due humility, and that there are other men who battle endlessly to put it down, even though they don't know what it is. To the first class belong the scientists, the experimenters, the men of curiosity. To the second belong politicians, bishops, professors, mullahs, tin pot messiahs, frauds and exploiters of all sorts - in brief, the men of authority. . .All I find there is a vast enmity to the free functioning of the spirit of man. There may be, for all I know, some truth there, but it is truth made into whips, rolled into bitter pills. . .

I find myself out of sympathy with such men. I shall keep on challenging them until the last galoot's ashore.

- H. L. Mencken, "Off the Grand Banks", 1925

Comments (7) + TrackBacks (0) | Category: Blog Housekeeping

July 3, 2008

I Can Has Ugly Molecules?

Email This Entry

Posted by Derek

A colleague and I were talking the other day about some of the molecules that turn up when you dig through a company's internal database. This was a favorite sport of mine during slow afternoons at the Wonder Drug Factory - I would put in a query for bizarre or unlikely chemical groups and see what fell out. I was rarely disappointed - eventually I assembled a folder of the most hideous examples, which never failed to astound.

The compound collection at my current employer isn't nearly so weird, fortunately. But every drug company has large lists of compounds that aren't so attractive as leads, because they were made in the last stages of previous projects. This is a well-known problem, often referred to as a gap between "drug-likeness" and "lead-likeness". For the most part, the compounds you start a project with don't get smaller - they get bigger, as people hang more things off of them to get more potency, selectivity, or what have you. So you're better off starting as small as you feasibly can, giving you room for this to occur without taking you off into the territory of too-huge-to-ever-work. (That's one of the fundamental ideas behind the vogue for fragment-based drug discovery, for example).

"Too-huge-to-work" is a real category, as my industry readers will gladly verify. I think that the "Rule of Five" cutoffs have been sometimes applied a little too mindlessly, but there's no getting around the fact that if your latest molecule weighs 750 and has thirteen nitrogen atoms in it, the odds of it being a drug are rather slim. As my colleague put it, when you make something like that and send it in for testing, what you're saying is "I know that almost every molecule that looks like this fails. But I'm different. I feel lucky". And that's no way to run a research program. Given finite time and finite money, you're better off prospecting in chemical areas with better chances.

So what to do? We kicked around the idea of setting up some filters in the compound registration system itself - if someone tries to send in some horrible battle cruiser of a molecule, the system would make a puking noise or something and refuse to register the compound at all. There would have to be some sort of override (perhaps for a higher-level manager to authorize) for those times when you actually have evidence that the ugly molecule works, but maybe the "You Lose: Make Something Else" screen would focus attention on the properties of what's being made. Of course, if anyone ever implemented this, the arguing would begin about where to draw the line (maybe there'd be a yellow "warning zone" in between), but I think that everyone agrees that at some point a line should be drawn.
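
Just to make the idea concrete, here's a minimal sketch of what such a registration-time check might look like, written in Python with the open-source RDKit toolkit. The cutoffs and the yellow "warning zone" numbers are illustrative guesses of mine, not anyone's actual policy, and a real version would be wired into the registration software itself rather than run as a standalone script.

    from rdkit import Chem
    from rdkit.Chem import Descriptors, Lipinski

    def registration_verdict(smiles):
        """Return 'ok', 'warn', or 'reject' for a proposed compound."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return "reject"  # structure doesn't even parse

        mw = Descriptors.MolWt(mol)
        clogp = Descriptors.MolLogP(mol)
        donors = Lipinski.NumHDonors(mol)
        acceptors = Lipinski.NumHAcceptors(mol)
        nitrogens = sum(1 for atom in mol.GetAtoms() if atom.GetSymbol() == "N")

        # Hard stop: the "You Lose: Make Something Else" screen
        # (numbers taken from the battle-cruiser example above, purely for illustration)
        if mw > 750 or nitrogens >= 13:
            return "reject"

        # Yellow warning zone: two or more Rule-of-Five style violations
        violations = sum([mw > 500, clogp > 5, donors > 5, acceptors > 10])
        if violations >= 2:
            return "warn"

        return "ok"

    # Example: benzocaine, a small lead-sized ester, sails through
    print(registration_verdict("CCOC(=O)c1ccc(N)cc1"))  # -> 'ok'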

So, for my readers around the industry - do you have such a cutoff? Can you register any crazy compound that crosses your bench, or does your company's software fight back? If so, what's the feedback - beep, e-mail warning, electric shock? Inquiring minds want to know.

Comments (26) + TrackBacks (0) | Category: Life in the Drug Labs

July 2, 2008

More Pfizer Layoffs?

Email This Entry

Posted by Derek

Unfortunately, I’m getting reports of significant chemistry layoffs coming this fall at Pfizer’s Groton facility. Rumors of all sorts seem to be going around: one indication is that this is going to hit both PhD and associate chemists, as opposed to some earlier reorganizations there which mostly seemed to let lab heads go. The timing is also uncertain, but September/October seems to be the average of what I’m hearing. I assume that biology and other areas will feel the tremors, too, but I have no information about them. There's nothing on the news wires about any of this, so it's not at the official announcement stage, but people seem to be getting braced.

I’m not happy to hear about this kind of thing, but I can’t say that it’s a surprise, either. Pfizer is going to be having a rough time of it for years to come, what with the Lipitor patent expiration coming closer. And as fate would have it, the company will get to feel that one about as hard as possible, because the various things that were going to cushion the blow haven’t worked out so well.

Think about it – Celebrex was the whole driving force for the Pharmacia/Upjohn acquisition, and just look at it now. Compared to what it was supposed to be by 2008, it’s in terrible shape. Then you have the gigantic failure of torcetrapib, the CETP inhibitor that was going to extend the Lipitor franchise and make it even bigger. That was in late 2006, and the echoes have not died away even now. And then there’s the ruinous failure of Exubera, the inhaled insulin that was going to be a runaway best seller all its own. (Oh, it really was, although it’s hard to remember that - a reader sent me a 2006 analyst report (Hambrecht) which is just giddy with expectations – Pfizer’s $1.2 billion sales projection is clearly way too low, you see, and the brokerage’s own $2.5 billion might be conservative. Heck, $5 billion in sales is “very achievable” by 2010, so you’d better load up now, because the ship is sailing, the train’s leaving the station, and so on. . .ah, Wall Street.)

So, Pfizer’s buffers are exhausted, but the big beaker of fuming nitric acid is still going to unload on schedule. It’s going to be a tough place to work, and it’s going to be a tough stock to own. If you have a chance to do anything about either of those situations, I’d look into it.

Comments (41) + TrackBacks (0) | Category: Current Events

July 1, 2008

Leaving Comments: A Fix

Email This Entry

Posted by Derek

I know that a lot of people have been having trouble leaving comments here over the past few weeks, with plenty of "Too Many Comments" error messages coming up. I see from today's comment thread that there's a brute-force fix for this - deleting the cookie that this site leaves.

In Firefox, you can do that by going to Preferences, then Privacy, then Show Cookies. Find the "Corante.com" one and kill it - here's hoping that fixes the problem and that it doesn't show up again!

Comments (11) + TrackBacks (0) | Category: Blog Housekeeping

The Gates Foundation: Dissatisfied With Results?

Email This Entry

Posted by Derek

Well, since last week around here we were talking about how (and how not) to fund research, I should mention that Bill Gates is currently having some of the same discussions. He’s doing it with real money, though, and plenty of it.

The Bill and Melinda Gates Foundation definitely has that – the question has been how best to spend it. They started out by handing out money to the top academic research organizations in the field, just to prime the pump. Then a few years ago, the focus turned to a set of “Grand Challenges”, fourteen of the biggest public health problems, and the foundation began distributing grant money to fight them. But according to this article, from a fellow who’s writing a book on the topic, Gates hasn’t necessarily been pleased with the results so far:

”. . .Gates expected breakthroughs as he handed out 43 such grants in 2005. He had practically engineered a new stage in the evolution of scientific progress, assembling the best minds in science, equipped with technology of unprecedented power, and working toward starkly-defined objectives on a schedule.

But the breakthroughs are stubbornly failing to appear. More recently, a worried Gates has hedged his bets, not only against his own Grand Challenge projects but against how science has been conducted in health research for much of the last century.”

My first impulse on hearing this news is not, unfortunately, an honorable one. To illustrate: I remember a research program I worked on at the Wonder Drug Factory, one that started with a series of odd little five-membered-ring molecules. Everyone who looked them over had lots of ideas about what should be done with them, and lots of ideas about how to make them. The problem was, the latter set of ideas almost invariably failed to work.

This was a terribly frustrating situation for the chemists on the project, because we kept presenting our progress to various roomfuls of people, and the same questions kept coming up, albeit in increasingly irritated tones. “Why don’t you just. . .” We tried that. “Well, it seems like you could just. . .” It seemed like that to us, too, six months ago. “Haven’t you been able to. . .” No, that doesn’t work, either. I know it looks like it should. But it doesn’t. Progress was slow, and new people kept joining the effort to try to get things moving. They’d come in, rolling up their sleeves and heading for the fume hood, muttering “Geez, do I have to do everything myself?”, and a few weeks later you’d find them frowning at ugly NMR spectra next to flasks of brown gunk, shaking their heads and talking to themselves.

I’d gone through the same stage myself, earlier, so my feelings about the troubles of the later entrants to our wonderful project devolved into schadenfreude, which, as mentioned, is not the most honorable of emotions. I have to resist the same tendency when reading about the Gates Foundation – sitting back and saying “Hah! Told you this stuff was hard! Didn’t believe it, did you?” isn’t much help to anyone, satisfying though it might be on one level. I’m cutting Bill Gates more slack than I did Andy Grove of Intel, though, since Gates seems to have taken a longer look at the medical research field before deciding that there’s something wrong with it. I note, though, that we now have well-financed representatives of both the hardware and software industries wondering why their well-honed techniques don’t seem to produce breakthroughs when applied to health care.

Now the Gates people are trying a new tactic. The “Explorations” program, announced a few months ago, is deliberately trying to fund people outside the main track of research in its main areas of focus (infectious disease) in an effort to bring in some new thinking. I’ll let Tadataka Yamada of the Gates Foundation sum it up, from the NEJM earlier this year:

”New ideas should not have to battle so hard for oxygen. Unfortunately, they must often do so. Even if we recognize the need to embrace new thinking — because one never knows when a totally radical idea can help us tackle a problem from a completely different angle — it takes humility to let go of old concepts and familiar methods. We have seemed to lack such humility in the field of global health, where the projects related to diseases, such as HIV, malaria, and tuberculosis, that get the most funding tend to reflect consensus views, avoid controversy, and have a high probability of success, if "success" is defined as the production of a meaningful but limited increase in knowledge. As a result, we gamble that a relatively small number of ideas will solve the world's greatest global health challenges. That's not a bet we can afford to continue making for much longer.”

What’s interesting about this is that the old-fashioned funding that Yamada is talking about is well exemplified by the previous Gates Foundation grants. After last week’s discussion here about “deliverables” in grant awards, it’s worth looking back at the reaction to the 2003-2005 round of “Grand Challenges” funding:

”Researchers applying for grants had to spell out specific milestones, and they will not receive full funding unless they meet them. "We had lots of pushback from the scientific community, saying you can't have milestones," says Klausner. "We kept saying try it, try it, try it." Applicants also had to develop a "global access plan" that explained how poor countries could afford whatever they developed.

Nobel laureate David Baltimore, who won a $13.9 million award to engineer adult stem cells that produce HIV antibodies not found naturally, was one of the scientists who pushed back. "At first, I thought it was overly bureaucratic and unnecessary," said Baltimore, president of the California Institute of Technology in Pasadena. "But as a discipline, to make sure we knew what we were talking about, it turned out to be interesting. In no other grant do you so precisely lay out what you expect to happen."

I have to think, then, that in no other grant are the chances of any breakthrough result so slim. It would be interesting to know what the Gates people think, behind closed doors, of the return they’ve gotten on the first round of grant money, but perhaps the advent of the Explorations program is already comment enough. (One round of Explorations funding has already taken place, but a second round is coming up this fall. You can start your application process here).

The next question is, naturally, how well the Explorations program might work – but that’s a big enough topic for a post of its own. . .

Comments (28) + TrackBacks (0) | Category: General Scientific News | Who Discovers and Why