Corante

About this Author
College chemistry, 1983

Derek Lowe The 2002 Model

After 10 years of blogging. . .

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Monthly Archives

March 31, 2008

Writing It Down

Posted by Derek

So, what’s easier: writing a blog entry every working day, or writing one scientific publication? The blog, the blog, no doubt about it. I write quickly, and pretty much always have, but putting a paper together is still slow work.

One difficulty is the length restriction, especially for a communication. Working in all the necessary details and telling a coherent story are not always compatible goals, and doing both within four printed pages can be a real challenge. Many med-chem projects are pretty shaggy by the time it comes to publish, and there’s no way to get in all the twists and turns (nor would anyone want to read about them, in most cases).

So you have to decide how the work is going to be presented to give a readable but accurate account. The problem is, almost any project can be turned into a flowing narrative if you’re willing to throw away enough work and to lie about the rest. If you’re not going to do that (and I recommend against it!) then you have a harder job on your hands. You’re going to have to leave out something, but you’re going to have to be able to recognize what’s left.

It’s for sure that you’re not going to be able to talk about every single analog, so one good technique is to narrow down to representative compounds. That won’t always be popular with your co-authors, though. Odds are that some of the people who worked on the project will end up feeling slighted when their contributions make it onto the bottom few rows of a table, or are just mentioned in passing in the text: “Substitution with groups larger than methyl led to rapid loss of activity (data not shown), so our attention then turned to. . .”

Another difficulty is that series of analogs aren’t often made in the order that makes sense in hindsight. I think it’s acceptable to mess around with the timelines a bit in presenting the data, as long as you aren’t rearranging things that had an impact on the main flow of the SAR. That course of events you’re stuck with, and you just have to find a way to make your decisions seem reasonable. It helps if they actually were reasonable, of course. That condition does not always obtain.

As for writing style, I recommend a difficult one: the kind that you hardly see at all. Keeping someone reading along while you deliver the dry, concentrated, chewy news isn’t easy to do, but it’s a goal worth struggling for. Most papers scan as if they’ve been sprayed with a light coating of eye repellent: you slide right off of them after a paragraph or so. If you can avoid that, you’re already well out from the pack. As for extra touches, I actually enjoy seeing a bit of personality and humor come through in a scientific paper, but getting that bit right is very difficult. Getting it wrong is very easy, though, and the results are unpleasant. If you’re not sure of your touch, keep your hands off the spice rack. This isn’t the time to be Henry James (is there ever a time?), William Faulkner, or Marcel Proust. If you’re going to emulate a novelist, think Hemingway. Early Hemingway. If you want a journalistic role model, you can aim for Orwell, but that’s a high mark – he had style to burn, but managed not to call attention to it. Good luck!

Comments (7) + TrackBacks (0) | Category: The Scientific Literature

March 28, 2008

RNA Interference: Even Trickier Than You Thought

Posted by Derek

It’s been a while since I talked about RNA interference here. It’s still one of those tremendously promising therapeutic ideas, and it’s still having a tremendously hard time proving itself. Small RNA molecules can do all sorts of interesting and surprising things inside cells, but the trick is getting them there. Living systems are not inclined to let a lot of little nucleic acid sequences run around unmolested through the bloodstream.

The RNA folks can at least build on the experience (long, difficult, expensive) of the antisense DNA people, who have been trying to dose their compounds for years now and have tried out all sorts of ingenious schemes. But even if all these micro-RNAs could be dosed, would we even know what they’re going to do?

A report in the latest Nature suggests that the answer is “not at all”. This large multi-university group was looking at macular degeneration, a natural target for this sort of technology. It’s a serious disease, and it occurs in a privileged compartment of the body, the inside of the eye. You can inject your new therapy directly in there, for example (I know, it gives me the shivers, too, but it sure beats going blind). That bypasses the gut, the liver, and the bloodstream, and the humoral fluid of the eye is comparatively free of hostile enzymes. (It’s no coincidence that the antisense and aptamer people have gone after this and other eye diseases as well).

Angiogenesis is a common molecular target for macular degeneration, since uncontrolled formation of new capillaries is a proximate cause of blindness in such conditions. (That target has the added benefit of giving your therapy a possible entry into the oncology world, should you figure out how to get it to work well here). VEGF is the prototype angiogenesis target, so you’d figure that RNA interference targeting VEGF production or signaling would work as well as anything could, as a first guess.

And so it does, as this team found out. But here comes the surprise: when the researchers checked their control group, using a similar RNA that should have been ineffective, they found that it was working just fine, too – just as well as the VEGF-targeted ones, actually. Baffled, they went on to try a host of other RNAs. Reading the paper, you can just see the disbelief mounting as they tried various sequences against other angiogenic targets (success!), nonangiogenic proteins (success!?), proangiogenic ones that should make the disease worse (success??), genes for proteins that aren’t even expressed in the eye (success!), sequences against RNAs from plants and microbes that don’t even exist in humans at all (oh God, success again), totally random RNAs (success, damnit), and RNAs that shouldn’t be able to silence anything because they’ve got completely the wrong sort of sequence (oh the hell with it, success). Some of these even worked when injected i.p. (into the abdominal cavity) rather than into the eye, suggesting that this was a general mechanism that had nothing to do with the retina.

As it turns out, these things are acting through hitting a cell surface receptor, TLR3. And all you need, apparently, is a stretch of RNA that’s at least 21 units long. Doesn’t seem to matter much what the sequence is – thus all that darn success with whatever they tried. Downstream of TLR3 comes induction of gamma-interferon and IL-12, and those are what are doing the job of shutting down angiogenesis. (Off-target effects involving these have been noted before with siRNA, but now I think we’re finally figuring out why).

What does this all mean? Good news and bad news. The companies that are already dosing RNAi therapies for macular degeneration have just discovered that there's an awful lot that they don't know about what they're doing, for one thing. On the flip side, there are a lot of human cell types with TLR3 receptors on them, and a lot of angiogenic disorders that could potentially be treated, at least partially, by targeting them in this manner. That’s some good news. The bad news is that most of these receptors are present in more demanding environments than the inside of the eye, so the whole problem of turning siRNAs into drugs still looms large.

And the other bad news is that if you do figure out a way to dose these things, you may well set off TLR3 effects whether you want them or not. Immune system effects on the vasculature are not the answer to everything, but that may be one of the answers you always get. And this sort of thing makes you wonder what other surprising things systemic RNA therapies might set off. We will, in due course, no doubt find out. More here from John Timmer at Nobel Intent, who correctly tags this as a perfect example of why you want to run a lot of good control experiments. . .

Comments (4) + TrackBacks (0) | Category: Biological News | Drug Development

March 27, 2008

Start Small, Start Right

Posted by Derek

There’s an excellent paper in the most recent issue of Chemistry and Biology that illustrates some of what fragment-based drug discovery is all about. The authors (the van Aalten group at Dundee) are looking at a known inhibitor of the enzyme chitinase, a natural product called argifin. It’s an odd-looking thing – five amino acids bonded together into a ring, with one of them (an arginine) further functionalized with a urea into a sort of side-chain tail. It’s about a 27 nM inhibitor of the enzyme.

(For the non-chemists, that number is a binding affinity, a measure of what concentration of the compound is needed to shut down the enzyme. The lower, the better, other things being equal. Most drugs are down in the nanomolar range – below that are the ultra-potent picomolar and femtomolar ranges, where few compounds venture. And above that, once you get up to 1000 nanomolar, is micromolar, and then 1000 micromolar is one millimolar. By traditional med-chem standards, single-digit nanomolar = good, double-digit nanomolar = not bad, triple-digit nanomolar or low micromolar = starting point to make something better, high micromolar = ignore, and millimolar = can do better with stuff off the bottom of your shoe.)
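
(If you’d rather see that scale as code, here’s a quick Python sketch. The cutoffs are just my rough reading of the rules of thumb above, not any official standard.)

    def potency_bucket(ki_molar):
        """Map a binding constant (mol/L) onto the med-chem scale above."""
        ki_nm = ki_molar * 1e9  # molar -> nanomolar
        if ki_nm < 10:
            return "single-digit nanomolar: good"
        elif ki_nm < 100:
            return "double-digit nanomolar: not bad"
        elif ki_nm < 10_000:
            return "triple-digit nM to low micromolar: starting point"
        elif ki_nm < 1_000_000:
            return "high micromolar: ignore"
        else:
            return "millimolar: bottom-of-the-shoe territory"

    print(potency_bucket(27e-9))  # argifin's 27 nM -> "not bad"
    print(potency_bucket(5e-4))   # a 500 micromolar fragment -> "ignore"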

What the authors did was break this argifin beast up, piece by piece, measuring what that did to the chitinase affinity. And each time they were able to get an X-ray structure of the truncated versions, which turned out to be a key part of the story. Taking one amino acid out of the ring (and thus breaking it open) lowered the binding by about 200-fold – but you wouldn’t have guessed that from the X-ray structure. It looks to be fitting into the enzyme in almost exactly the same way as the parent.

And that brings up a good point about X-ray crystal structures. You can’t really tell how well something binds by looking at one. For one thing, it can be hard to see how favorable the various visible interactions might actually be. And for another, you don’t get any information at all about what the compound had to pay, energetically, to get there.

In the broken argifin case, a lot of the affinity loss can probably be put down to entropy: the molecule now has a lot more freedom of movement, which has to be overcome in order to bind in the right spot. The cyclic natural product, on the other hand, was already pretty much there. This fits in with the classic med-chem trick of tying back side chains and cyclizing structures. Often you’ll kill activity completely by doing that (because you narrowed down on the wrong shape for the final molecule), but when you hit, you hit big.

The structure was chopped down further. Losing another amino acid only hurt the activity a bit more, and losing still another one gave a dipeptide that was still only about three times less potent than the first cut-down compound. Slicing that down to a monopeptide, basically just a well-decorated arginine, sent the activity down another sixfold or so – but by now we’re up to about 80 micromolar, which most medicinal chemists would regard as the amount of activity you could get by testing the lint in your pocket.

But they went further, making just the little dimethylguanylurea that’s hanging off the far end. That thing is around 500 micromolar, a level of potency that would normally get you laughed at. But wait. . .they have the X-ray structures all along the way, and what becomes clear is that this guanylurea piece is binding to the same site on the protein, in the same manner, all the way down. So if you’re wondering if you can get an X-ray structure of some 500 micromolar dust bunny, the answer is that you sure can, if it has a defined binding site.

And the value of these various derivatives almost completely inverts if you look at them from a binding efficiency standpoint. (One common way to measure that is to take the minus log of the binding constant and divide by the molecular weight in kilodaltons). That’s a “bang for the buck” index, a test of how much affinity you’re getting for the weight of your molecule. As it turns out, argifin – 27 nanomolar though it be – isn’t that efficient a binder, because it weighs a hefty 676. The binding efficiency index comes out to just under 12, which is nothing to get revved up about. The truncated analogs, for the most part, aren’t much better, ranging from 9 to 15.

But that guanylurea piece is another story. It doesn’t bind very tightly, but it bats way above its scrawny size, with a BEI of nearly 28. That’s much more impressive. If the whole argifin molecule bound that efficiently, it would be down in the ten-to-the-minus-nineteenth range, and I don’t even know the name of that order of magnitude. If you wanted to make a more reasonably sized molecule, and you should, a compound of MW 400 would be down in the low picomolar range with a binding efficiency like that. There’s plenty of room to do better than argifin.
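
(For anyone who wants to check those numbers, here’s a minimal Python sketch of the index as defined above. The guanylurea fragment’s MW of roughly 130 is my own assumption, which is why its BEI comes out a shade under the “nearly 28” figure.)

    import math

    def bei(ki_molar, mw_daltons):
        """Binding efficiency index: -log10(Ki) divided by MW in kilodaltons."""
        return -math.log10(ki_molar) / (mw_daltons / 1000.0)

    print(round(bei(27e-9, 676), 1))  # argifin: 11.2, "just under 12"
    print(round(bei(5e-4, 130), 1))   # guanylurea fragment: ~25

    # Projection: at BEI 28, a MW 400 compound would have
    # pKi = 28 * 0.400 = 11.2, i.e. Ki around 10**-11.2 M (low picomolar);
    # argifin's full MW 676 at that efficiency implies about 10**-18.9 M.
    print(10 ** -(28 * 0.400))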

So the thing to do, clearly, is to start from the guanylurea and build out, checking the binding efficiency along the way to make sure that you’re getting the most out of your additions. And that is exactly the point of fragment-based drug discovery. You can do it this way, cutting down a larger molecule to find what parts of it are worth the most, or you can screen to find small fragments which, though not very potent in the absolute sense, bind very efficiently. Either way, you take that small, efficient piece as your anchor and work from there. And either way, some sort of structural read on your compounds (X-ray or NMR) is very useful. That’ll give you confidence that your important binding piece really is acting the same way as you go forward, and give you some clues about where to build out in the next round of analogs.

This particular story may be about as good an illustration as one could possibly find - here's hoping that there are more that can work out this way. Congratulations to van Aalten and his co-workers at Dundee and Bath for one of the best papers I've read in quite a while.

Comments (12) + TrackBacks (0) | Category: Analytical Chemistry | Drug Assays | In Silico

March 26, 2008

The Lucky Bonus Pack

Posted by Derek

I ran a reaction the other day which gave me two very similar products. That's not so uncommon, but this one really shouldn't have been able to do that. (For the chemists in the audience, these two are so similar, in fact, that the usual LC/MS conditions only showed one peak. NMR tells you different, though, and a painstaking multiple-elution TLC in some nonstandard solvent mixtures resolves the two spots).

I thought about the problem a bit, and decided that the first thing to do was to check my starting material. And there they were: two very similar starting materials, together in the same jar. Mind you, there's only one structure on the label. No wonder the stuff was so sticky. I'd received the Special Extended Edition without knowing it - odds are, the supply company sent it to me without knowing it, either, although that'll change when they get my e-mail. One of the components, anyway, seems to be the right stuff, so I suppose it could be worse.

This happens more often than it should, often enough that every working chemist has a similar story or two. And it doesn't correlate that well with the size or renown of the company you're ordering from, since everyone sources material from all over the place. Little mom-and-pop operations have sent me plenty of fluffy, flawless stuff, while Aldrich has on occasion mailed me goo. (On another occasion they mailed me a perfectly empty sealed ampoule with a label on it, but since the label didn't read "Air", I thought I had reason to complain). That doesn't mean that reputations don't vary. Even though they're now part of the same company as Aldrich and Sigma, those Swiss fanatics at Fluka do this sort of thing to you less often than their cohorts.

Not all the unopened slime you encounter is necessarily the fault of the company that shipped it. Some things just aren't stable, or at least aren't so stable in the back of an unventilated truck or sitting out in the sun on a loading dock. And the longer it is after an order's been received, the more the problem is likely to be with the receiver. A look at the condition of the vials in a drug company's compound repository will convince anyone that the kinds of molecules we like may not have indefinite shelf lives.

In this case, it's going to be easier to clean up the starting material and run the reaction again than it would be to clean up my dueling products. Easier yet would be to get a bottle of the right stuff from the supplier, but this one isn't exactly a high-volume compound, and I suspect that it's all the same nasty batch on their shelves. Worth a try, though. And thus does science stagger on.

Comments (19) + TrackBacks (0) | Category: Life in the Drug Labs

March 25, 2008

Getting To Lyrica

Posted by Derek

There’s an interesting article in Angewandte Chemie by Richard Silverman of Northwestern, on the discovery of Lyrica (pregabalin). It’s a rare example of a compound that came right out of academia to become a drug, but the rest of its story is both unusual and (in an odd way) typical.

The drug is a very close analog of the neurotransmitter GABA. Silverman’s lab made a series of compounds in the 1980s to try to inhibit the aminotransferase enzyme (GABA-AT) that breaks GABA down in the brain, as a means of increasing its levels to prevent epileptic seizures. They gradually realized, though, that their compounds were also hitting another enzyme, glutamic acid decarboxylase (GAD), which actually synthesizes GABA. Shutting down the neurotransmitter’s breakdown was a good idea, but shutting down its production at the same time clearly wasn’t going to work out.

So in 1988 a visiting Polish post-doc (Ryszard Andruszkiewicz) made a series of 3-alkyl GABA and glutamate analogs as another crack at a selective compound. None of them were particularly good inhibitors – in fact, most of them were substrates for GABA-AT, although not very good ones. But (most weirdly) they actually turned out to activate GAD, which would also work just fine to raise GABA levels. Northwestern shopped the compounds around because of this profile, and Parke-Davis took them up on it. One enantiomer of the 3-isobutyl GABA analog turned out to be a star performer in the company’s rodent assay for seizure prevention, and attempts to find an even better compound were fruitless. The next few years were spent on toxicity testing and optimizing the synthetic route.

The IND paperwork to go into humans was filed in 1995, and clinical trials continued until 2003. The FDA approved the drug in 2004, and no, that’s not an unusual timeline for drug development, especially for a CNS compound. And there you’d think the story ends – basic science from the university is translated into a big-selling drug, with the unusual feature of an actual compound from the academic labs going all the way. Since I’ve spent a good amount of time here claiming that Big Pharma doesn’t just rip off NIH-funded research, you’d think that this would be a good counterexample.

But, as Silverman makes clear, there’s a lot more to the story. As it turned out, the drug’s efficacy had nothing to do with its GABA-AT substrate behavior. But further investigation showed that it’s not even correlated with its activation of the other enzyme, GAD. None of the reasons behind the compound’s sale to Parke-Davis held up, except the biggest one: it worked well in the company’s animal models.

The biologists at P-D eventually figured out what was going on, up to a point. The compound also binds to a particular site on voltage-gated calcium channels. That turns out to block the release of glutamate, whose actions would be opposed to those of GABA. So they ended up in the same place (potentiation of GABA effects) but through a mechanism that no one suspected until after the compound had been recommended for human trials! There were more lucky surprises: Lyrica has excellent blood levels and penetration into the brain, while none of the other analogs came close. As it happened, and as the Parke-Davis folks figured out, the compound was taken up by active transport into the brain (via the System L transporter), which also helps account for its activity.

And Silverman goes on to show that while the compound was originally designed as a GABA analog, it doesn’t even perform that function. It has no binding to any GABA receptor, and doesn’t affect GABA levels in any way. As far as I can see, a really thorough, careful pharmacological analysis before going into animals would probably have killed the compound before it was even tested, which goes to show how easy it is to overthink a black-box area like CNS.

So on one level, this is indeed an academic compound that went to industry and became a drug. But looked at from another perspective, it was an extremely lucky shot indeed, for several unrelated reasons, and the underlying biology was only worked out once the compound went into industrial development. And from any angle, it’s an object lesson in how little we know, and how many surprises are waiting for us. (Silverman himself, among other things, is still in there pitching, looking for a good inhibitor of GABA aminotransferase. One such drug, a compound going back to 1977 called vigabatrin, has made it to market for epilepsy in a few countries, but has never been approved in the US because of retinal toxicity).

Comments (24) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development | Pharmacokinetics | The Central Nervous System

March 24, 2008

That's Never Gonna Work

Posted by Derek

A colleague and I were talking the other day about the (long) list of drugs that have been left for dead at some point during their development. There are some famous cases – Lipitor, for example, which wasn’t thought by many at Warner-Lambert to have a business case worth even taking into the clinic. But these things are all over the place.

One that I know about was Claritin (loratadine). Schering-Plough worked on nonsedating antihistamines for a while, without too much success, and the whole program was eventually killed. The head of research at the time stated flatly: “There are no nonsedating antihistamines”. Of course, when the first one (Seldane) came on the market, that made everyone rethink a bit. In the interim, one of the chemists had continued making compounds, despite several (increasingly testy) warnings to stop.

As it turned out, he (Frank Villani) and one of his associates (Charlie Magatti) had made loratadine itself, the nonsedating antihistamine which helped to pay everyone’s salary at Schering-Plough through the 1990s. But by the time that was worked out, Villani himself had been eased out the door (or shown it rather less gently, depending on who you talk to), in good part due to his continued work on the compounds. That head of research, to his credit, actually referred ruefully later on to his own “no nonsedating antihistamines” comment – there are plenty of other people who would have just Never Said Such a Thing At All in that position.

You can find a lot of other examples, going back a long way. Many of these are medical and marketing arguments: ACE inhibitors weren’t necessarily going to be of that much use for hypertension (how many people had high blood pressure because of problems with their renin-angiotensin system anyway?) And the H+/K+ ATPase compounds weren’t going to be of much use for acid reflux, because the H2 antagonists had the market covered (Prilosec and its progeny managed to carve out a little market share for themselves, though). The Lipitor-won’t-make-any-money mistake falls squarely into this category.

My theory is that it’s always possible to find a list of plausible reasons why a given project, or a given drug candidate, won’t work. Finding those things is (comparatively speaking) the easy part. The hard part is working out which of those things you’re wrong about, because you’re sure to be wrong about some of them. (Of course, thinking about this stuff makes you start to wonder about the drugs that never quite made it, but would have done well if they had. Most experienced development people have a list of might-have-beens that they still wonder about, but some of those would surely have also blown up disastrously even later in the process, taking even more money with them).

Further that’ll-never-work examples are welcome in the comments. I know there must be plenty of them out there. . .

Comments (24) + TrackBacks (0) | Category: Drug Development | Drug Industry History

March 21, 2008

Pfizer Loses, So Far

Posted by Derek

I wanted to follow up on the post the other day about Pfizer's attempts to open up the editorial files in various scientific journals. The decision on the New England Journal of Medicine motion hasn't come down yet, but two others have.

And Pfizer's lost both of them. The district court in Chicago rejected the company's arguments to compel JAMA and the Archives of Internal Medicine to open up their records on papers concerning Celebrex or Bextra. The ruling held (correctly, in my opinion) that the possible value of these documents to Pfizer's case was more than outweighed by the harm that would be done to the journals by allowing access.

And as this story at the Science web site mentions, the NEJM case may well be about to go the same way. According to the journal's attorneys, Pfizer narrowed its request to just the peer-review comments returned to the authors of the manuscripts. That seems, at least to me, to weaken the argument that these documents are of such great value to their legal case, while leaving the problem of breaching confidential peer review.

At least I think it does - I assume that Pfizer wants names attached to these things, unless they can use them in their case without attribution. Even so, that still doesn't sound like something that'll make people enthusiastic about reviewing such papers - the prospect of having their comments read off in open court. No, I think that argument that sank Pfizer's requests in Illinois still obtains, and that the Massachusetts court will rule the same way.

So if this whole issue goes away, we can relax until the next legal inspiration hits. In the interim, I still think that Pfizer should at least be vaguely ashamed of having taken this road. A confidential poll of the company's own scientists would surely find that a solid majority of them would be opposed to the whole idea of legal discovery of peer review documents. (I say that because I've hardly talked to a single chemist or biologist who didn't think the same way). That said, there aren't many companies that size whose business decisions would all survive after polls among the scientific staff. . .

Comments (9) + TrackBacks (0) | Category: The Scientific Literature | Why Everyone Loves Us

March 20, 2008

Anonymity?

Posted by Derek

I see that “Kyle Finchsigmate” over at The Chem Blog is having some problems maintaining his pseudonymity at his own institution:

“I’m still befuddled why people walk up to me in the hall and talk to me about it. It’s more irritating than you can imagine. I feel like people treat me differently when they find out. . .

“It has also become a liability and I’m not in the mood to juggle liabilities. Faculty and students around here have too much time on their hands to deeply contemplate the idiotic musings of a graduate student and it has handicapped me considerably. . .”

I’m not surprised. He’s given out enough details over the course of his blog for someone at his own school to figure out who he is without too much trouble, and I suspect that his distinctive habits of speech carry over into daily life as well. I enjoy some of his posts, but others (as I said about Dylan Stiles's blog) just serve to confirm for me that I am not, in fact, 25 years old.

I very briefly considered going anonymous back in 2002 when I started blogging, but realized that anyone who really wanted to would be able to do the same to me eventually. My writing isn’t as full of copulating inanimate objects as Kyle’s, but it’s also my own, and it’s also recognizable. (And if it’s not your vocabulary that’ll give you away, then it’s your opinions and your outlook).

I also figured that, one way or another, I’d like to be able to take credit for what I wrote. I lost the chance for some anonymous satire and griping by going the public route, but that’s just the sort of thing that would have caused even more trouble if (when) it was eventually traced back to me. So public disclosure it was. It’s worked out well, and I’ve never regretted it.

But I’m very glad that there were no blogging opportunities when I was a grad student. I had an awful lot to get off my chest about my grad school experience, and the opportunity to do it would have been hard to pass up. Sorrow would have been the only possible result. Actually, I’m just glad that there was no Web, period, when I was in grad school, since there’s no telling how long it would have taken me to get out of there if I’d had that distraction constantly available.

So a word of warning for those of you thinking of starting a pseudonymous site: you’re heading toward a contradiction. If you’re doing so because you’re going to say things that you can’t say under your own name, you raise the chances considerably of eventually seeing them attached to your own name anyway. And since the internet, for all practical purposes, Is Forever, your opinions and actions will follow you around whether you want them to or not.

Comments (16) + TrackBacks (0) | Category: Blog Housekeeping

March 19, 2008

Now Your Liver Doesn't Have to Make It For You

Posted by Derek

One of the less appealing ways that companies have tried to fill their drug portfolios over the years has been to look through their current drugs in search of one with a main active metabolite. That altered structure then becomes a clinical candidate for the next generation. I’ve said bad things before about Clarinex (desloratadine), son of Claritin (loratadine), the most famous example of this practice. That “des” prefix tells you that the newer drug is just the older one minus some part of its structure, in this case, minus a carbamate group that the liver clips off anyway. Even non-chemists can see the change, looking at the top parts of the structures in those Wikipedia articles.

Now comes Pristiq (desvenlafaxine), spawn of Effexor (you guessed it, venlafaxine). This one's also a simple metabolic change: an OH in place of an O-methyl. Wyeth has done very well with Effexor over the last few years, and they’re not ready to give up on that market share once it goes off patent this year. The timing of this new drug is, as they say, no coincidence. The Carlat Psychiatry Blog, not a place to go to find lots of warm feelings for the drug industry, has its “Top Five Reasons to Forget About Pristiq”. From the way things look, I have to agree with them; at the moment it’s hard to see much need for the stuff.

But there’s a good point made there by an investigator on the clinical trials, Dr. Michael Liebowitz of Columbia. He, quite reasonably, is waiting for the market to settle whether the drug is of any use or not: “If it is useful, then it will make money for the company, and if it is not, it won’t.” Update: there's more from Liebowitz on this topic, and on follow-on CNS drugs in general.

Exactly. I’m very much in favor of letting drugs stand or fall on their merits, if any. My first guess is that Pristiq is not much of an addition to the pharmacopeia – and if it isn’t, Wyeth deserves to lose the money they’ve put into it, since that, frankly, would have been the presumption from very early in the drug’s development. They took this drug forward at their own risk, and should profit or lose by it accordingly.

One thing I’ll say for the company, though: they actually seem to be running a head-to-head study between the two drugs. That’s good to see, and it’ll be quite interesting to see what case Wyeth can make, if any, after the data come in. At least they’re not just banging on tin cans and shouting “Now with the great taste of fish!” or something. Interestingly, as a comment on the Carlat blog points out, the company has already published data on one unimpressive trial with Pristiq, and I have to thank them for doing that, too. (If there was ever a head-to-head efficacy study run between Claritin and Clarinex, I definitely missed it – I’m willing to be corrected, of course, but I’m pretty sure that there never was one.)

So one-and-a-half cheers for Wyeth. I wish, in most cases, that companies would avoid the metabolite-drug idea. Alternatively, I wish that everyone’s drug pipeline was well stocked enough that such follow-ups didn’t look financially appealing. But if you’re going to have them, taking an honest look at their benefits is the only way to go.

Comments (15) + TrackBacks (0) | Category: "Me Too" Drugs | Drug Development | The Central Nervous System | Why Everyone Loves Us

March 18, 2008

A Solution, Courtesy of the MIT Faculty

Posted by Derek

Do drug discovery and drug marketing belong in the same company or not? That question’s been asked in several forms, but two MIT professors are taking it about as far as it can go. Stan Finkelstein and Peter Temin have a book coming out (“Reasonable Rx: Solving the Drug Price Crisis”) which proposes decoupling the two by force.

By analogy to the way the electrical power industry was divided into generation and distribution sectors, they propose splitting up the pharmaceutical business into drug discovery firms and drug marketing firms. But wait, there’s more: they also would like to have an “independent, public, non-profit Drug Development Corporation” formed to act as an intermediary between the two:

“It is a two-level program in which scientists and other experts would recommend to decision-makers which kinds of drugs to fund the most. This would insulate development decisions from the political winds,” (Finkelstein) said.

The MIT press release also talks up the other putative benefits of this plan, such as how it would “insulate drug development from the blockbuster mentality, which drives companies to invest in discovering a billion-dollar drug to offset their costs”. There’s a lot to talk about in this idea, but here are some of my first impressions:

1. The electric power analogy is probably specious. Generating electricity is, for the most part, a sure thing. If you build a big coal-fired generating plant, which we most certainly know how to do, it will generate electricity for you. And its output will be proportional to how fast the turbines spin. Research is most profoundly different, as many executives from other industries have found to their sorrow. You can turn the crank like crazy and have hardly anything come out the other end at all – ask Pfizer – and that’s because we do not have a very clear idea of how to discover drugs.

Another problem is that electricity is fungible. The electric power coming from one plant is exactly the same as that coming from another, and can be pooled and distributed in exactly the same way. Every drug, however, is different. The electric power industry would be rather changed in appearance if some kilowatts were ten times as profitable as the others, but only for a few years after the generating plant came on line, or if particular kilowatts were only of benefit to certain homes or businesses and had to be routed there specifically.

2. Where are these experts, exactly? I have an instinctive distrust of plans that call for a board of dispassionate technocrats to step in and do things that the market is supposedly doing by itself. It’s not that such things absolutely can’t work, but my default belief is that they won’t work as well as their planners hope. Finkelstein and Temin’s “DDC” proposal is just the sort of thing I worry about. I can see establishing something to make sure that less immediately profitable diseases get R&D directed to them, but running the whole industry like an NIH grant review board sounds like a recipe for disaster.

3. To some extent, the industry is already divided in the manner proposed. But it's not done through review boards, it's done through business dealings. Many small firms don't have the resources to develop their own drug candidates, so they shop them to larger firms who can handle the clinical, regulatory, and marketing aspects of the process. This goes on all the time. It's been proposed (many times) that one or more large companies might shut their own research down completely and serve as a clearinghouse for the smaller ones in just this way, but no one has been willing to take the plunge. My guess is that there aren't enough good ideas out there for sale to keep a company going without having some of its own research in the game; I feel sure that the numbers have been run on this idea more than once.

Of course, these deals are made on the basis of who will make money, rather than how much society will benefit. But you'd be surprised at how often those two can overlap.

4. Where do the costs go? I suppose I'll have to read the book to get the details, but I'm not sure how money is supposed to be saved here. The cost of developing drugs doesn't look like it'll be changed much, since Temin and Finkelstein aren't coming in with any insights into human biochemistry or any new ways for us to predict efficacy or side effects. Profits, however, would surely be reduced: the DDC that they propose would seem to exist to recommend that less profitable drugs be developed, for the good of society, rather than the ones that companies believe they can make the most money from.

I note that the press release makes much of climate change and globalization, probably because in many circles these days you can't be taken seriously unless you mention those somewhere. This is done in the context of tropical diseases possibly making inroads into the US and other industrialized countries. But if that were to happen, research on these diseases would become much more profitable - which I realize is a crude way of looking at it, but the market doesn't have to be pretty to work. And I think the process would be slow enough to fit the timelines for drug discovery as it's practiced today - an example would be the burst of work on avian influenza in the last few years. A sudden epidemic would be bad news indeed, and might well catch the industry flat-footed, but that's going to be hard to avoid under any drug development regime.

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Prices | Why Everyone Loves Us

March 17, 2008

You Get What You Pay For?

Posted by Derek

I'm a bit under the weather today, so this one will be short. Since we were talking about CNS drugs and clinical trials the other day, I thought I'd mention this article from Neuropsychopharmacology.

The authors compare reported trials of first- and second-generation antipsychotics, looking to see if potentially biasing factors have skewed the results. One (perhaps surprising) finding is that the authors couldn't confirm that the newer drugs owe their apparently better performance to causing fewer extrapyramidal side effects (those are the muscle and coordination problems seen with many drugs in this class). While they may well show fewer EPS problems, that doesn't seem to be related to their efficacy.

Something of a relief is that the efficacy of the various drugs didn't seem to be related to whether or not the drug industry sponsored the trials involved. Given the publication bias of submitting favorable results (and given the obvious commercial interests involved), that's perhaps surprising. But it's welcome data to bring up the next time someone e-mails me about the eeevil Pharma companies and their bought-and-paid-for studies. I don't get a steady stream of that stuff, fortunately, but it still shows up often enough.

I still keep an occasional eye on the antipsychotic drugs, since that was the first therapeutic area I ever worked in when I joined the industry. The project came to a bad end, which was probably a good thing for my professional development. We took the drug into Phase I, gave substantial doses to normal volunteers, and rejoiced when it did nothing to them whatsoever. Then the compound went into Phase II and into real schizophrenics, and it did nothing whatsoever to them either, sad to say. And so it goes in CNS drug development. I don't think that study was ever published; if it had been it would have presumably made the correlation between industry sponsorship and efficacy even less likely. . .

Comments (3) + TrackBacks (0) | Category: Clinical Trials | The Central Nervous System

March 14, 2008

Pen and Paper

Posted by Derek

Registering some new compounds for testing, as I’ve been doing recently, has me thinking about how that was done when I started at my first company. This was in the fall of 1989, so while it’s not exactly the Ancient Old Days, it’s not last week, either. (There are plenty of readers here who go back further). But as far as the technology involved, it looked a lot closer to 1950 than it does to today.

For one thing, I saw the tail end of the Bare Desk Era: we didn’t have computers on our desks - at least, most of us didn’t. I found that a bit strange when I joined – not outrageous, as it would have been just two or three years later, but a little disappointing. Some scientists at the PhD level shared computers, but I started out not even doing that. In that company, in those days, those machines were Macs. (After a long PC interregnum, I’m working in a Mac environment again these days, which is fine by me). I didn’t even have a shared computer at first; when I finally got a part of one, it was a Mac IIcx, which these days hardly seems like something you could even use to archive your tuna salad recipes. Of course, you could wander around at that point and still see Mac SEs in use out in the biology labs, so everything was (is) relative. I thought the IIcx was a fine machine; even half of one was a lot better than a bare desk.

The lack of computers was official policy. The way I heard it put was that management wanted us in front of our hoods, not in front of our screens. Had they only known about web surfing, their fears would have been confirmed but good. They'd have needed a fortune teller, though, since there was no web to waste any time on in 1989. (I remember using Telnet from my home machine in late 1991 or early 1992 to go look at this hypertext thingie at CERN that I’d read about, and I distinctly remember the odd sensation when the welcome screen scrolled up, as if I’d suddenly traveled to Geneva).

No computers meant no e-mail, of course. That came along within a couple of years, but I got a similar brief exposure to the pre-electronic workplace, where those office mail slots down the hall were where you got printed notices of the meetings you needed to attend. Papers you needed to read or documents you needed to have came in those brown envelopes with the string closures, one of which now shows up in my current mail slot every three weeks or so. And no computers meant no online registration of new compounds. That you did with a paper form.

And not with just any form. This one had multicolored layers, and was made out of that pressure-sensitive paper with the odd feel to it. You pressed hard as you drew your structure with a ballpoint pen in the box provided – the yellow copy at the bottom of the stack was for your files, and you wanted to be able to read the thing if there was a problem with the registration. Below was an area with multiple check boxes for the different assays. That was a bit out of date even when I got there – the company had printed up piles of these things with all the assays that they typically ran, but as cloned receptors and the like became available, the assays were beginning to change faster than paper forms could keep up.

Then you took your forms and the corresponding vials and walked them over a couple of buildings to turn them in. In a few days, you’d get a printout of your compound by interoffice mail, with its structure now re-entered into some sort of mainframe database (probably with one of those Calcomp or Summagraphics drawing tablets). My first compound had a registration number in the high thirty-thousands; this in a company that had been around since the Second World War. By the time I left, eight years later, the registration numbers were over twice that figure and climbing fast, and that didn’t count the separate libraries that had been purchased along the way.

The project I was on generated a lot of data, but there was no central place for all of it. The people who ran the assays rated desktop computers of their own, and they kept the numbers there, in whatever format suited them. One biologist retired on us, and when we needed his assay data a few years later, it turned out that no one could put their hands on his files. Everyone, it seemed, had figured that someone else was taking care of that. In the end, a note went out for everyone to root out their old meeting handouts from 1990, since those had his presentations of the assay numbers – those would have to do until we could get the compounds re-run. Even at the time, it occurred to me that this was no way to handle data.

Comments (12) + TrackBacks (0) | Category: Drug Industry History

March 13, 2008

Pfizer vs. the NEJM: A Legal Showdown

Posted by Derek

Today (March 13) at 3 PM EST, there's a hearing scheduled on a legal motion that could change the way scientific results are published in this country. Pfizer is being sued over injuries that plaintiffs believe came from their use of Celebrex, one of the few remaining Cox-2 inhibitor drugs on the market. (I saw a Celebrex tv ad the other day, a surreal thing which was basically a lengthy recitation of FDA-mandated side effect language accompanied by jazzy graphics). Everyone with a Cox-2 compound is being sued from every direction, as a matter of course. The company is, naturally, casting around for any weapon that comes to hand for its defense, as did Merck when that same sky began to come down on them.

But Pfizer’s lawyers (DLA Piper LLP of Boston) are apparently (your choice, multiple answers permitted) more aggressive, more unscrupulous, or more clueless than Merck’s. Among the points at issue are several papers from the New England Journal of Medicine. According to the motion, which I paid to download from PACER, two of the particularly contentious ones are this one on complications after cardiac surgery and this one on cardiac risk during a colon cancer trial. So Pfizer has served the journal’s editors with a series of subpoenas. They’re seeking to open the files on these manuscripts – reviewer comments, reviewer names, editorial correspondence, rejected submissions, the lot. What are they hoping to find? Oh, who knows – whatever’s there: “Scientific journals such as NEJM may have received manuscripts that contain exonerating data for Celebrex and Bextra which would be relevant for Pfizer’s causation defense,” say the lawyers. The journal refused to comply, so Pfizer has now filed a motion in district court in Massachusetts to compel them to open up.

What's particularly interesting is that the journal has, to some extent, already done so. According to Pfizer's "Motion to Compel", the editors "produced a sampling of forms identifying the names of manuscript authors and their financial disclosures, correspondence between NEJM editors and authors regarding suggested editorial changes and acceptance and rejection letters". The motion goes on to say, though, that the editors had the nerve to ignore the broader fishing expedition, only releasing documents for authors specifically named in the subpoenas, not "any and all" documents related to Celebrex or Bextra. They also withheld several documents under the umbrella of peer review and internal editorial processes. Thus, the request to open up the whole thing.

I’ve never heard of this maneuver before. Staff members of the NEJM gave depositions in the early phases of the Merck litigation, since the journal was in the middle of the Vioxx fighting. (They’d “expressed concern” several times about the studies that had appeared in their own pages and passed through their own review process). But even then, I don’t think that Merck wanted to open up the editorial files, and you’d think that if anyone had something to gain by it, they would.

Pfizer’s motion seems to me more like a SLAPP, combined with standard fishing expedition tactics. Their legal team doesn’t seem to think that any of this will be a problem, at least as far as you can tell from their public statements. They say in their motion that they don’t see any harm coming to the NEJM if they comply – heavens, why not? Reviewers will just line up to look over clinical trial publications if they think that their confidentiality can be breached in case of a lawsuit, won’t they? And the rest of the scientific publishing world could look for the same treatment, any time someone published data that might be relevant to someone’s court case, somewhere. Oh, joy.

Pfizer’s motion states that “the public has no interest in protecting the editorial process of a scientific journal.” Now, it’s not like the peer review process is a sacred trust, but it’s the best we’ve been able to come up with so far. It reminds me of Churchill’s comment about democracy being the worst form of government until you look at the alternatives. I realize that it’s the place of trial lawyers and defense teams to scuffle around beating each other with whatever they can pick up, but I really don’t think that they should be allowed to break this particular piece of furniture.

And I can’t see how the current review process won’t get broken if Pfizer’s motion is granted. The whole issue is whether the journal's editors can claim privilege - if so, they don't have to release, and if not, they most certainly do. This can't help but set a precedent, one way or another. If there's no privilege involved in the editorial process, a lot of qualified and competent reviewers will start turning down any manuscript that might someday be involved in legal action. (Which, in the medical field, might be most of them). The public actually does have an interest in seeing that there is a feasible editorial process for scientific journals in general, and I hope that the judge rules accordingly.

In the meantime, for all my friends at Pfizer and for all the other scientists there with integrity and good sense: my condolences. Your company isn’t doing you any favors this week.

(One of the first mentions of all this was on the Wall Street Journal’s Health Blog. The comments that attach to it are quite interesting, dividing between the hands-off-peer-review crowd and a bunch of people who want to see the NEJM taken down a few pegs. I can sympathize with that impulse, but there has to be a better way to do it than this. And there’s more commentary from Donald Kennedy, editor of Science, here – you can pretty much guess what he thinks about this great idea.)

Comments (18) + TrackBacks (0) | Category: Cardiovascular Disease | The Scientific Literature | Toxicology | Why Everyone Loves Us

March 12, 2008

Taranabant in Trouble?

Posted by Derek

Well, I wish I hadn’t been right about this one. Last month I spent some time expressing doubts about Merck’s new obesity drug candidate taranabant, a cannabinoid-1 ligand similar to Sanofi-Aventis’s failed Acomplia (rimonabant). S-A ran into a number of central nervous system side effects in the clinic, and although they’ve gotten the drug approved in a few markets, it’s not selling well. US approval, now long delayed, looks extremely unlikely.

I couldn’t see why Merck wouldn’t run into the same sort of trouble. If a report from a Wall St. analyst (Aileen Salares of Leerink Swann) is correct, they have. Merck’s presenting on the compound at the next American College of Cardiology meeting (at the end of this month in Chicago), and information from the talk has apparently leaked out in violation of the ACC's embargo. There appears to be some difficulty both on the efficacy and side effect fronts – bad news all around.

The company was aiming for a 5% weight loss, but only reached that at the highest dose (4 mg). The report is that CNS side effects were prominent at this level, twice the rate of the placebo group. The next lower dose, 2 mg, missed the efficacy endpoint and still seems to have shown CNS effects. According to Salares, nearly twice the number of patients in the drug treatment group dropped out of the trial as compared to placebo, citing neurological effects which included thoughts of suicide.

While there’s no confirmation from Merck on these figures, they’re disturbingly plausible, because that’s just the profile that got rimonabant into trouble. If this holds up, I think we can say that CB-1 ligands as a CNS therapeutic class are dead, at least until we understand a lot more about their role in the brain. Two drugs with different structures and different pharmacological profiles have now run into the same suite of unacceptable side effects, and the main thing they have in common is CB-1 receptor occupancy. There’s always the possibility that a CB-1 antagonist (or inverse agonist) might have a use out in the periphery – they could have immunomodulatory effects – but anyone who tries this out would be well advised to do it with a compound that doesn’t cross the blood-brain barrier.

And as for taranabant, if the data are as reported I don’t see how Merck can get this compound through the FDA. Even if they did, by some weird accident, I don’t see why they’d pull the pin on such a potential liability grenade. Can you imagine what the labeling would have to look like in order to try (in vain, most likely) to insulate the company from lawsuits? That makes a person wonder how on earth the company could have been talking about submitting it for approval later this year, which is what they were doing just recently. They must have had these numbers when they made that statement – wouldn’t you think? And they must have immediately realized that this would be trouble – you’d think. If that Leerink Swann report is correct, the company’s recent statements are just bizarre.

Comments (32) + TrackBacks (0) | Category: Clinical Trials | Diabetes and Obesity | The Central Nervous System | Toxicology

March 10, 2008

Fill Out Your Pharma Brackets

Posted by Derek

A reader called my attention to this alarming but weirdly fascinating graphic over at the Wall Street Journal's Health Blog. It's a March tournament bracket for the drug industry, except that here the winning team takes over the loser. Of course, the thing that makes it so spooky is that all the second-round matchups they show are fairly plausible (and have, in fact, all been rumored at one time or another).

The whole thing is prompted by some investment banks calling for Pfizer to do some big deal to shore up their numbers, which is just the sort of thing you'd expect a bunch of investment bankers to come up with. Business is slow these days, you know, and a big ol' deal would be just what the doctor ordered.

If you work through the whole thing, you end up with. . .well, you end up with Pfizer most of the time. It's like one of those pool sheets where you find yourself putting money on a team you don't really care for. The comments to the post are worth reading, too - my favorite line might be "Problem with Pfizer is that they haven’t the foggiest idea about what they are doing. . ."

Comments (17) + TrackBacks (0) | Category:

Hits, Misses, and Some More Misses

Posted by Derek

There’s an article in the latest Nature Reviews Drug Discovery on recent drug attrition rates that caught my eye. The authors are looking over 2006-2007 trials and approvals, comparing the biotech industry with traditional pharma. ("Biotech" is defined as a company that's included in the American Stock Exchange's biotech index, the NASDAQ's, or both.) In that period, the biotechs scored 47 FDA approvals (45% of the total approvals), but had 68 Phase III failures, which is 74% of all the Phase III failures in the period. Pharma companies had only 5 Phase III failures during that stretch – the other 18 were biotech/pharma joint ventures, and those had a corresponding 16 approvals.

That’s food for thought, all right. The authors make much of the comparatively higher success rate for the biotech/pharma alliance compounds versus the biotechs that went it alone. I have to say, though, that the first explanation that came to my mind was one that they mention, but refer to as “cynical”: that the products which got partnered were disproportionately drawn from the list of those more likely to succeed in the first place.

But is “higher success rate for alliances” really the way to look at the data? Coming at the figures from another direction, I’d argue that “lower success rate for anything labeled biotech” would be a better fit. After all, the FDA approval/Phase III failure numbers are 47/68 for biotech, and 16/18 for biotech/pharma codevelopment, and I’d argue that those ratios are a lot closer to each other than either one is to the ratio for pure pharmaceutical companies, which was 36/5. Look at it this way: if the biotech-alone success rate was as good as the alliance one, you’d expect maybe 53 failures for those 47 successes instead of the 68 that really took place. But if biotech had the same success rate as pharma alone, those 47 winners would have been accompanied by only about 7 failures.
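
For anyone who wants to check that arithmetic, here’s a quick sketch in plain Python – the only inputs are the approval and failure counts quoted above, nothing more:

```python
# Approvals and Phase III failures, 2006-2007, as quoted above.
groups = {
    "biotech alone":  (47, 68),
    "biotech/pharma": (16, 18),
    "pharma alone":   (36, 5),
}

for name, (approvals, failures) in groups.items():
    rate = approvals / (approvals + failures)
    print(f"{name}: {rate:.0%} Phase III success rate")

# What failure count would 47 biotech approvals imply at the other
# groups' failure-to-success ratios?
print(round(47 * 18 / 16))  # at the alliance ratio: ~53 failures
print(round(47 * 5 / 36))   # at the pharma-alone ratio: ~7 failures
```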

Cynics with a different orientation might wonder if the higher failure rate comes from a higher number of attempts on innovative drugs in biotech, as opposed to follow-ups and me-toos. But looking at another table in the same paper, where the authors split such compounds out, the me-too data in the pharma industry shows 15 FDA approvals versus 1 Phase III failure. The corresponding biotech figures show 20 approvals and 17 failures, so even the follow-on drugs have a harder time of it. (In case you're wondering, the figures from the opposite end of the spectrum, the new compound/new indication class, are 17 approvals versus 4 failures for pharma, as opposed to a toe-curling 9 approvals and 42 failures for biotech). Breaking down the numbers in another way, biotech companies had 37 out of 115 compounds in the me-too class (32%), while pharma had 16 out of 41 (39%), which isn't that big a difference.

This sort of thing is particularly interesting for someone of my age or older, because it brings back memories of the 1980s and the first big biotech boom, back when Genentech and Biogen went public and Cetus was still a going concern. The pitch back then was that biotech products were actually going to have a higher success rate, because they were, after all, mostly proteins that were already in use by the body, right? The definition of "biotech" has changed a lot since then, though - if you look at those companies in the two indices linked above, you'll notice that many of them don't work on biological products at all, but would be better classified as "small pharma". But I'm not sure if the general public appreciates that distinction. . .

Comments (29) + TrackBacks (0) | Category: Business and Markets | Clinical Trials | Drug Industry History

March 7, 2008

Dissolve Your Troubles Away

Posted by Derek

Hang around any drug discovery organization and you’ll hear complaints about how the drug candidates don’t dissolve well. The people who test the compounds on cells and proteins complain a bit about this, and the ones who test on mice and rats complain even more. Traditionally, the problem eventually lands on the lab benches of the people who work out formulations, who complain that by the time it gets to them there’s only so much that can be done. So over the years, it’s become more of a concern for the chemists who make the things in the first place, as I guess it should.

Solubility isn’t the single most important factor in making drug candidates, but you can’t ignore it, either. Having a drug that dissolves well frees you up during development. Whenever you get low or variable blood levels while testing a new compound in animals, you always wonder if the compound was dissolving in the gut properly. If the answer is already known to be “Yes”, then you can concentrate on the other potential problems. (That said, solubility doesn’t correlate with good blood levels as well as you might imagine, because of those other factors. Awful solubility correlates pretty well with awful blood levels, though).

There are other virtues: a soluble compound is also a lot easier to dose i.v., which is a valuable stage in figuring out how it’s being distributed in whole animals. And getting into the clinic is hard enough without having to worry about how you’re going to dose the first human volunteers, and whether a temporary fix for the problem (a “service formulation”) will provide relevant data or hold up at all as you go on into Phase II. There are, to be sure, some valuable drugs with absolutely horrible solubility problems (taxol comes immediately to mind), but you'd rather not find yourself competing with it for the title.

But solubility, as a word, conceals several different behaviors. It comes down to how much the compound likes to associate with itself versus how much it likes to associate with solvent. Those two values can vary pretty independently, and you get different situations as they slide up and down. In the case of a drug formulation, that solvent is going to be as watery as feasible, so here’s how things break down:

Low self-affinity and low aqueous affinity: the first value will give you an oil or a low-melting solid, and the second will give you trouble going into solution. We try to avoid this category if possible, although you can always formulate as some sort of oil-filled gel cap if you’re really up for it, as with Vitamin E.

Lower self-affinity and higher aqueous affinity: Depending on the absolute values here, this could be low-melting again. But this time it’ll hop right into water, because it’s actually happier there than it is in its own crystal form. Formulation should be a breeze, but the problem with these guys is that they’ll soak water right out of the air and turn into goo if you don’t watch out.

High self-affinity and lower aqueous affinity: here’s where you run into trouble, and here, unfortunately, is where a lot of med-chem drug candidates land. The first value will give you a high melting point – the crystal’s very happy the way it is, thanks, and would rather not give up its structure. And water has a hard time competing. This is where the formulations people really get a workout – in a future post we’ll talk about some of the tricks used in this situation. Sometimes the chemists can fix things by making one part of the molecule lumpier – literally – so that the structure doesn’t pack so well into a crystal form.

High self-affinity and high aqueous affinity: depending on the absolute values again, this could be tricky. There are some high-melting solids that dissolve in water just fine: ionic substances like table salt make great crystals, but their interactions with water are even more favorable. But you can also end up with a compound that will stay in water, but has trouble going into water. Once the molecules are surrounded by water, they’re happy, but those first few water molecules have a tough time pulling each drug molecule out of the crystal surface. If you grind one of these guys up really fine and stir it for three days, you’ll probably get a reasonable solution, but at first glance you’d take it for a compound from the previous class. All the more reason to make sure you're at equilibrium before drawing any conclusions.
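
To put those four categories on a common footing, here’s the standard textbook bookkeeping – nothing specific to any compound above, just the usual thermodynamics and kinetics of dissolution:

```latex
% Driving force: lattice breakup (self-affinity) vs. solvation (aqueous affinity)
\Delta G_{\text{soln}} = \Delta G_{\text{lattice breakup}} + \Delta G_{\text{solvation}},
\qquad \Delta G_{\text{soln}} < 0 \;\Rightarrow\; \text{dissolution favorable}

% Rate (Noyes-Whitney): D = diffusion coefficient, A = surface area of the solid,
% h = diffusion layer thickness, C_s = saturation solubility, C = bulk concentration
\frac{dm}{dt} = \frac{D\,A}{h}\left(C_s - C\right)
```

The first line is the tug-of-war between the two affinities. The second is why grinding that last class of compound really fine eventually works: you’re not changing the saturation solubility at all, just cranking up the surface area.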

So that’s a quick look at solubility, and a quick look at the range that a medicinal chemist has to think about: from picturing molecules stacking one by one into a crystal, to picturing a drug candidate gumming up a syringe held by a muttering, red-faced pharmacologist.

Comments (17) + TrackBacks (0) | Category: Drug Development

March 6, 2008

Fakery And Its Ends

Posted by Derek

Thinking about that plagiarizing Indian professor brings up the same thought I always have in these situations: what on Earth is going through the heads of these people?

I can tell you, honestly, that I have never faked any data. (That phrase makes me remember, though, that one of the most crazed fabulists I’ve ever known started a good number of his sentences with the phrase “I tell you honestly”). I would feel nervous and guilty about making up so much as an NMR coupling constant – I freely admit to having put down “10 Hz” for something that might well be 9 on closer inspection, but making it up without having even looked? No way. It’s not like I have a halo over my head, but hey, these things are real numbers that people can check. You’d think that if a person feels the need to lie, they’d pick something else to lie about. I can see telling people that the check is in the mail, or that yes, I did indeed read every word of your insightful memo, but I can’t see telling someone that I made some compound that I didn’t make.

So, then, faking up a whole publication? How can you do that and sleep at night? Even if it’s just some obscure analytical method, published in a journal that no one has ever read an issue of front to back, how can you do that? Well, then, how about sixty or seventy of the damn things over a period of a few years – that’s what this guy did, after all.

And I think that, other than the (to me) incomprehensible mental angle, what I feel about this sort of thing is anger. Although I work in a very applied research field, I think that scientific research is generally a good thing in and of itself. I’m signed up with Francis Bacon and his program “for the effecting of all things possible”. (Peter Medawar's thoughts on this are well worth reading). So this sort of cynical fakery really gets to me, because it’s the work of someone who, in the end, figures that science and data are just stuff to use to get what you want. They’ve no intrinsic value. It’s not like anyone cares, right?

It’s like watching a pastry shop mix ground cardboard into their muffins – hey, you get more muffins that way, and what good are the damn things anyway if not to unload them on the idiot customers for cash? So for anyone who came to Chiranjeevi’s work for anything useful (God help ‘em), well, his message to you is to stick it in your ear. “Useful for you” isn’t anything he cares about. What he’s interested in, of course, is “useful for him”, and that’s what the whole enterprise of science comes down to for someone like this: a means to an end. And what mighty end is that? Why, advancement at Sri Venkateswara University, of course. And some pocket money. And a longer CV. Noble stuff, isn’t it?

Comments (17) + TrackBacks (0) | Category: The Dark Side | The Scientific Literature

March 5, 2008

Smaller, Wetter, Harder to Work With

Posted by Derek

There’s an interesting article coming out in J. Med. Chem. on antibiotic compounds, which highlights something that’s pretty clear if you spend some time looking at the drugs in that area. We make a big deal (or have made one over the last ten years) about drug-like properties – all that Rule-of-Five stuff and its progeny. Well, take a look at the historically best-selling antibiotic drugs: you’ve never seen such a collection of Rule of Five violators in your life.

That’s partly because a lot of structures in that area have come from natural products, but hey, natural products are drugs, too. Erythromycin, the aminoglycosides, azithromycin, tetracycline: what a crew! But they’ve helped an untold number of people over the years. It’s true that the fluoroquinolones are much more normal-looking, but those are balanced out by weirdo one-shots like fosfomycin. I mean, look at that thing – would you ever believe that that’s a marketed drug? (And with decent bioavailability, too?)
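
If you want to put numbers on “Rule of Five violator”, here’s a minimal sketch using the open-source RDKit toolkit (assuming you have it installed – and the aspirin SMILES below is just a placeholder; swap in erythromycin or tetracycline to watch the violations pile up):

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five_violations(smiles: str) -> int:
    """Count Lipinski Rule of Five violations for a molecule."""
    mol = Chem.MolFromSmiles(smiles)
    violations = 0
    if Descriptors.MolWt(mol) > 500:        # molecular weight over 500
        violations += 1
    if Descriptors.MolLogP(mol) > 5:        # calculated logP over 5
        violations += 1
    if Lipinski.NumHDonors(mol) > 5:        # more than 5 H-bond donors
        violations += 1
    if Lipinski.NumHAcceptors(mol) > 10:    # more than 10 H-bond acceptors
        violations += 1
    return violations

# Aspirin as a placeholder: a well-behaved Rule of Five citizen.
print(rule_of_five_violations("CC(=O)Oc1ccccc1C(=O)O"))  # 0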

No, you have to be broad-minded if you’re going to beat up on bacteria, and I think some broad-mindedness would do us all good in other therapeutic areas, too. I don’t mean we should ignore what we’ve learned about drug-like properties: our problem is that we tend to make allowances and exceptions on the greasy high-molecular weight end of the scale, since that’s where too many of our compounds end up. It wouldn’t hurt to push things on the other end, because I think that you have a better chance of getting away with too much polarity than you have of getting away with too little.

One reason for that might be that there are a lot of transporter proteins in vivo that are used to dealing with such groups. It’s easy to forget, but a great number of proteins are decorated with carbohydrate residues, and they’re on there for a lot of reasons. And a lot of extremely important small molecules in biochemistry are polar as well – right off the top of my head, I don’t know what the logD or polar surface area of things like ATP or NAD are, but I’ll bet that they’re far off the usual run of drugs. Admittedly, those aren’t going to reach good blood levels if you dose them orally; we’re trying to do something that’s rather unnatural as far as the body’s concerned. But we could still usefully take advantage of some of the transport and handling systems for such molecules.

But that’s not always easy to do. We all talk about making our compounds more polar and more soluble, but we balk at some of the things that will do that for us. Sure, you can slap a couple of methoxyethoxys on your ugly flat molecule, or hang a morpholine off the end of a chain to drag things into the water layer. But slap five or six hydroxyls on your molecule, and you’ll be lucky not to have the security guards show up at your desk.

There are, to be sure, some good reasons why they might. Hydroxyls and such tend to introduce chiral centers, which can make your synthesis difficult and dramatically increase the amount of work needed to fill out the structural possibilities of your lead series. That’s why these things tend to be (or derive from) natural products. Some bacterium or fungus has done most of the heavy lifting already, both in terms of working out the most active isomers and in synthesizing them for you. Erythromycin’s a fine starting material when you can get it by fermentation, but no one would ever, ever consider it if it had to be made by pure total synthesis.

There’s another consideration, which gets you right at the bench level. For an organic chemist, working with charged, water-soluble compounds is no fun. A lot of our lab infrastructure is built for things that would rather dissolve in ethyl acetate than water. A constant run of things with low logD values would mean that we’d all have to learn some new skills (and that we’d all probably have to spend a lot of time on the lyophilizer). Ion-exchange resins, gel chromatography, desalting columns – you might as well be a biochemist if you’re going to work with that stuff. But in the end, perhaps we might be better off, at least part of the time, if we were.

Comments (13) + TrackBacks (0) | Category: Drug Industry History | In Silico | Infectious Diseases

March 4, 2008

Off Target? Which Target Did You Mean?

Posted by Derek

Here's a snapshot for you, to illustrate how little we know about what many of our compounds can do. I was browsing the latest issue of the British Journal of Pharmacology, which is one of many perfectly respectable journals in that field, and was struck by the table of contents.

Here, for example, is a paper on Celebrex (celecoxib), but not about its role in pain or inflammation. No, this one, from a group in Turin, is studying the drug's effects on a colon cancer cell line, and finding that it affects the ability of the cells to stick to surfaces. This appears to be driven by downregulation of adhesion proteins such as ICAM-1 and VCAM-1, and that seems to have nothing in particular to do with COX-2 inhibition, which is, of course, the whole reason that Celebrex exists.

This is a story that's been going on for a few years now. There's been quite a bit of study on the use of COX-2 drugs in cancer (particularly colon cancer), but that was driven by their actual COX-2 effects. Now it's to the point that people are looking at close analogs of the drugs that don't have any COX-2 effects at all, but still seem to have promise in oncology. You never know.

Moving down the list of papers, there's this one, which studies a well-known model of diabetes in rats. Cardiovascular complications are among the worst features of chronic diabetes, so these folks are looking at the effect of vascular relaxing compounds to see if they might provide some therapeutic effect. And they found that giving these diabetic rats sildenafil, better known as Viagra, seemed to help quite a bit. They suggest that smaller chronic doses might well be beneficial in human patients, which is definitely not something that the drug was targeted for, but could actually work.

And further down, here's another paper looking at a known drug. In this case, it's another piece of the puzzle about the effects of Acomplia (rimonabant), Sanofi-Aventis's one-time wonder drug candidate for obesity. It's become clear that it (and perhaps all CB-1 compounds) may also have effects on inflammation and the immune system, and these researchers confirm that with one subtype of blood cells. It appears that rimonabant is also a novel immune modulator, which is most definitely not one of the things it was envisioned as. Do the other CB-1 compounds (such as Merck's taranabant) have such effects? No one knows, but it wouldn't come as a complete surprise, would it?

These are not unusual examples. They just serve to show how little we understand about human physiology, and how important it is to study drugs in whole living systems. You might never learn about such things by studying the biochemical pathways in isolation, as valuable as that is in other contexts. But our context in the drug industry is the real world, with real human patients, and they're going to be surprising us for a long time to come. Good surprises, and bad ones, too.

Comments (8) + TrackBacks (0) | Category: Cardiovascular Disease | Diabetes and Obesity | Drug Development | Toxicology

March 3, 2008

Big Steaming Heaps of Fraud

Posted by Derek

Since I had a blog entry here recently talking about plagiarism, I thought I should point out a whopping case of it that’s come to light. One Pattium Chiranjeevi, a professor of chemistry at Sri Venkateswara University in Tirupati, India, has been accused of cranking out dozens of forged publications over the last few years.

I don’t see how there can be any doubt about the guy. He published 60 or 70 papers in under four years, which is enough to make you wonder right there. Unless you’ve got a monster research group, and you’re constantly breaking everything down into the tiniest bites and repeating lots of stuff to boot, that’s just not possible. But these papers, mostly on analytical methods development, are just too similar to things that were already in the literature. Elsevier has already retracted thirteen papers from the list, and no doubt other publishers are working on doing the same. A panel at his university has concluded that he plagiarized data and included “unjustified co-authors”. My favorite part of the whole affair is that some of his publications include data from instruments that don’t even exist at SVU.

We have P. K. Dasgupta at UT-Arlington to thank for catching on to all this. As detailed here in C&E News, he realized that one of Chiranjeevi's papers sent in for review was identical to something he'd seen last year. Well, mostly identical - Chiranjeevi had gone so far as to substitute the word "arsenic" for the word "chromium", but other than that demanding find-and-replace job, the manuscripts were identical. That should give you some idea of the level this guy was working on. Interestingly, he doesn't seem to show up in that Deja Vu database I linked to earlier, even though some of the journals he published in are in PubMed - is this because of these sorts of word games?
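
Catching that sort of find-and-replace job is, by the way, almost trivially easy to automate. Here's a toy sketch with Python's standard difflib – the two titles are invented stand-ins, not text from the actual papers:

```python
import difflib

# Invented stand-in titles: one word swapped, everything else identical.
original    = "Determination of chromium in water samples by spectrophotometry"
resubmitted = "Determination of arsenic in water samples by spectrophotometry"

similarity = difflib.SequenceMatcher(None, original, resubmitted).ratio()
print(f"{similarity:.2f}")  # close to 1.0 - a red flag for near-duplicates
```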

Science managed to get ahold of Chiranjeevi for comment, and his response does not inspire visions of a man unjustly accused. He blames colleagues and journal editors for the whole thing, says the charges are “baseless”, and (you won’t see this one coming) says that he plans to take action in an “international court of justice” to clear his name. Science left that last phrase in quotes, too, even though it’s a perfectly recognizable English term, which is the equivalent of putting “sic” after it: “That’s really what he said, folks; we’re not making that one up”. What sort of person starts blowharding (no offense!) about international courts of justice in a situation like this? Quite possibly the sort of maniac who’s capable of, well, plagiarizing up a new publication every three weeks or so without even bothering to check whether the experimental section includes equipment that he’s ever seen or used. What goes through the heads of these people is a mystery that the rest of the population may never solve.

That Science news article tries to tie this to the recent scandals in stem cell research and low-temperature physics, but I don’t think the comparison holds up. For one thing, those two weren’t plagiarism, but featured results that had been completely made up. And they were spectacular cases in hugely popular fields of research, while Chiranjeevi’s papers are small and relatively obscure. It’s doubtful that anyone was led down the wrong path by reading them – in fact, it’s doubtful if anyone read them to any great extent at all, which is how something like this can go on so long. These sorts of papers are specialized reference material, not breaking news. Actually, it makes more sense to plagiarize that kind of work than to claim to have performed groundbreaking work in stem cells or superconductivity. If Chiranjeevi had cut back to a few papers per year, he probably could have made a career out of it. For some values of the word “career”.

Note: if I'm lucky, maybe one of the professor's defenders (!) will show up in the comments section, as one seems to have here and here!

Comments (28) + TrackBacks (0) | Category: The Dark Side | The Scientific Literature