About this Author
College chemistry, 1983

Derek Lowe The 2002 Model

After 10 years of blogging. . .

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly. Twitter: Dereklowe

Chemistry and Drug Data: Drugbank
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
Not Voodoo

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
BBSRC/Douglas Kell
Realizations in Biostatistics
ChemSpider Blog
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Eye on FDA
Chemical Forums
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa

Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
Gene Expression (I)
Gene Expression (II)
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net

Medical Blogs
DB's Medical Rants
Science-Based Medicine
Respectful Insolence
Diabetes Mine

Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem

Politics / Current Events
Virginia Postrel
Belmont Club
Mickey Kaus

Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Monthly Archives

August 31, 2004

Me Too, Part Two

Posted by Derek

As came up in the comments to the previous post, there's not as much price competition inside a given drug category as you'd think. That's not because we're Evil Price Gougers, at least not necessarily. As was pointed out yesterday, "me-too" drugs aren't as equivalent as some people think. The main reason we go ahead with a drug in a category where there's already competition is that we think we have some advantage we can use to gain market share.

This is a constant worry in every drug development effort where there's already a compound out there. I've personally, many times, been in drug project meetings where we've looked at the best competing compound (one that's either already marketed or well into clinical trials) and said "We haven't beaten them yet. We're not going to make it without some kind of unique selling point." The best of those, naturally, would be superior efficacy or a superior safety profile. Then you have easier dosing, fewer interactions with other drugs, and so on. I need to emphasize this: I have seen drug projects killed because no case for an advantage could be made.

Now, there's room to argue about how much better a drug's efficacy has to be for it to count as a real advance in the field, or at least a bigger seller. You can argue about any of those possible advantages I listed, and it's true that drug companies push some compounds that aren't exactly huge leaps over the previous state of the art. (You see more of that when there's a case of shriveled pipeline in progress.) But there has to be something, and the bigger the difference, the better it is for us. We're motivated, by market forces, to come up with the biggest advances we can. The sales force would much, much rather be out there with data showing that the new drug beats the competition in a clean fight, as opposed to saying that it beats the old one on points, in a subset of patients, if you massage the data enough and squint hard, and besides it tastes better, too. . .

And as I've pointed out before, we often find out things about compounds long after they've reached the market. Lipitor, as discussed yesterday, is a case in point. I have not been a Lipitor fan in the past. The statin field seemed already pretty well served to me (as it did to a number of people inside Warner-Lambert during the drug's development, frankly.) The drug made its way forward based on efficacy in the clinic: it seemed to do a better job lowering cholesterol and improving the LDL/HDL ratio. How much advantage that is in the long term is another question, but those are the best markers we have.

The whole anti-inflammatory C-reactive protein story about the drug only came up after it was already on the market. The marked differences between it and the other statins, which I have to assume at this point are real, are a pleasant surprise to everyone involved. Warner-Lambert (and then Pfizer) thought it was a better compound, but not to this degree or for these reasons, I'll bet. I'd say that this is another argument for having multiple drugs in the same category. We don't, and can't, know everything that they'll do.

Comments (2) + TrackBacks (0) | Category: "Me Too" Drugs | Cardiovascular Disease | Clinical Trials | Drug Development | Drug Prices

Clinical Trials And What to Do With Them

Posted by Derek

Allow me to get a little defensive. If I understand some of the critics of my industry, we spend most of our time making "me-too" ripoff drugs rather than doing something that provides any clinical benefit to patients. And, if I have this right, here's how we determine efficacy: we run clinical studies until we get the answer we want, and then we bury all the other ones. (Mind you, we bury the data by giving it to the FDA, but stay with me here.)

OK, now let's try to explain this. Merck has just released a study on its statin drug, Zocor. Following in the footsteps of two other studies with Pfizer's statin, the market-leading Lipitor, Merck dosed patients who had just suffered heart attacks. Lipitor treatment seemed to show a real benefit in these situations, lowering the rate of later cardiovascular trouble, and Merck was hoping for (and no doubt expecting) the same thing.

But they were rudely surprised. At the lower doses of Zocor, they failed to show any benefit at all. And at the highest dose, while they managed to show a lower rate of second heart attacks, they still didn't reach significance versus the placebo group. Worst of all, several of the high-dose patients showed the muscle-weakening condition rhabdomyolysis. That's the bane of statin drugs, and the reason why Bayer pulled their compound (Baycol) from the market. (Just to complicate things, one of Merck's placebo patients showed rhabdomyolysis, too, which is food for thought and should give you an idea of how much fun it is to interpret clinical trial data.)

So what's going on here? Zocor and Lipitor both work by inhibiting HMG-CoA reductase. They hit the same mechanism. Were the patients different? The study's authors say it's possible. The patients in the Lipitor studies seem to have been receiving more aggressive therapy in addition to the drug. Are the drugs different? That's possible, too. Lipitor, as it turns out, seems to lower the inflammation marker C-reactive protein much more than Zocor, and that could potentially make a difference.

But if the drugs are really different, what happens to the idea that Lipitor is just a me-too, yet another statin piling on the profits? If we in the industry hadn't kept banging away at these drugs, we wouldn't have ever known that better ones could be found. Would we? As I've pointed out in the past, if you're going to market a drug in a category where the competition is ahead of you, you'd better have some improvement to point at or set about finding one. Lipitor came into the market under the banner of "lower dose / higher efficacy", and it may be picking up more advantages as time goes on.

Now, if we believe that the drugs aren't different, which will be an interesting thing to try to prove at this point, then we have to figure out how much weight to put on this study. How does it go into the Great Clinical Trial Repository? With an asterisk? Then shouldn't the earlier two studies with Lipitor have one, too? This is the same situation I spoke of before.

And what about this clinical trial data in general? Isn't this the sort of bad news that we're supposed to be sweeping under the rug over here? A full article in JAMA complete with vigorous editorial commentary. . .some rug. Oh, and one other thing: those two earlier Lipitor studies that showed a benefit. One of them was from Pfizer (/Pharmacia), as you'd expect. But the other one was from their competition. Bristol-Myers Squibb has been trying to prove that their statin, Pravachol, is better than Lipitor, and failing. Where's that damn rug when you need it?

Comments (5) + TrackBacks (0) | Category: "Me Too" Drugs | Cardiovascular Disease | Clinical Trials

August 29, 2004

. . .It's a Wonder I Can Think At All

Posted by Derek

When I think back on all the things I learned in grad school. . .well, let's say that not all of it has come in handy. Chemistry, like all the other sciences, long ago split into all kinds of sub-specialties, so it's no wonder that I haven't had to worry much about Tanabe-Sugano diagrams (to pick a representative example from inorganic chemistry.) Nor has normal coordinate analysis featured much in my work of the past twenty years, since I'm not some sort of theoretical spectroscopist. And, thank God, I haven't had to sit down and do any quantum mechanics since the day of my final exam in that well-known rite-of-passage course.

But I haven't had to use things that are nominally in my area of expertise, either. Electron spin resonance, for example - that's something that free radical chemists care about, but I did a whole post-doctoral year doing free radical chemistry, and never did the subject come up. Makes me glad that I didn't spend any more time learning it.

How about chiral aldol chemistry? That's close to home. It's organic synthesis, my very own subfield, and it was the subject of the first question I was asked during my PhD orals. Have I ever done a chiral aldol reaction? Not a one, and I don't have any plans to. Was my time well spent learning all the various theories about how they work? Doubts have crept in.

Moving even closer to how I earn my living, how about all those med-chem graphs and equations they try to teach you? My first year in the business, they sent me (and a number of other folks) off to a well-known summer short course in medicinal chemistry, to teach us the ropes. Now, I can't pretend that I didn't learn anything useful there, although it was all material that I was going to learn anyway. But those equations, those fine equations for pharmacokinetic behavior, for clearance and absorption and distribution. . .I haven't had call for one of them since.

I mean, I think about those phenomena all the time, but not in mathematical terms. The real systems are just too messy for that, and most of the time we don't understand what's going on, anyway. I can just see myself back in that classroom, copying these things down. Did I have the suspicion right there that I'd never write them down again, or did that take a little while longer?

Comments (2) + TrackBacks (0) | Category: Graduate School

August 26, 2004

Things I Won't Work With: Polyazides

Posted by Derek

The azide group (three nitrogens bonded together in a row, for the non-chemists in the crowd) has several personalities. Unfortunately, most of them are hostile. Azide anion, as you find in sodium azide, is pretty toxic. It shuts down several important enzymes, and it's often used in biology labs as a general metabolic poison.

Covalent azides are a different sort of beast. Having something directly bonded to the group stops it from being an enzyme-killer, for the most part, but you have a new problem to worry about: explosiveness. In general, reasonably high molecular weight azides are OK to handle (e.g., the early anti-HIV drug azidothymidine). I've made some of that sort, since azide displacement is a classic (and useful) way to get a nitrogen into your molecule. But the smaller ones aren't worth the risk.

That's because the higher the percentage of nitrogens in the formula, the more you have to worry. Thermodynamically, nitrogens bonded to each other are always regarded as guilty until proven innocent - there's always the fear that they're going to find a way to throw off their civilized clothes and revert to wild nitrogen gas. That's a hugely stable compound. If your structure goes that route, all that extra bonding energy it used to have ends up diverted into flying shrapnel and loud noises.

A few years ago I saw that some Israeli escape artists had prepared triazidomethane, which I wouldn't touch with somebody else's ten-foot titanium pole. One carbon, one hydrogen, and nine nitrogens - look at the time! Gotta run! But there's always worse. Just today I was reading a soon-to-be-published paper in Angewandte Chemie from some daredevils at USC. They've prepared titanium tetraazide, of all things. One titanium and twelve nitrogens: whoa! Podiatrist appointment! See you later!

You can isolate the stuff, it seems, as long as you handle it properly. It turns out that brutal treatments like, say, touching it with a spatula, or cooling down a vial of it in liquid nitrogen - you know, rough handling - make it detonate violently. I think that staring hard at it is OK, though. The authors recommend using everything you have for protection if you're zany enough to follow their lead: goggles, blast shield, face shield, leather suit (!) and ear plugs. Those last two suggestions are unique in my experience, and quite. . .evocative of what you have to look forward to with these compounds. (We don't have any leather suits around where I work, although I'm sure I'd look dashing in one.)

Some of the folks on the paper have a joint appointment with an Air Force missile propulsion research lab. They've found a home. Me, I'll be way over here.

Comments (7) + TrackBacks (0) | Category: Things I Won't Work With

August 25, 2004

Will the Uncommon Work for the Common Good?

Posted by Derek

Yochai Benkler of the Yale Law School has an interesting policy article in a recent issue of Science. It's on the "Problems of Patents", and he's wondering about the application of open-source methods to scientific research. He has two proposals, one of which I'll talk about today.

In some sort of ideal world (which for some folks also means Back In The Good Old Days of (X) Years Ago), science would be pretty much open-source already. Everyone would be able to find out what everyone else was working on, and comment on it or contribute to it as they saw fit. In chemistry and biology, the closest things we have now, as Benkler notes, are things like the Public Library of Science (open-source publishing) and the genomics tool Ensembl. Moving over to physics and math, you have the ArXiv preprint server, which is further down this path than anything that exists on our end of the world.

Note, of course, that these are all academic projects. Benkler points out that university research departments, for all the fuss about Bayh-Dole patenting, still get the huge majority of their money from granting agencies. He proposes, then, that universities adopt some sort of Open Research License for their technologies, which would let a university use and sublicense them (with no exclusivity) for research and education. (Commercial use would be another thing entirely.) This would take us back, in a way, to the environment of the "research exemption" that was widely thought to be part of patent law until recently (a subject that I keep intending to write about, but am always turned away from by pounding headaches.)

As Benkler correctly notes, though, this would mean that universities would lose their chance for the big payoff should they discover some sort of key research tool. A good example of this would be the Cohen/Boyer recombinant DNA patent, licensed out 467 times by Stanford for hundreds of millions of dollars. And an example of a failed attempt to go for the golden gusto would be the University of Rochester's reach for a chunk of the revenues from COX-2 inhibitors, despite never having made one. (That's a slightly unfair summary of the case, I know, but not as far from reality as Rochester would wish it to be.)

That's another one I should talk about in detail some time, because the decision didn't rule out future claims of that sort - it just said that you have to be slicker about it than the University of Rochester was. As long as there's a chance to hit the winning patent lottery ticket, it's going to be hard to persuade universities to forgo their chance at it. Benkler's take is that the offsetting gains for universities, under the Open Research License, would be "reduced research impediments and improved public perception of universities as public interest organizations, not private businesses." To compensate them for the loss of the chance at the big payoff, he suggests "minor increases in public funding of university science."

Laudable. But will that really do it? As far as I can tell, most universities are pretty convinced already that they're just about the finest public interest organizations going. I'm not sure they feel that much need for good publicity, rightly or not. And Benkler's right that a relatively small increase in funding would give universities, on average, what they would make, on average, from chasing patent licensing money. But show me a university that's willing to admit that it's just "average."

The problem gets even tougher as you get to the research departments that really aren't average, because they're simultaneously the ones with technologies that would be most useful to the broader research community and the ones with the best chance of hitting on something big. I'll be surprised - pleasantly, but still very surprised - if the big heavy research lifters of the world agree to any such thing.

Comments (4) + TrackBacks (0) | Category: Academia (vs. Industry) | Patents and IP

August 24, 2004

Living by the IP Sword

Posted by Derek

Back when I was in graduate school, we didn't have these here fancy automated literature searches. So I had to find out the old-fashioned way that the molecule I was working on had just been synthesized (first) by someone else: by picking up the library's latest issue of the Journal of the American Chemical Society (in its old grey-covered era) and coming across it in the table of contents.

Not a fun moment. I gave a muffled shout and started paging frantically to the article. And yep, there was another research group's total synthesis all right. The only consolation was that they weren't doing it the way that I was, and my route was better. Allegedly. After this, my attitude was "The world does not need another synthesis of rosaramicin. But I do."

Now that I'm in industry, I don't fear the open literature so much. I fear the patent literature. Whenever a drug company starts serious work on a chemical series, it puts out a sieve of automated searches for the core structure and all its close relatives. If you're ripping off someone else's known structure, which we all do from time to time, then you really spend a lot of time looking over your shoulder. These searches usually run once a week, and you want to see a comforting "0 results" come up, which tells you that you're still in the clear. Sometimes you're in the middle of the road, though, with the high-beam headlights bearing down on you. Most people with pharma experience have had a chemical series (or a whole project) yanked out from under them because it turns out that someone else was already working on it.

I can think of at least one case where it turns out that my group and a group at another company had stumbled across the exact same chemical series, and we were both working away at it at exactly the same time. Neither of us knew it at the time, of course, but when the patent applications published, everything became clear. We'd filed our applications within a few weeks of each other. And neither project was taking off from a known compound in the field; we'd apparently each discovered our leads through random screening. Makes you wonder about how much overlap there is between company screening collections. . .

Comments (1) + TrackBacks (0) | Category: Patents and IP

I'll Have the Price They're Having

Posted by Derek

Thanks to Arnold Kling, I found this piece on the economics of the drug industry. It comes from the remarks at a recent industry conference, and it's worth reading (even if it does make an approving reference to Marcia Angell near the end) - an excerpt:

If the prosperous flow of innovations was to be sustained (he said), either the industry would have to find a way to dramatically alter its cost structure, or else "we are going to have to figure out collectively some way across political parties and countries to construct and maintain a structure of global price differentials."
With the promise that changes in the cost structure would be addressed in another meeting later in the fall, attention turned to various schemes for differential pricing.

Berndt skimmed quickly over the traditional argument for differential pricing. Many industries had high fixed costs and relatively low marginal costs, he said -- electricity, telecommunications, software, database services, movies, and so forth.

But none was in the same class as pharmaceuticals, where the difference in incremental cost between the first tablet of a new medicine and the second is on the order of $800 million for an average medicine -- $800 million to "get the science right" and make certain that the treatment works in some degree, 25 cents to make the second copy, and the third, and as many more tablets as can be sold.

Not being an economist, I'd never thought of it in quite that way. It's a familiar illustration of the problem of software piracy, though, or file-sharing of copyrighted work. We don't have as much of a piracy problem in pharmaceuticals (although it's certainly there), since it's harder for a third party to run off further copies of our pills.
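
That arithmetic is worth making concrete. Here's a minimal sketch of the average-cost curve it implies; the $800 million and 25-cent figures are from the quoted remarks, but the sales volumes are made up purely for illustration:

```python
# Illustrative only: average cost per tablet given a huge fixed R&D cost
# and a tiny marginal production cost (figures from the quoted remarks).
FIXED_COST = 800_000_000   # dollars to "get the science right"
MARGINAL_COST = 0.25       # dollars to make each additional tablet

def average_cost(tablets_sold: int) -> float:
    """Average cost per tablet once R&D is amortized over total sales."""
    return FIXED_COST / tablets_sold + MARGINAL_COST

for n in (1_000_000, 100_000_000, 1_000_000_000):
    print(f"{n:>13,} tablets -> ${average_cost(n):,.2f} per tablet")
```

The point of the exercise: the price of a pill is almost entirely amortized fixed cost, which is exactly why a second party who skips the R&D can undercut you to pennies.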

The focus on differential pricing is justified. That's what the whole drug reimportation debate comes down to, and it's the corner my industry has painted itself into. This meeting seems to have mostly tried to find ways to maintain the existing system, which is at least better than suddenly yanking it down. That is, it's better for all of us who would be suddenly pitched onto the street, and it's better for our customers, who in a few years might wonder why there haven't been any new drugs to treat their diseases for a while.

But I'll really be interested in the next conference mentioned above. I think that eventually we're going to have to move to a new pricing structure, and I don't know what it's going to look like. The article itself has a suggestion, which I'll address separately - Arnold Kling has a follow-up on it here. Whatever it is, we're going to need some time to get ready for it.

Comments (5) + TrackBacks (0) | Category: Drug Prices

August 19, 2004

Empty Shelves

Posted by Derek

Yesterday I was writing about a proposal to encourage new antibiotics, partly by not putting so much effort into discouraging the use of the current ones. The economist who's advocating this, Paul Rubin, also would like for the FDA to consider accelerating the approval process (and at the very least, not making it even harder than for other classes of drugs.)

I like the sound of that part, as you'd guess, though always with a nervous look up in the sky for the circling silhouettes of the product-liability attorneys, whose razory talons were the subject of yesterday's post. But there's another problem with just stepping out of the way of the new antibiotics: there aren't very many coming through.

Many companies have been scaling back their anti-infectives research over the last few years, and I don't think that the regulatory environment is the main reason. The whole therapeutic area is rather target-poor. We've exploited the obvious vulnerabilities of bacteria, thus the -cillins and -sporins, the fluoroquinolones and the erythromycins. Extended searching hasn't turned up many more modes of attack, at least not of that quality. The most recent new class of antibiotics that I can think of is the oxazolidinones, but resistance to the first one is already showing up.

Here's a rundown of newer antibiotics (the situation hasn't changed much since this appeared.) Note that most of the things on this list have been known for a long time and are being re-examined, or are improved versions of things we already have.

I know where Rubin is coming from - drug resistance wouldn't be as much of a problem if we had a steadier stream of new antibiotics with new mechanisms of action. But I don't know if we can hold up our end. Providing incentives by loosening regulatory requirements could persuade some companies to get into the hunt (or stay in it), but the hunt itself is the real limiting factor. It's not a good situation, and I think that we in the industry are kind of at a loss as to what to do about it. . .

Comments (6) + TrackBacks (0) | Category: Infectious Diseases

August 18, 2004

Resistance to Resistance

Posted by Derek

Forbes has an article on some recent work of Paul Rubin, an economist at Emory. He's looking at the situation in approvals of new antibiotic drugs, which isn't an encouraging sight.

He's of the opinion that too much government effort has gone into cutting overuse of the existing drugs (to try to slow down the development of resistant bacteria) and not enough has been done to make new drugs easier to bring to market. The use of antibiotics in general is being discouraged, and at the same time the FDA has made the regulatory environment for new submissions tougher. A quote:

". . .the FDA policy of requiring additional testing for antibiotics is a fairly bizarre policy and makes no sense. . .A much more cost-effective alternative would be to approve the drug in the normal manner (or even provide an accelerated approval) and spend additional resources on surveillance."

He's probably right about this, but (and here's the usual problem) it makes sense only if you ignore the tort lawyers. If your new antibiotic goes out and causes trouble in some subset of the patient population, it's no use telling the attorneys that, hey, the FDA approved it. They're not going to get money out of the FDA; they're going to get it out of you. Nope, it was your willful, stupid, perverse, dare I say evil negligence that led to this completely avoidable tragedy, and. . .aargh, you can write the rest as well as I can.

That's the thing: the FDA requires that we show safety and efficacy. We can prove the presence of efficacy, but safety is merely the absence of harm. No one can prove a trial lawyer's definition of safety. A clinical trial can tell us that in the population that participated in the trial, X adverse events took place. Whether X is a greater or lesser number than we'd expect in the general population is a question that can be answered statistically, but whether our drug caused those X events isn't a question that can usually be answered at all.

In such cases, our best chance is to see if the affected patients had something in common, or if the problem increased in proportion to the dose. Often enough, neither is the case - does that make the compound safe, or not? Even if there weren't any signs during the trials, what will happen when our drugs hit the orders-of-magnitude larger population of paying customers? We don't know. We can rule out whatever our clinical trials had the statistical power to detect, but we will never see the one-in-fifty-thousand kinds of trouble. Not until the lawsuits start flying.
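
The arithmetic behind that last point is simple and sobering. A quick sketch, with made-up but representative numbers (a one-in-fifty-thousand adverse event, trial sizes of my own choosing):

```python
# Why rare adverse events sail right through clinical trials:
# the chance of the trial population containing even one case is small.
def p_at_least_one(event_rate: float, n_patients: int) -> float:
    """Probability that a trial of n_patients sees the event at least once."""
    return 1.0 - (1.0 - event_rate) ** n_patients

rate = 1 / 50_000  # assumed one-in-fifty-thousand adverse event
for n in (3_000, 50_000, 1_000_000):
    print(f"{n:>9,} patients: {p_at_least_one(rate, n):.1%} chance of seeing it")
```

A few thousand trial patients give only about a one-in-twenty chance of seeing a single case, while a million paying customers make it a near-certainty - which is the whole asymmetry in a nutshell.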

There's yet another problem with Rubin's argument, scientific rather than regulatory, which I'll address tomorrow. . .

Comments (6) + TrackBacks (0) | Category: Infectious Diseases

August 17, 2004

Kinases and Their Komplications

Email This Entry

Posted by Derek

I'm going to take off from another comment, this one from Ron, who asks (in reference to the post two days ago): "would it not be fair to say that cellular biochemistry gets even more complicated the more we learn about it?"

It would indeed be fair. I think that as a scientific field matures it goes through several stages. Brute-force collection of facts and observations comes early on, as you'd figure. Then the theorizing starts, with better and better theories being honed by more targeted experiments. This phase can be mighty lengthy, depending on the depth of the field and the number of outstanding problems it contains. A zillion inconsistent semi-trivialities can take a long time to sort out (think of the mathematical proof of the Four-Color Theorem), as can a smaller number of profound headscratchers (like, say, a reconciliation of quantum mechanics with relativity as they deal with gravity.)

If the general principles discovered are powerful enough, things can get simpler to understand. Think of the host of problems that early 20th-century physics had, many of which resolved themselves as applications of quantum mechanics. Chemistry went through something similar earlier, on a smaller scale, with the adoption of the stereochemical principles of van't Hoff. Suddenly, what seemed to be several separate problems turned out to be facets of one explanation: that atoms had regular three-dimensional patterns of bonding to other atoms. (If that sounds too obvious for such emphasis, keep in mind that this notion was fiercely ridiculed and resisted at the time.)

Cell biology is up to its pith helmet in hypotheses, and is nowhere near out of the swamps of fact collection. As in all molecular biology, the sheer number of different systems is making for a real fiesta. Your average cell is a morass of interlocking positive and negative feedback loops, many of which only show up fleetingly, under certain conditions, and in very defined locations. Some general principles have been established, but the number of things that have to be dealt with is still increasing, and I'm not sure when it's going to level out.

For example, the other day a group at Sugen (now Pfizer) published a paper establishing just how many genes there are in mice that code for protein kinase enzymes. By adding phosphoryl groups, these enzymes act as extremely important players in the activation, transport, and modulation of the activities of thousands upon thousands of other proteins - and it turns out that there are exactly 540 of them. (Doubtless there are some variations as they get turned into proteins, but that's how many genes there are.) And that's that.

Now, that earlier discovery of protein phosphorylation as a signaling mechanism was a huge advance, and it has been appropriately rewarded. And knowing just how many different kinase enzymes there are is a step forward, too. But figuring out all the proteins they interact with, and when, and where, and what happens when they do - well, that's first cousin to hard work.

Comments (0) + TrackBacks (0) | Category: Biological News | In Silico

August 16, 2004

Clay Lies Still, But Blood's A Rover

Posted by Derek

When a drug makes it into the bloodstream (which is no sure thing, on my side of the business), it doesn't just float around by itself. Blood itself is full of all kinds of stuff, and there are many things in it that can interact with drug molecules.

For one thing, compounds can actually wander in and out of red blood cells. This usually isn't a big deal, but once in a while a compound will find a binding site in there, which had flippin' well better not be on the hemoglobin protein. Depending on the on- and off-rates, this can either add a welcome time-release feature to the dosing or it can be a real pain. I haven't heard as much about interactions with white cells, but since they're a much smaller fraction of the total blood it's not something we'd be likely to notice.

More commonly, drugs stick to some sort of plasma protein. The most common one is serum albumin, and another big player is alpha-1 acid glycoprotein, or AGP. Albumin's found in large amounts and has several distinct binding sites. Acidic drugs are well known to hold on to it. As far as I'm aware, no one's absolutely sure what it's there for, but it must be pretty important. The multiple binding sites make it seem like it could be some sort of concentration buffer for several different substances, but which ones? (I've never heard of an albumin knockout mouse - I assume that it would be lethal.)

The same comments about good and bad effects apply. A lot of effort has gone into schemes to predict plasma protein behavior, with success that I can charitably describe as "limited." The real test is to expose your compounds to fresh blood and see if you can get them back out. Some degree of protein binding is welcome, and you can go on up to 99% without seeing any odd effects. But at 99-and-some-nines you can start to assume that something is wrong, and that the interaction is too tight for everyone's good.
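To put numbers on those nines (a back-of-the-envelope sketch, nothing more), remember that only the unbound fraction of a drug is free to reach its target:

```python
# Free (unbound) drug fraction at various plasma protein binding levels.
# Only the free fraction is available to act on the target tissue.
def free_fraction(percent_bound):
    return 1.0 - percent_bound / 100.0

for bound in (90.0, 99.0, 99.9, 99.99):
    print(f"{bound}% bound -> free fraction {free_fraction(bound):.4f}")
```

Going from 99% to 99.9% bound sounds like a rounding error, but it cuts the free drug tenfold: the same total plasma level delivers one-tenth the active compound.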

But when you're doing your blood assay, you had better make sure to try it with all the species that you're going to be dosing in. There's a kinase inhibitor from a few years back called UCN-01 that provides a cautionary tale. It was dosed up to high levels in rats and dogs without trouble, passed its toxicology tests, and went into human trials. They started out at one-tenth the maximum tolerated rat dose in the Phase I volunteers, which should be a good margin. But when they got the blood samples worked up, everyone just about fell out of their chairs.

There was at least ten times as much drug circulating around as they'd expected, because it was all stuck to AGP and it just wasn't coming off. A single dose of the drug had a half-life in humans of about 45 days, which must be some sort of record. Well, you might think, what's the problem? A once-a-month drug, right? But it doesn't work like that: the compound was so tightly bound that it would never reach the tumor cells that it was supposed to treat. All it was doing was riding around in the blood. And the clinical program really dodged one from the safety perspective, too, because as they escalated the dose they would have eventually saturated all the binding that the AGP had to offer. Then the next higher dose would have dumped a huge overage of free drug into the blood, all at once. Not what you're looking for.
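That saturation cliff is easy to sketch with a crude model. The binding capacity here is an invented number, and the binding is treated as infinitely tight (all drug sequestered until the protein runs out) purely to show the shape of the problem:

```python
# Toy model of a plasma protein with a fixed binding capacity and
# very tight binding: below capacity, essentially all drug is
# sequestered; above it, every extra unit of dose shows up as free
# drug at once. All numbers are invented for illustration.
def free_drug(total_drug, protein_capacity=100.0):
    # Stoichiometric (infinitely tight) binding approximation
    return max(0.0, total_drug - protein_capacity)

for dose in (50, 90, 100, 110, 150):
    print(f"dose {dose:4} -> free drug {free_drug(dose):6.1f}")
```

Nothing, nothing, nothing - and then the free drug concentration takes off linearly with every increment past the capacity. That's the dose-escalation surprise waiting at the top of the curve.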

The compound is still being investigated, but it's having a rough time of it. It's been in numerous Phase I trials, with all sorts of dosing schedules. A look through the literature shows that the compound is mainly being used as a tool in cell assays, where there's no human AGP to complicate things. With so many kinase inhibitors out there being developed, it's going to be hard to find a place for one with such weird behavior.

Comments (5) + TrackBacks (0) | Category: Cancer | Pharmacokinetics

August 15, 2004

FullCell 1.0?

Email This Entry

Posted by Derek

Reader Maynard Handley, in a comment to the most recent post below, asks:

". . .how far are we from doing at least a substantial fraction of this stuff in silico? I've read that some amazing computational models of full cells now exist, but even so, this author didn't expect that drugs could be usefully tested computationally until 2030 which seems awfully far out."

I don't know the article that he's referring to, but "awfully far out" pretty much sums up my reaction, too. I just don't think we have enough data to do any real whole-cell modeling yet. It's coming, and perhaps for a few very well-worked-out subsystems we could do it now, but I'm sceptical even of that.

A few days reading the current cell biology literature will illustrate the problem. All sorts of proteins are found, all the time, to be players in systems that no one suspected them of being involved in. Kinases are found to phosphorylate things that no one had seen them touch before, and lipases are found to accept substrates that no one had realized they could handle. A given signaling peptide is gradually found to have more uses than a Swiss army knife. We don't even really understand the basic mechanisms (like G-protein-coupled receptor signaling) well enough to model them to any useful level.

The process of finding these things out doesn't seem like it's going to end soon, and there have to be many fundamental surprises waiting for us. Modeling the system in their absence is going to be risky - interesting, no doubt, and potentially lucrative (if you find a useful approximation), but risky. It's going to take some pretty convincing stuff for the drug industry to ever depend on it.

And all of this applies to single cells, which come in, naturally, an uncounted variety, each with its own peculiarities, the great majority of which we don't have any clue about. And then you come to the interactions between cells, which are highly significant and (in many ways) a closed book to us at present. If we knew more about these things, we'd be able, for example, to culture human cell lines that acted just like their primary tissue progenitors - but we can't do it, not yet.

No, although I have every belief that these things are susceptible to modeling, I just don't think we'll see it (on a useful scale) any time soon. Over the next twenty years, I'd expect to have some of the easier-to-handle cellular subsystems worked out to give robust in silico treatments, but a whole cell? And all the types of whole cells? Much longer than that. More than that I can't guess.

Comments (3) + TrackBacks (0) | Category: In Silico

August 12, 2004

A Week in the Life

Email This Entry

Posted by Derek

I have a weird job. I feel safe in saying that, not least because I'm supposed to discover a drug that can be sold to sick customers, and I haven't even come close to doing that in fifteen years of work. (No, it's not just me.) Another thing that makes me sure that my line of work is abnormal is that nothing I've ever worked on has ever quite gone the way I thought it was going to.

For instance: we make a compound, and it works in the first assay - it binds tightly to the protein target. Then it works in living cells, so we make more of it and we put it into mice. And it works there - not wonderfully, not really good enough yet, but enough to show that we're on the right track. So we go back and start changing the structure of the compound and making new analogs, which is the whole point of a medicinal chemist's job. You try to find something better.

Change a group over on this side of the molecule: dead. OK, be that way, we'll change it the other way around. . .well, the activity is back to where we started, but no better. Hmm - try this part over on the other side of the ring. Hey, it works! Ten times as active as the first compound! Let's put it into cells! And. . .nothing. Dead. Corn starch would be more active. No way to find out why. Back to the drawing board.

OK, let's try doing this change at the same time as we switch this part over here (keep in mind, this stuff doesn't happen instantaneously - these are days or weeks spent in the lab for each of these bolts of inspiration). . .hey! Back to great activity! Time for the cell assay again. . .this time it works. Make more of it, put it into that mouse assay, and - nothing. Nothing at all. Exactly the same as giving them club soda. Now what?

I'm not exaggerating. I fully expect some of my med-chem colleagues from around the industry to back me up in the comments section below: this is what our days and weeks look like around here. This is why I roll my eyes when I come across moonbat conspiracy theories about how the drug companies have all these secret cures that we're sitting on, see. . .hah. Secret cures, my colon. Some days we go home unsure if we're capable of boiling an egg.

Comments (6) + TrackBacks (0) | Category: Life in the Drug Labs

August 10, 2004


Email This Entry

Posted by Derek

I need to take a moment to remember two extraordinary scientists: Francis Crick and Thomas Gold. Both distinguished themselves by being willing not to care about what other people thought of them and their work, which is a useful spice for the stew as long as you don't add the whole jar.

Of the two, Gold was the harder for his colleagues to take. He worked in a variety of fields in a way that is hardly ever seen in modern science. Along the way, he had some spectacular misfires, but you have to be doing spectacular work to have those at all. And his successes (in things as diverse as pulsars and the bones of the inner ear) were impossible to deny. He may yet be proven right about his final provocation, the idea that geological hydrocarbons are, for the most part, just that: geological and not biogenic. He expanded this idea to propose the "deep, hot biosphere," which both generates methane and adds biogenic signatures to inorganic petroleum, and that part, at least, is looking more correct every year.

Cosmology, physiology, astronomy, geology - I don't think we're going to see his like again. To be honest, there are many people who will hope we don't. Gold was not inhibited about pointing out the failings, as he saw them, of fellow researchers, and there were many who saved up ammunition to pay him back in kind. I don't think science could function well with a population made out exclusively of Tommy Golds. But it would function even more poorly without any at all.

Francis Crick, the more famous of the two, probably seemed to the public to have dropped out of sight for the last fifty years after the DNA discovery. But molecular biologists know how important he was in the years after that first proposal, helping to work out the RNA code and other fundamental issues. Later on he turned to even harder areas, such as the physiological nature of consciousness and memory. No one person is going to solve those, and Crick didn't. But he took some fine swings at them, and he'll be missed.

Comments (1) + TrackBacks (0) | Category: General Scientific News

Solid Citizens

Email This Entry

Posted by Derek

I took the day off yesterday, but this morning it's back to the lab. I didn't leave any chemistry going in my hood, so there should be no surprises there, unless one of the things I've left sitting around took the opportunity to crystallize.

Just about every drug candidate we make should have one or more crystalline forms - we don't usually bother testing molecules that are too small to be solids at room temperature. But that doesn't mean that they all crystallize easily. Some things really need coaxing. If there's the least bit of water or solvent left in them, they sit around as thick oils, gums, or syrups.

Some of them will decide, after a few weeks, to finally start growing some crystals. Perhaps some of the volatile impurities finally evaporate, or possibly the molecules just finally banged into each other the right way and decided to set up housekeeping in the solid state. And you can't rule out the beneficial effects of lab dust for starting things off, either. There are all kinds of legends about labs in the old days that cleaned things up so well that their stock-in-trade compounds wouldn't crystallize any more.

I had a compound from my first year of graduate school, a light yellow syrup that I kept around my whole time there, that finally decided to start growing thick faceted crystalline plates in my (and its) fourth year. This just might have had something to do with most of the compounds I made being carbohydrate derivatives, which have a fairly accurate reputation for not crystallizing except at gunpoint.

The thicker the syrup, the slower these things happen, as you'd figure. Real crystallizations out of solution, by contrast, can be pretty dramatic - a sudden whoomphing snowstorm in the flask. Do it too fast, though, and impurities will be entrained and come down with your solid. The ideal is a noticeable, steady growth, the sort of thing you leave overnight and come back for.

For the most part, we don't care too much during the synthesis if things are solids or not. It makes compounds easier to transfer and handle if they are, but you can always dissolve up a goo or sludge in some solvent. But for final drug candidates, a reproducible crystalline form is a good thing to have. It's close to crucial if you dose as a suspension. You'll get into trouble if you keep dosing poorly characterized solids, which might contain various amounts of various crystal forms along with amorphous material, all of which dissolve at different rates.

Comments (2) + TrackBacks (0) | Category: Life in the Drug Labs

August 9, 2004

Fast, Cheap, and Sometimes Even Good

Email This Entry

Posted by Derek

The New York Times has a good article this week on a trend in clinical trials that's been developing for several years - small intensive trials in humans, run before giving the go-ahead for the real thing.

It makes a lot of sense, but only when you can use it to ask (and answer) the right questions. That's where technologies like functional NMR imaging or PET scans come in, because they allow you access to in vivo data that's otherwise unobtainable. Take, for example, the studies mentioned in the Times article, where they look at glucose uptake in a solid tumor. That's a reasonable proxy for its metabolic activity, as you'd guess, and it'll give you a quick read on whether your targeted cytotoxic compound is having the effect you want.

What you'd do, normally, is dose the compound for days or weeks, then use NMR or another imaging technique to see if the tumor has changed size. That's clearly a more convincing answer, but it takes a more convincing amount of time and money to get it. And if your compound isn't having an effect on a fast marker like the tumor's metabolic rate, it's probably not going to have any effect after you dose it for two months, either. You're better off trying something else.

But if your new cancer therapy is, say, a compound that interferes with cell division, then you're not going to have that clear an answer through that glucose uptake technique. Same problem if the cancer you're treating is a more diffuse one like leukemia, because there's not such a clear tissue to image. (There are other approaches to each of those problems, naturally, but I just wanted to emphasize that each clinical trial is its own set of new problems, even inside the same general therapeutic area.)

And even when you get to the traditional large-scale trials, there's a huge need for surrogate markers that can show progress against slow-moving diseases. Glycosylated hemoglobin as a measure of efficacy in diabetes is a good validated example. It still takes quite a while to establish (weeks or months of dosing), but that's like lightning compared to the progress of diabetes complications themselves. You can do a quick assay in this field - the oral glucose tolerance test - but the improvement in that assay isn't so quick to come on.

The CNS diseases are a real clinical challenge, which is why their trials are so brutally expensive. There are hardly any markers at all for most of them. Everyone would love to have a short-term noninvasive readout for Alzheimer's, but despite years of effort, no one has quite made it. (And that's even though the definition of "short-term" in Alzheimer's is rather permissive.) Similarly, it would be good to be able to get a faster readout on depression, whose therapies are notoriously slow starters.

There's a bigger problem, though, looming over some of the generally accepted markers - what effect do they really have on long-term mortality and morbidity? Glycosylated hemoglobin has been pretty well correlated in diabetes over the long term, so that one's pretty safe. But the question is worth asking, for example, about HDL and LDL levels. Yes, things do line up well, up to a point. But does long-term administration of statin drugs, say, help as much as we'd like to hope it does over, say, twenty years? The jury's still out on that one.

Comments (5) + TrackBacks (0) | Category: Clinical Trials

August 5, 2004

The State of the State of the Art

Email This Entry

Posted by Derek

This fall will mark my fifteenth year in the drug industry. Looking back at what things were like in late 1989, there's one thing that I find striking above all the others: that very little has changed.

Fifteen years is a pretty long time in the sciences. In a field like molecular biology it's a ridiculous length of time, but their clocks will slow down on them, too: the previous span (from 1974 to 1989) was a much bigger leap for them than the last fifteen years have been. In a mature field like chemistry we don't have such dramatic interludes, but you do see the changes piling up.

But when I started doing drug discovery, it worked like this: you got a chemical lead by random screening, and a bunch of chemists started in on it, changing the structure around to see if they could improve its activity in a set of in vitro assays. The better compounds went into a rodent model of efficacy, and you checked the blood levels of compound to get an idea of its pharmacokinetics. Once you met all the criteria you'd set, you started high-dose toxicology on selected compounds in more rodents, then larger animals. And if things held up, you declared a compound to be the winner, and passed it on to the clinical development team (the scale-up chemists had already been having a look at it, to make sure that they could supply enough for longer tox and human trials).

Sound familiar? That's exactly how we do it now, most of the time. Oh, the compound you started with might have come from a combinatorial chemistry library this time (although odds are that it didn't!) And you might have some help from the molecular modeling folks along the way (but there are plenty of projects where they can't help, and plenty where they only think they can - no offense, guys.) You'll probably have a more assembly-line approach to getting some quick-and-dirty animal dosing for blood levels, too.

But these are minor changes. Are we ever going to do things really differently? Routinely start with an in silico lead compound, say? Build our compounds by mix-and-match fragment assembly instead? Find a way to predict pharmacokinetics, at least a little bit, so we don't have to run everything through mice? Get some serious clues about toxicology, so we can get off the mouse-rat-dog treadmill on the way to human trials?

The cockpit looks pretty much the same as it has for years. All we have are fancier propellers and slightly more responsive rudders. No one has invented the jet engine yet, and I wonder when someone will.

Comments (4) + TrackBacks (0) | Category: Drug Industry History

August 4, 2004

Things I Won't Work With: A Nasty Condensed Gas

Email This Entry

Posted by Derek

If you cool things down enough, you can turn almost anything into a liquid (or into a solid, if you're really insane about it.) Chemists use liquid ammonia fairly often, for example, though it's been some years now since I've needed any. People outside the field think of the aqueous solution of ammonia gas (household ammonia) when you say "liquid ammonia", but I'm talking about the pure stuff. Cool the gas down below about -33 C, and you'll condense it out to a clear liquid that's sort of like a thinner version of water.

It's easy enough to do, with an ammonia tank and a condenser full of dry ice. But once, over twenty years ago, I had a chance to see someone use one of those rigs to condense something a bit more exotic: pure hydrogen cyanide. That's another one that people confuse with the aqueous solution. But pure HCN has a fairly high boiling point, for such a small molecule, and condensing it out is no problem - as long as you have more nerve than you have sense.

The fellow doing it was down the hall from me in graduate school, and he was doing an obscure reaction which forms geminal dinitriles, themselves rather obscure compounds. (That's probably because this bug-eyed route is the best way to make 'em.) He was dressed in full suit and respirator gear, for which he'd had to get trained. Everyone else had cleared out of the lab, but someone was watching him at all times from the hallway, just in case.

I thought to myself, "When am I going to get the chance to see pure liquid HCN again?", and went down to see, ready to bail out if anything started going wrong. It looked just like ammonia, clear drops rolling down the cold condenser and dripping into the round-bottom flask below. But there was enough HCN in there to kill off the lot of us, if (im)properly handled.

I've worked with plenty of cyanide since then, and even plenty of reactions that have produced small whiffs of HCN vapor. (As I think I've mentioned, it doesn't smell as much like almonds as it's said to, in my opinion.) But I doubt very much if I've worked with enough of it to match the amount that I saw in that flask, that day - there must have been a couple of moles of it in there. A lifetime supply that was, in many senses of the word. . .

Comments (8) + TrackBacks (0) | Category: Things I Won't Work With

August 3, 2004

Silent Mutations and Noisy Ones

Email This Entry

Posted by Derek

One of the comments in my post on animal models prompts me to write a bit more on mutations. I stated that the mutant animal models that we use all have something wrong with them, but I didn't mean to imply that all mutations will do that. There are plenty of so-called "silent" mutations out there, single amino-acid changes in large proteins that basically make no difference. If you switch, say, valine for isoleucine, most of the time it's not going to hurt much (or help much.) (The reason our mutant animals have something wrong with them is that we're trying to mimic a diseased human; if they weren't defective, we wouldn't be interested.)

Billions of years of evolution have honed things down pretty well. If a protein gets altered, it's a lot easier to have a sudden loss of function than it is to have a sudden gain. It's like popping your hood and throwing rocks at your car engine - you have a better chance of damaging the thing than you have of whacking it in a way that increases your gas mileage.

I wrote about a particularly vivid example of this a couple of years ago on my old Lagniappe site. (That material seems to be succumbing to bit-rot when I try to pull it out via Google, so I'm going to rescue some of it every so often.) Here's a slightly reworked version of what I had to say about a famous Alzheimer's mutation:

One of the things that gives me the willies about biochemistry is the nonlinearity. If anyone were ever to come up with a set of equations to model all the ins and outs of a living organism, there would be all these terms - way out in the boonies of the expression - with things to the eighth and tenth powers in them.

Of course, the coefficients in front of those terms would usually be zero, or close to it, so you'd hardly know they were out there. But if anything tips over and gives a little weight to that part of the equation. . .suddenly something unexpected wakes up, and a buried biological effect comes roaring to life out of nowhere.

Here's the real-world example that got me thinking in that direction. When I used to work on Alzheimer's disease, I first learned the canonical Amyloid Hypothesis of the disease. Briefly put, at autopsy, the brains of Alzheimer's patients always show plaques of precipitated protein, surrounded by dying neurons. It's always the same protein, a 42-amino-acid number called beta-amyloid. A good deal of work went into finding out where it came from, namely, from a much larger protein (751 amino acids) called APP. That stands for "amyloid precursor protein," in case you thought that acronym was going to tell you something useful.

The ever-tempting hypothesis has been that an abnormal accumulation of beta-amyloid is the cause of Alzheimer's. This isn't the time to get into the competing hypotheses, but amyloid has always led the pack, notwithstanding a vocal group of detractors who've claimed that Alzheimer's gives you amyloid deposits, not the other way around. (Note from 2004: I wrote recently about developments in the amyloid field here and here.)

So what's APP, and what's it good for? It took all of the 1990s to answer that one, and the answers are still coming in. It's found all over the place, and seems to have a role in cellular (and nuclear) signaling. Normally, it's cleaved to give smaller protein fragments other than the 42-mer that causes all the trouble.

One of the stronger arguments for amyloid as an Alzheimer's cause came from the so-called "Dutch mutation," which is what got me to thinking. As was worked out in 1990, there's a family in Holland with a slightly different version of APP. One of the 751 amino acids is changed - where the rest of the world has glutamic acid, they have glutamine - almost the same size and shape, but lacking the acidic side chain.

So. . .there's one amino acid out of 751 that's been altered. And that's in one protein out of. . .how many? A few hundred thousand seems like the right order of magnitude for the proteome, maybe more. And what happens if you kick over that particular grain of sand on the beach? Well, what happens is, you die - with rampaging early-onset Alzheimer's (and a high likelihood of cerebral hemorrhage) before you're well into your 40s.

As it happens, that amino acid is right in the section of the protein that becomes beta-amyloid. Altering it makes it much easier for proteases to come and break the amide bond in the protein backbone, so you start accumulating beta-amyloid plaques early. Much too early. Bad luck - the change of just a few atoms - snowballs into metabolic disaster. Since then, many other mutations have been found in APP, and many of them are bad news for similar reasons.

But it's not like every amino acid substitution in some random protein causes death, of course. There are any number of silent mutations, and plenty that are relatively benign. Most of the time, those high-exponent terms out there in the mathematics sleep on undisturbed. And it's better that way.
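Those sleeping high-exponent terms can be sketched numerically. The coefficient and the exponent below are entirely made up; the point is the behavior, not the values:

```python
# A response with a tiny coefficient on a tenth-power term.
# Near x = 1 the linear part dominates completely; push x a little
# further out and the buried term "wakes up" and swamps everything.
def response(x, eps=1e-6):
    return x + eps * x**10

for x in (1, 2, 5, 10):
    print(f"x = {x:2} -> linear part {x}, hidden term {1e-6 * x**10:.3f}")
```

At x = 1 the hidden term contributes one part in a million; at x = 10 it's a thousand times larger than the linear part. Same equation the whole time - you just never noticed the term until something pushed the variable into its territory.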

Comments (3) + TrackBacks (0) | Category: Alzheimer's Disease

August 2, 2004

Research, The Right Way

Email This Entry

Posted by Derek

For today, instead of reading something over here, I'd like to send everyone over to Australian physicist Michael Nielsen. He's been writing a manifesto about how to do research, and here's the finished product. (Thanks to Chad Orzel for the link.)

I find his perspective to be very accurate indeed. Readers may recognize some themes that I've sounded over here from time to time. I'll be adding my own comments in a future post or two.

Comments (1) + TrackBacks (0) | Category: Who Discovers and Why

August 1, 2004

Furry Judges, With Tails

Email This Entry

Posted by Derek

The phrase "guinea pig" entered the language a long time ago as slang for "test animal", but I've yet to make a compound that's crossed a guinea pig's lips. Guinea pigs are still used for a few special applications, but since the beginning of my career, I've been surrounded (metaphorically!) by rats and mice.

Of the two, I prefer the mice. That's probably because they're smaller, and need correspondingly less effort from people like me to make enough drugs to dose them. The animal-handling folks prefer them for similar reasons: rats are more ornery, and they can fetch you a pretty useful bite if they're in the mood. When I was working in Alzheimer's disease, we had a small group of elderly rats that we were checking for memory problems. If that makes you think of rat-sized rocking chairs, think again. These were big ugly customers, feisty, wily critters that knew all the tricks and were no fun to deal with. Give me mice any day.

Of course, there are mice and there are mice. "Wild-type" mice are pretty hardy, but we don't use rodents captured out in the meadow. They're too variable, not to mention being loaded down with all sorts of interesting diseases. Every rodent we use in the drug industry comes from one of the big supply houses. Even our wild-types are a particular strain, identified with a catchy moniker like "C57 Black Swiss."

You're in good shape if you can use regular animals for your drug efficacy tests, but we often work on diseases which have no good rodent equivalents. People in diabetes projects, for example, often use mutant mice such as the db/db and ob/ob strains, which are genetically predisposed to put on weight. Eventually they can show some (but not all) of the signs of Type II diabetes. They can get pretty hefty - you'd better plan on making more compound if you're going to be testing things in those guys. Meanwhile, cancer researchers go through huge numbers of the so-called nude mice, a nearly hairless mutant variety with a compromised immune system. You've got to know what you're doing when you have a big group of those guys, because you can imagine how a contagious rodent disease could tear through them.

All the mutant animal lines are damaged in one form or another, since they're supposed to serve as a model of a disease. (Actually, most mutants in any animal population are damaged, since in a living system it's a lot easier to make a random change for the worse than it is to make one for the better.) They're just not as robust as the wild types. They need special handling, and they can't tolerate all the methods of compound dosing that a normal animal can. In some cases, you're restricted to the mildest, tamest vehicle solutions. (You know, the ones you can't get any of your compounds to go into.)

And there's always that nagging doubt about how valid your animal models might be. Some research areas have worked out a pretty good correlation between what works in people and what works in mice, but many of us are still stumbling around. The more innovative your work, the less of an idea you have about whether you're wasting your time. 'Twas ever thus.

Comments (15) + TrackBacks (0) | Category: Animal Testing