About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
September 30, 2004
The talk at every pharmaceutical company today was Merck's sudden withdrawal of their COX-2 inhibitor Vioxx. Merck has been having an awful time for the last year or two, and this really throws a burning tire on top of the whole heap.
They were running a study to see if Vioxx would help prevent the formation of colon polyps - evidence has been accumulating that COX-2 inhibition would be helpful in colon cancer, and Merck was going to put the idea to a rigorous test. Halfway through the three-year trial, though, things have come to an ugly halt. Not only was there no colorectal effect (at least, none so far), but the treatment group showed roughly twice the rate of serious cardiovascular side effects such as heart attacks and stroke. Doubts like these have followed Vioxx for several years now, ever since a JAMA-published analysis that seemed to suggest cardiovascular complications. But Merck contended that this earlier study was controlled against a group taking a cardioprotective drug, and was therefore not sufficient evidence. That's not the case any more.
So what's this mean? Well, in the near term, Pfizer is going to rake it in with Celebrex and its successor Bextra. And Novartis will have a more open field to introduce their coming COX-2 drug Prexige. But are these problems confined to Vioxx, or is it a COX-2 mechanism effect that's going to keep showing up? As far as I know, these problems haven't been noted with Celebrex, but it may be incumbent on Pfizer to generate new data to make sure. If the drug is clean, then Pfizer gets my vote as the luckiest drug company I have ever seen, between the unexpected benefits of Lipitor and the unintentional safety of Celebrex.
Meanwhile, Merck is going to face a horrible tsunami of lawsuits. It's 10:20 EST as I write this, and when I search Google for the word "Vioxx", the first two sponsored links on the right side of the page are from tort lawyers already trolling for clients. Lawsuit-centered web domains are already active, and I'm sure that the radio ads will be on the air tomorrow. I hate to say it, but I don't see how Merck makes it through this without firing people at some point. It's a damn shame - even Merck's fiercest competitors respect their research prowess, and I hate to see the company damaged.
And in the long term? Matthew Herper has it right over at Forbes:
"In some sense, every medicine is a ticking time bomb, and existing studies may not be enough to know what is safe and what isn't. The drug development business was already risky and expensive. But it just got even worse."
Just what we needed. Man, sometimes I think I should have answered that ad back in the 1980s and learned to drive the big rigs for fun and profit.
Category: Business and Markets | Cardiovascular Disease | Toxicology
September 29, 2004
After mentioning my cheerful outlook on drug reimportation, I should bring up the interesting case of a Pfizer executive, Peter Rost, who also thinks that the drug safety argument is a loser and is willing to say so. (But he's saying it because he thinks that Canadian reimportation would be a really great idea. This is, to put it gently, a most unusual position for a pharmaceutical executive to take.) Rost has been all over the news and in front of Congress, telling everyone with a microphone what he thinks.
He lays it right out about the ridiculous drug safety tactics, saying that he's "never, not once, heard the drug industry, regulatory agencies, the government or anyone express any concern related to safety" and that ". . .companies are testifying that imported drugs are unsafe. Nothing could be further from the truth." Hey, open up! It's good for the soul!
How is Pfizer taking this? Not too well. One of their other executives, one Chuck Hardwick, sent a letter to members of Congress saying "Dr. Rost has no qualifications to speak on importation, no responsibilities in this area at Pfizer, no knowledge of the information and analysis Pfizer has provided to the government on this issue and no substantive grasp of how importation may impact the safety of this nation's drug supply." Safety first, Chuck, never forget. We're going to go down with this one eventually, but at least we'll go down as a team, eh? Another Pfizerite, Paul Fitzhenry, says that Rost's comments "impugn the integrity" of people inside the drug industry who've made the safety argument. Well, I'd hate to impugn anyone's integrity. How about their intelligence?
Now, I don't think that the drug-safety firewall is going to crumble tomorrow (not with things like this going on). But these findings are a direct consequence of one of the only weapons my industry has in the reimportation battle: limiting the supply of drugs to Canada. The Canadian pharmacies are turning to other countries, not all of them reliable.
This will work, for a while - but is it a weapon we want to be seen using? There's a real possibility that this will create shortages of some medicines in other countries as the supply problem cascades along. Do we want everyone to watch as we turn the spigot?
Economics. Drug reimportation is an economic issue, not a safety issue. We've allowed ourselves to get price-controlled into a corner, and we need to find a graceful way out of it. But instead, we're helping to saw through the floor. . .
Category: Drug Prices
September 28, 2004
Economist Mark Kleiman, in a clearheaded post on drug reimportation, says:
"No doubt, the politicians who are campaigning to permit pharmaceutical arbitrage are demagoging the issue by failing to mention the impact on innovation. But at least the argument they make that allowing arbitrage would reduce prices to consumers is more or less correct, and actually expresses their goal. The politicians who oppose arbitrage, by contrast -- including the Bush Administration -- largely try to hide behind the safety fig-leaf. Thats an insult to the intelligence of the voters."
Oh, yes indeed. My readers know that I've been banging on that particular washtub for a long time now, much good has it done. Here we go again, once more with feeling, from someone who works right here in the drug industry:
Canadian pharmaceuticals are safe. They're just as safe as ours. The reasons that reimporting them is a bad idea are economic ones. We need the money, and we've turned the US into the only place we can make it.
A more, um, detailed presentation of that point of view (from a passel of economists) can be found here. But the politics of the problem aren't in need of expert explication. Opponents of drug reimportation are trying to beat something (cheap drugs!) with nothing (no cheap drugs!). That's always a tough sell, so they - I can't bring myself to use the pronoun "we" - are trying to use irrelevant scare tactics instead. People are catching on.
I think you can win a few short-term battles that way, at the cost of most surely losing the war. As I keep saying, the safety argument is one that can be addressed. As it will be, and where will my industry be then? Left wiping sweat from its forehead, stammering "But. . .but. . .there's actually another reason why this won't work. . ."
It'll be too late. The industry whose efforts I have devoted my adult working life to, the one that feeds my kids and keeps a roof over my head, is happily strapping itself to a barrel. It's making double-sure that the knots are firm and the cords are tight. The waterfall makes an awful, awful noise. . .
Category: Drug Prices
I don't usually miss my Monday posts, but time has been short recently. My car, the Rapacious Pharmamobile, has finally given up its last, and I need to replace it quickly so I can drive to work and help rake up the excess profits. In case you're wondering, the R. Pharmamobile was a 1989 Honda with 180,000 miles on it, which should give you some idea of the riches we enjoy.
To follow up on the last post, here are some other thoughts on performance rankings of scientists, also adapted from my old Lagniappe site. (I'll dust some of these off from time to time to get them into the categorized archives here.)
". . .there's another problem with employee rankings, one that doesn't just apply to research organizations. I first came across a statement of it while reading Bill James, who showed how it applies to baseball teams when they decide whether to bring in some veteran player to hold down a position or go with someone from triple-A. It's an over-reliance on normal distribution. (Here's a discussion of the idea.)
People have this mental picture of the classic bell curve - bulging middle, sloping down to the sides as it tails off to the few stars on the right, the few destructive losers on the left. In a departmental performance evaluation, you end up with most people getting "Meets Expectations" or the equivalent, some that rank higher, a few that rank lower. In my experience (two large drug companies,) most of the raw evaluations come back as "Meets" or higher, and it's rare that folks get initially ranked on the low end. That gets changed as the whole department comes into focus, though, because there's often this feeling that you have to rank some people low, in the same way that there have to be some star performers.
But here's the key: the performance ranking in any organization that is free to hire and fire its own employees will not fit a normal distribution. Why should it? A normal distribution is what you'd expect from a random sample, and I'll assume that most businesses don't hire or retain their employees at random. No, what you have is most likely the far right-hand side of a much larger distribution, the performance ratings of all the people you could have possibly hired for those positions.
One big factor that keeps things from being normally distributed is the entry barrier into a technical field. For the most part, you can't be stupefyingly incompetent and get a degree from a reasonably good research group at a reasonably good school, or be a total bozo and get in the door for a job interview with a decent resume and give a competent-sounding seminar. The total washouts are mostly gone by the time you place an ad in C&E News. Any of them that do send you resumes have a tougher time getting hired, and any of them that you actually hire have a tougher time being retained.
Another factor that bends the distribution is that people are actively trying to improve their rankings from year to year (well, at least some of them are.) No one's striving to slide down the list, that's for sure. The data points of a random sample aren't being told where they landed last time and given incentives to shift to the right.
So I'd say that a realistic batch of performance rankings has a majority at a "Meets Expectations" level, and the remainder stretching out toward the higher rankings. There really shouldn't be many "Below Expectations" people at all, because the whole point of a ranking like that is that they either shape up, or you ship them out.
This also points out the folly of the Jack Welch "rank 'em and yank 'em" style of performance review. You know, find your bottom 10% and fire them all. Other companies that have tried this technique (Ford, to pick a notable example) have found that it mostly sows fear and discord. And I'm sure it did at GE, too, truth be told (although some CEOs swear by it.)
The "bottom 10%" is basically identical to the 60 or 70% of your employees that are doing just fine, minus a few people having a bad review period (a different set each time,) and minus a few genuine losers. If you seriously try to fire this illusory bottom tier, you end up having to make arbitrary, meaningless distinctions on five-page HR forms in order to distinguish them. Find the real losers and heave them out, absolutely. But don't draw a ridiculous line in the sand and then jerk people around because of it. A really good manager should be able to fire someone without hiding behind a bad policy.
Category: Business and Markets
September 23, 2004
As we start to slide into fall, companies around the industry start to slide into that slough known as Performance Reviews. Depending on your calendar, this can start hitting you any time from now until February or so - it's particularly joyful when your company changes its system and you have to rank people based on, say, seven months of performance.
Researchers are a nightmare to evaluate at any time of the year. Here's something I wrote on my old site, Lagniappe, which I thought might be relevant:
". . .I should really mention one of the things that managers in research organizations would most love to measure: their employees. How good are they? How productive are they? How do they rank, from one to thirty-eight?
The problem is, there's no good way to measure any of this, not that it stops anyone from trying. Performance reviews are a notorious sinkhole for any industry, of course - ever heard of a company where people say that their system works? But it's even harder to do for research employees, because of the dice-rolling feast/famine nature of the work.
Here are a few questions that come up regularly: Who's more valuable - the person who has the idea, or the person who reduces it to practice? What if several people had the idea at about the same time? What if the person who made the best compound in the project did it more or less by accident? What if they did it just because someone else told them to? What should be rated more highly - producing a long list of inactive compounds, or a short list of really good ones? What about someone who does really fine work on a project that disappears due to unexpected toxicity? What's more worthy of a high rating - producing new compounds, or figuring out a crucial step to make enough of the ones you already have?
And so on, and so on. So, how do you rank people? By the number of compounds they produce? That biases it, at best, toward people who (for whatever reason) ended up with a chemical series that was easier to ring variations on. At worst, it tilts the rankings toward people who deliberately banged out piles of easy-to-make compounds, even though they knew that they were unlikely to be worth anything.
OK, how about ranking everyone by the activity of the compounds they made? Well, that biases it toward people who are lucky, not to get too delicate about it. At best, it can reward someone who made some of their own luck, by sticking with a good idea. But it can also reward someone who tripped over a gold nugget on their way to pick up some more lumps of asphalt.
Ranking people by what everyone else thinks of them? That can bias it toward those with outgoing personalities. People on large projects who get more exposure will tend to come out better, too, as will people whose labs are on the way to the cafeteria.
Research is just plain hard to measure, and doing it on a regular, timed basis just exacerbates the problem. We spend long periods in this business being extremely wrong before suddenly being extremely right - try adjusting for that! As far as I've been able to see, any system you use will need exceptions, corrections, qualifications - just the kind of thing that numerical ranking was designed to avoid."
Category: Business and Markets | Life in the Drug Labs
September 22, 2004
Not that long ago, my laboratory was doing its usual thing, cranking out the potential Wonder Drugs and sending them downstairs for testing. The results come back in a batch, and we all check to see how everyone's compounds fared: "Hey, how come that one's good when the other one isn't?" "Man, I'm glad I didn't make that one down there. . ."
One of the folks in my lab had a series of reactions to make her compounds that had one tricky step in it - the group that was being added on could potentially end up on two or three different spots. You'd think that would be reason enough to avoid that route, and generally, it would be. But plenty of other compounds had been made that way on the project, and everything had been fine - only one isomer was produced, for some reason. We'd become pretty complacent about that step, actually. But this time, we were doing it on a somewhat different core structure - and yep, it was different all the way through.
This one made a mixture of two compounds in the reaction, one more than the other, and figuring out which was which took some doing. After staring at it for a while, we turned the problem over to our NMR specialists. They took the thing apart with all sorts of neat spectroscopic routines and came back with an answer: we'd made two of those potential isomers this time, as we'd figured, but the one we'd been shooting for was the minor one.
Well, that's chemistry as my colleague and I are used to it. And since both of us have been around the industry a few years, we decided that she should just go on with both of them. She elaborated the molecules through to the last step, and then we sent them downstairs to be assayed for activity. What the hey - she'd made the stuff anyway, and we might as well get some use out of it. (Of course, we had no idea what the target protein would think of this rearranged layout in the molecule.)
When we got the results back a few days later, it turned out (you saw this coming) that the new "wrong" structure was about five times more active than the "right" isomer. Oh, we know what we're doing, all right, we steely-eyed drug designers. Never believe it's not so.
Category: Life in the Drug Labs
September 20, 2004
No blogging time left this evening, and tomorrow night will be blank as well. I'm giving a talk to the local section of the American Chemical Society, so I'll be off scarfing up a free meal and enlightening whoever shows up. If you turn down too much free food, the ACS revokes your membership - at least, that's how I've always understood it to work.
Instead of preparing for my talk this evening, I've been outside in the back yard observing a passing visitor to the solar system, Toutatis. I found it without much trouble at all, to my surprise, and it was intriguing (and a bit alarming) to note its position, go off to other parts of the sky for an hour, and come back to find that it had noticeably moved against the stars. I'm just glad that we're not getting any better view than we are; it's coming close enough for me as it is.
Anyway, I hope to have some time tomorrow to think about my presentation. Otherwise, I'm going to be extra coherent. . .
Category: Blog Housekeeping
The August issue of Nature Reviews: Drug Discovery has an alarming article on the attrition rates in drug development. I often get questions about these figures, and it's good to have a fresh look at the data. Among the ten largest pharma companies, in the period 1991-2000, here's the breakdown:
38% of the drugs taken into the clinic dropped out in Phase I (safety / blood levels.)
60% of those remaining failed in Phase II (basic efficacy.)
40% of the remaining candidates failed in Phase III (big, expensive efficacy.)
And 23% of the ones that made it through the clinic failed to be approved by the FDA.
You can do the math as quickly as I can: that translates to about an 11% success rate from starting in the clinic. And consider that for someone like me, back in the research labs, a successful program is one that makes it to Phase I. It's no wonder that so few medicinal chemists have ever worked on a drug that's made it all the way to market!
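For anyone who wants to check the arithmetic, here's the calculation in a few lines of Python, using just the failure rates quoted above - each phase removes its fraction of whatever survived the previous one:

```python
# Chain the quoted attrition figures phase by phase.
phase_failure = [
    ("Phase I", 0.38),
    ("Phase II", 0.60),
    ("Phase III", 0.40),
    ("FDA review", 0.23),
]

surviving = 1.0
for phase, failure_rate in phase_failure:
    surviving *= 1.0 - failure_rate
    print(f"After {phase}: {surviving:.1%} of clinical candidates remain")
# Final figure: roughly 11% make it from first-in-human to approval.
```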
The other thing to keep in mind, in light of last Friday's post, is that the money spent on these things grows terribly along the way. A failure in Phase I isn't pretty, considering the time and money spent in the preclinical period (aka: what I spend all my working life doing.) But a failure in Phase III or at FDA time is a financial disaster.
Category: Drug Development
September 19, 2004
I promise, this week the blog won't be all-NIH-all-the-time. There are plenty of other things to talk about, but if I list them, I won't get around to writing them. I've done that several times over the last two years, and I think that I'm catching on to how that part of my brain works. Blocking up on that kind of thing doesn't seem to bode well for a career as a Famous Journalist, but if I'm not mistaken, others have overcome even more difficult mental habits.
There's an analogy to an odd thing that I do in the lab. Starting in graduate school, I noticed that if I opened a solvent bottle while I was concentrating on something else, I would tend to lose the cap for it. After searching around the bench for a while, I'd find that, for some reason, I would tend to put the cap right on top of another identical solvent bottle, resting on top of its cap. This was a pretty good technique for making it completely disappear, as you can imagine.
It took a few of those, but I finally realized that this was my technique - or at least the technique of whatever region of my brain was delegated to do that kind of thing. So I took it into account. Now when I have a missing stopper or cap, I know just where to look, and most of the time I'm right. I still do it, all the time, but at least it doesn't slow me down. We'll see if this latest technique does the trick. Stick around this week and find out. . .
Category: Blog Housekeeping
September 16, 2004
OK, I couldn't resist. Let me reiterate that I completely admire the NIH's commitment to basic research; it's one of the real drivers of science in this country. But they're not a huge factor in clinical trials. Academia does more basic research than pharma; pharma does more clinical work than academia. Here are some statistics from a reader e-mail:
"As a person who was an NIH staffer (funding clinical trials, no less) and is now on the pharma side (mostly spending on manufacturing development; we will spend more on clinical trials as we get bigger), I have seen both sides.
Most of NIH spending is very far from clinical utility. Last time I checked (and it has been a while), more than 90% of NIH funds went to what most people would consider non-clinical research, e.g., studies of animals and cells, etc. (If the NIH was named by its major function, it would probably be called the National Institutes of Molecular Biology ;-) The reason NIH is able to claim that half of its money goes to 'clinical research' is that any study that involves a human or *human tissues* counts. So a bench study looking at receptors on human renal cells counts as 'clinical research.' The number of studies examining 'whole' humans is in the 5% range.
On the other hand, pharma, as you know, spends a lot of money on research with legal (protecting patent claims), manufacturing (cGMP issues, etc.) and marketing goals that don't necessarily help anyone's health.
Regarding the clinicaltrials.gov numbers, by my reckoning the 8000 NIH studies and the 2400 'industry' studies probably represent about the same investment in *therapeutic* clinical trials. If you break down the NIH trials, about 1800 (22%) are Phase I, 3000 (37%) are Phase II, 1100 (14%) are Phase III, and the rest (2150, 27%) are observational and other. (If you want to check, I did a search within the results for the appropriate phrases and subtracted from the total for the remainder). Figures for industry are 460 (19%) Phase I, 1060 (44%) Phase II, 770 (32%) Phase III, and 133 (5%) other.
In my experience each phase of clinical trials multiplies costs by about 10 times (e.g., Phase I = X; Phase II = 10X, Phase III = 100X), so the clinicaltrials.gov figures imply that the costs of Phase I, II, and III trials funded by industry are over 80% of those funded by NIH (costs are overwhelmingly driven by Phase III trials). And this is despite the close to 100% capture of NIH trials versus the unknown percentage capture of industry trials that you noted in your post."
Category: Academia (vs. Industry) | Clinical Trials | Drug Development
September 14, 2004
OK, one more on this topic before moving on to other things for a while. The Bedside Matters medblog has a better roundup of the reactions to my post than I could have done myself. And "Encephalon" there also has one of the longer replies I've seen to my initial post, worth reading in full.
I wanted to address a few of the issues that it raises. Encephalon says:
"Dr. Lowe makes his point with the sort of persuasive skill one suspects is borne of practice - I shouldn't be surprised if he has had to make his case to the unbelieving on a very regular basis. And that case is this: that pharmaceutical companies do in fact spend enormous sums of money in developing the basic science breakthroughs first made in academic labs to the point where meaningful therapeutic products (ie, '$800 mil' pills) can be held in the palms of our doctors' hands, ready to be dispensed to the next ailing patient.
So far as that claim goes, I don't think any reasonably informed individual would dispute it. . ."
It tickles me to be called "Doctor" by someone with a medical degree. On the flip side, though, it's a nearly infallible sign of personality problems when a PhD insists on the honorific. And I appreciate the compliment, but it's only fairly recently that I've had to defend this point at all; I didn't even know it was a matter of debate. The thing is, you'd expect that a former editor of the New England Journal of Medicine would be a "reasonably informed individual", wouldn't you? I don't think we can take anything for granted here. . .
He then spends a lot of time on the next point:
"It is a myth, and I would argue a more prevalent one than the myth that Big Pharma simply leaches off government-funded research, that the NIH does little to bring scientific breakthroughs to the bedside (once they have made them at the bench). . .Using arguably one of the best (databases) we've got (the NIH's ClinicalTrials.gov**) we get the following figures: of the 15,466 trials currently in the database, 8008 are registered as sponsored by NIH, 380 by 'other federal agency', 4656 by 'University/Organization', and 2422 by Industry. While I am suspicious that the designation 'university/organization' is not wholly accurate, and may represent funding from diverse sources, and while the clinical trials in the registry are by no stretch of the imagination only pharmaceutical studies, the 8388 recent trials sponsored by Federal agencies are no negligeable matter. I think Dr. Lowe will agree.""
I agree that NIH has a real role in clinical trials, but I don't think it's as large as these figures would make you think. Clinicaltrials.gov, since it's an NIH initiative, is sure to include everything with NIH funding, but there are many industry studies that have never shown up there. (And I share the skepticism about the "University" designation.) When the Grand Clinical Trial Registry finally gets going, in whatever form it takes, we can get a better idea of what's going on. I also think that if we could somehow compare the size and expense of these various trials, the Pharma share would loom larger than the absolute number of trials would indicate.
Encephalon goes on to worry that I'm denigrating basic research: "The impression a lay person would get reading Dr. Lowe's 'How it really works' is that basic science work done by the NIH is really quite trivial. I don't think he meant this. . ."
Believe me, I certainly didn't. Without basic biological studies, there would be nothing for us to get our teeth into in the drug industry. If we had to do them all ourselves, the cost of the drugs we make would be vastly greater than it is now. It's like the joking arguments that chemists and pharmacologists have in industry: "Hey, you guys wouldn't have anything to work on if it weren't for us chemists!" "Well, you'd never know if anything worked if it weren't for us, y'know!" Academia and industry are like that: we need each other.
Category: Academia (vs. Industry) | Clinical Trials | Drug Development
September 13, 2004
Here's another example of academia and industry, and how it can be hard to divide out the credit. There's a family of nuclear receptor proteins known as PPARs, a very important (and difficult to unravel) group. The whole field got started years ago, when it was noticed that some compounds had a very particular effect on the livers of rats and mice: they made the cells in them produce a huge number of organelles called peroxisomes.
Eventually, a protein was found that seemed to mediate this effect, and it was called the Peroxisome Proliferator-Activated Receptor, thus PPAR. It was thought that there might be some other similar proteins. At this point, their functions were completely unknown.
Meanwhile, off at a Japanese drug company, a class of compounds (thiazolidinediones) had been found to lower glucose in diabetic animal models. The original plan, if I recall correctly, had been to stitch together a dione compound with a Vitamin E structure, and as it turns out the reasoning behind this idea was faulty in every way. But the Japanese group had hit on a whole series of interesting structures that lowered glucose in a way that had never been seen before. No one had a clue about how they worked, but all sorts of theories were proposed, tested, and discarded.
The activity was unusual enough that many other drug companies jumped into the thiazolidinedione game. It turned out, as various companies sought out patentable chemical space, that the Vitamin-E-like side chain wasn't essential, but the thiazolidinedione head group was a good thing to have. (It's since been superseded.) The Japanese group was in the lead, with a compound that was eventually named troglitazone, but SmithKline Beecham (as it was then) and Eli Lilly weren't far behind, with rosiglitazone and pioglitazone. A number of contenders from other companies fell out of the race for various reasons. The three left standing went all the way into human trials, and no one still had any idea of how they worked.
We're up to the early 1990s now. Off in another part of the scientific world, a number of research groups were digging into PPAR biology. It looked like there were three PPARs, designated alpha, gamma, and delta (known as PPAR beta in Europe.) They all had binding sites that looked as if small molecules in the cell should fit into them, but no one had really established what those might be. All three seemed as if they might be important in pathways dealing with fatty acids, not that that narrows it down very much.
As best I can reconstruct things, in a very short period in the mid-1990s, it became clear that PPAR gamma was a big player in fat cells (adipocytes). Many labs were working on this, but two academic groups that were very much in the thick of things (and still are) were those of Bruce Spiegelman from Harvard and Ron Evans from the Salk Institute. Then a group at Glaxo Wellcome (as it was then), also doing research in the field, found out that the glitazone drugs were actually ligands for PPAR-gamma, and immediately hypothesized that it was the mechanism by which they lowered glucose. From what I've been told, Glaxo's management didn't immediately believe this, but it turned out to be right on the money. Glaxo is still a major player in the PPAR world, turning out a huge volume of both basic and applied research.
All three PPAR-gamma drugs made it to market. So, who gets the credit? It's hard enough to figure out even inside the academic sphere - the two groups I mentioned had plenty of competition here and abroad, and insights came from all over. But (as far as I can tell) none of them were the first to make the connection between PPAR-gamma and diabetes therapy. So does Glaxo get the credit? (They do have a few key patents to show for it all.)
And if we're doling out credit, who's going to line up for blame? As it happened, the very first PPAR-gamma compound to market, troglitazone, showed some unexpected liver toxicity once it found a broader audience. It was eventually pulled from the market in a hail of lawsuits. Rosiglitazone and pioglitazone (Avandia and Actos, by brand) are still out there, having survived the loss of the first compound, but not without a period of suspicion and breath-holding.
Any more troubles to share? Later PPAR drugs have shown all kinds of weird effects, including some massive clinical failures late in human trials. The money that's been made from the two on the market probably hasn't made up yet for all the cash that the industry has spent trying to figure out what's going on, and the story takes on more complexity every year. (Glaxo, for their trouble, has never made a dime off one of their own PPAR compounds.)
It's to the point now that some companies are, it seems, throwing up their hands about the whole field, while others continue to plow ahead. And by now, the number of research papers from academia will make your head hurt. PPARs seem to be involved in everything you can imagine, from diabetes to cancer to wound healing, and who knows what else. The whole thing is going to keep a lot of people busy for a long time yet. And anyone who thinks they can clearly and fairly apportion the credit, the spoils, the blame and the Bronx cheers is dreaming.
Category: Academia (vs. Industry)
September 12, 2004
My long cri de coeur last week continues to bring in a number of comments, which I appreciate. Matthew Holt of the Health Care Blog asks:
"How much money does the NIH spend on basic research and how much does the pharma business spend on it (and you can include development if you like)? I don't have these numbers but I suspect they are closer to each other than it would appear from a reader of your article who might think that it's about 90-10 on pharma's side."
Well, I hope that's not how I came across. I'm sure that more basic research goes on in academia, of course. That's what they're funded for, and what they're equipped for. Some basic work goes on in the drug industry, too, but most of our time and effort is spent on applied research. It's confusion about the differences between those two (or an assumption that the basic kind is the only kind that counts) that leads to the whole "NIH-ripoff" idea.
It's easy to get NIH's budget figures, but it's next to impossible to get the drug industry's. One good reason is that companies don't release the numbers, but there's a more fundamental problem. It would be hard to figure out even from inside a given company, with access to all the numbers, because you can easily slip back and forth between working on something that applies only to the drug candidate at hand and working on something that would be of broader use.
Some years ago, several companies (particularly some European ones) had "blue-sky" basic research arms that cranked away more or less independently of what went on in the drug development labs. I can think of Ciba-Geigy (pre-Novartis) and Bayer as examples, and I know that Roche funded a lot of this sort of thing, too. In the US, DuPont's old pharma division had a section doing this kind of thing as well. I'm not sure if anyone does this any more, though. In many cases, the research that went on tended to either be too far from something useful, or so close that it might as well be part of the rest of the company.
So without a separate budget item marked "basic research", what happens is that it gets done here and there, as necessary. I can give a fairly trivial example: at my previous company, I spent a lot of time making amine compounds through a reaction called reductive amination. I used a procedure that had been published in the Journal of Organic Chemistry, a general method to improve these reactions using titanium isopropoxide. It worked well for me, too, giving better yields of reactions that otherwise could be hard to force to completion.
The original paper on it came from a research group at Bristol-Myers Squibb. They had been looking for a way to get some of these recalcitrant aminations to go, and worked this one out. That is a small example of basic research - not on the most exalted scale, but still on a useful one. It's not as if BMS had a group that did nothing but search for new chemical reactions, though. They were trying to make specific new compounds, applied research if ever there was any, but they had to invent a better way to do it.
Meanwhile, I needed some branched amines that this reaction wouldn't give me, and there wasn't a good way to make them. I thought about the proposed mechanism of the BMS reaction and realized that it could be modified as well. Adding an organometallic reagent at the end of the process might form a new carbon-carbon bond right where I needed it. I tried it out, and after a few tweaks and variations I got it to work. As far as I could see from searching the chemical literature, no one had ever done this in this way before, and we got a lot of use out of this variation, making a list of compounds that probably went into the low thousands.
When I was messing around with the conditions of my new reaction, trying to get it to work, I was doing it with intermediate compounds from our drug discovery program, and when the reactions produced compounds I submitted them for testing against the Alzheimer's disease target we were working on. Basic research or applied? Even though there are clear differences between the two, taken as classes, the border can be fuzzy. One's blue and one's yellow, but there's green in between.
Tomorrow I'll go over a more important example - it's pretty much basic research all the way, but untangling who figured out what isn't easy. My readers who work in science will be familiar with that problem. . .
One other thing, in response to another comment: I didn't go wild about the NIH argument because I'm trying to prove that drug companies are blameless servants of the public good or something. We're businesses, and we do all kinds of things for all kinds of reasons, which vary from the altruistic to the purely venal. You know, like they do in all other businesses. Nor is it, frankly, the largest or most pressing argument about the drug industry right now.
No, the reason I took off after it is that it's so clearly mistaken. Anyone who seriously holds this view is not, in my opinion, demonstrating any qualification to be taken seriously. (And that goes for former editors of the New England Journal of Medicine, too, a position that otherwise would argue for being taken quite seriously indeed.) The "all-they-do-is-rip-off-academia" argument is so mistaken, and in so many ways, that it calls into question all the other arguments that a person advocating it might make. They are talking about the pharmaceutical industry, seriously and perhaps with great passion, but they do not understand what it does or how it works at the most basic level. Isn't that a bit of a problem? What other defects of knowledge or reasoning are waiting to emerge, if that one has found a home?
Category: Academia (vs. Industry) | Drug Development | Drug Industry History
September 9, 2004
So is this the attitude we're up against? Here's a thread on Slashdot on the clinical trial disclosure issue - titled, I note in light of yesterday's post, "Medical Journals Fight Burying of Inconvenient Research". My favorite verb again! The comments range from the insightful to the insipid (for another good reaction to the clinical trial controversy, go here.)
A comment to the original Slashdot item disparages the idea that NIH is the immediate source of all drugs, and recommends reading my site, both of which actions I appreciate. But the first response to that was:
"No, (NIH-funded labs) just do the basic research that results in the drug leads. The companies then do the expensive but scientifically easy trials and rake in all the money (and now it seems, the credit as well)."
Wrong as can be, and in several directions at once. In a comment below, blogger Sebastian Holsclaw urges that we take this kind of talk seriously because it's more widespread than we think. I'm afraid that he might be right. The problem is that many people don't seem to understand what it is that people like me do for a living. I think that there must be plenty who don't even grasp how science works in general. Allow me to go on for a while to explain the process - I'd appreciate any help readers can provide in herding the sceptics over to read it.
Try this: If Lab C discovers that the DooDah kinase (a name whose actual use I expect any day now) is important in the cell cycle, and Lab D then profiles its over-expression in various cancer cell lines, you can expect that drug companies will take a look at it as a target. Now, the first thing we'll do is try to replicate some of the data to see if we believe it. I hope that I'm not going to shock anyone by noting that not all of these literature reports pan out.
But let's assume that they do this time, making DooDah a possible cancer target. What then? If we decide that the heavy lifting has been done by the NIH-funded labs C and D, then what do we have so far? We have a couple of papers in the Journal of Biological Chemistry (or, if the authors are really lucky, Cell) that, put together, say that DooDah kinase is a possible cancer target. How many terminally ill patients will be helped by this, would you say? Perhaps they can read about these interesting in vitro results on their deathbeds?
What will happen from this point? Labs C or D may go on to try to see what else the kinase interacts with and how it might be regulated. What they will not do is try to provide a drug lead, by which I mean a lead compound, a chemical starting point for something that might one day be a drug. That's not the business these labs are in. They're not equipped to do it and they don't know how.
(Note added after original post): This is where the drug industry comes in. We will try to find such a lead and see if we can turn it into a drug. If you believe that all of what follows still belongs to the NIH because they funded the original work on the kinase, then ask yourself this: who funded the work that led to the tools that Labs C and D used? What about Lab B, who refined the way to look at the tumor cell lines for kinase activity and expression? Or Lab A, the folks that discovered DooDah kinase in the first place twenty-five years ago, but didn't know what it could possibly be doing? These things end up scattered across countries and companies. And all of these built on still earlier work, as all the work that comes after what I describe will build on it in turn. That's science, and it's all connected.
Here in a drug company, we will express the kinase protein - and likely as not we'll have to figure out on our own how to produce active enzyme in a reasonably pure form - and we'll screen it against millions of our own compounds in our files. We'll develop the assay for doing that, and as you can imagine, it's usually quite different than what you'd do by hand on the benchtop. Then we'll evaluate the chemical structures that seemed to inhibit the kinase and see what we can make of them.
Sometimes nothing hits. Sometimes a host of unrelated garbage hits. For kinases, these days, neither of those is usually the case - owing to medicinal chemistry breakthroughs achieved by various drug companies, let me add. So if we get some usable chemical matter, then I and my fellow med-chemists take over, modifying the initial lead to make it more potent, to increase its blood levels and plasma half-life when dosed in animal models, to optimize its clearance (metabolism by the liver, etc.), and to make it selective for only the target (or targets) we want it to hit. Often there are toxic effects for reasons we don't understand, so we have to feel our way out of those with new structures, while preserving all the other good qualities. It would help a great deal if the compounds exist in a form that's suitable for making into a tablet, and if they're stable to heat, air, and light. They need to be something that can be produced by the ton, if need be. And at the same time, these all have to be structures that no one else has ever described in the history of organic chemistry. To put it very delicately, not all of these goals are necessarily compatible.
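The multi-parameter balancing act described above can be caricatured in a few lines of code. This is purely illustrative - the compound names, properties, and thresholds are all invented - but it shows why optimization is hard: a candidate has to clear every hurdle simultaneously, not just one.

```python
# Toy sketch of multi-objective lead optimization. All names and
# numbers are hypothetical; real programs track dozens of properties.
candidates = [
    # name, potency (IC50 in nM, lower is better), selectivity fold
    # over off-targets, plasma half-life in hours, observed toxicity
    {"name": "lead-1",    "ic50_nM": 250, "selectivity": 100, "t_half_h": 0.5, "toxic": False},
    {"name": "analog-7",  "ic50_nM": 12,  "selectivity": 8,   "t_half_h": 6.0, "toxic": False},
    {"name": "analog-22", "ic50_nM": 9,   "selectivity": 300, "t_half_h": 5.0, "toxic": True},
    {"name": "analog-41", "ic50_nM": 15,  "selectivity": 250, "t_half_h": 4.0, "toxic": False},
]

def passes(c):
    """Every criterion must hold at once -- that's the hard part."""
    return (c["ic50_nM"] <= 20
            and c["selectivity"] >= 100
            and c["t_half_h"] >= 2.0
            and not c["toxic"])

survivors = [c["name"] for c in candidates if passes(c)]
print(survivors)  # only analog-41 clears every bar
```

Each failed candidate here fails for a different reason - too weak, too promiscuous, too toxic - which is a fair cartoon of how real lead series behave.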
I would love to be told how any of this comes from the NIH.
Now the real work begins. If we manage to produce a compound that does everything we want, which is something we can only be sure of after trying it in every model of the disease that we trust, then we put it into two-week toxicity testing in animals. Then we test in more (and larger) animals. Then we dose them for about three months. Large whopping batches of the compound have to be prepared for all this, and every one of them has to be exactly the same, which is no small feat. If we still haven't found toxicity problems, which is a decision based on gross observations, blood chemistry, and careful microscopic examination of every tissue we can think of, then the compound gets considered for human trials. We're a year or two past the time we picked the compound by now, depending on how difficult the synthesis was and how tricky the animal work turned out to be. No sign of the NIH.
The regulatory filing for an Investigational New Drug needs to be seen to be appreciated. It's nothing compared to the final filing (NDA) for approval to market (we're still years and years away from that at this point), but it's substantial. The clinical trials start, cautiously, in normal volunteers at low doses, just to see if the blood levels of the compound are what we think, and to make sure that there's no crazy effect that only shows up in humans. Then we move up in dose, bit by bit, hoping that nothing really bad happens. If we make it through that, then it's time to spend some real time and money in Phase II.
Sick patients now take the drug, in small groups at first, then larger ones. Designing a study like this is not easy, because you want to be damn sure that you're going to be able to answer the question you set out to. (And you'd better be asking the right question, too!) Rounding up the patients isn't trivial, either - at the moment, for example, there are not enough breast cancer patients in the entire country to fill out all the clinical trials for the cancer drugs in development to treat it. Phase II goes on for years.
If we make it through that, then we go on to Phase III: much, much larger trials under much more real-world conditions (different kinds of patients who may be undergoing other therapy, etc.) The amount of money spent here outclasses everything that came before. You can lose a few years here and never feel them go by - the money that you're spending, though, you can feel. And then, finally, there's regulatory approval and its truckload of paperwork and months/years of further wrangling and waiting. The NIH does not assist us here, either.
None of this is the province of academic labs. None of it is easy, none of it is obvious, none of it is trivial, and not one bit of it comes cheap. We're spending our own money on the whole thing, betting that we can make it through. And if the idea doesn't work? If the drug dies in Phase II, or, God help us all, in Phase III? What do we do? We eat the expense, is what we do. That's our cost of doing business. We do not bill the NIH for our time.
And then we go do it again.
Category: Academia (vs. Industry) | Clinical Trials | Drug Development
September 8, 2004
I haven't been covering all the twists of the clinical-trial-disclosure story, because there have been so many of them. The drug industry is proposing its own plan, various companies are jumping out with theirs, the big medical journals have another one, and it won't be long before Congress sticks its oar in, too. Clearly there's still some wrangling to come - but equally clearly, we're going to get some sort of meaningful clinical trial data repository.
And as I've blogged here, I don't necessarily have a problem with that, although some of the details concern me. My problem, speaking as someone who pays his mortgage with ill-gotten loot from the rapacious drug industry, is with how we've handled the whole thing: poorly.
The verb that almost every story has used is "bury." The drug makers will no longer be able to bury their failed trials, the buried data will now have to be made public, and so on and so on. That's right, we take the data and stick it in a hollow tree stump. You would never know that every clinical trial in the US has to be registered with the FDA (or the equivalent authority in the case of offshore studies.) And you'd never guess that if we want the FDA to act, we have to submit all our clinical data, bad and good.
(Now, a situation where we could indeed use more transparency is when a trial is run, but the company decides that the results weren't good enough to support some new FDA action (a labeling extension, most of the time.) Then the results don't see the light of day, although I think that they should. But even then, the FDA knows that a trial was run.)
Where has my industry been while we've been pummeled in the press? Issuing press releases that nobody believes or even reads? Our industry organization's home page is a sinkhole of grinning publicity head shots and soft-focus stock pictures of cute babies. Find someone who can stand to look at it for two minutes, and I'll show you someone with a stronger stomach than I have. Why isn't our side of the story getting out?
Category: Clinical Trials
September 7, 2004
Update: (Much) more on this topic here.
Readers are already asking me if I've read Marcia Angell's new book, "The Truth About the Drug Companies." I think that some of you are trying to do me in, hoping I'll completely throw a piston rod or something. It's a real possibility. Angell's take on my industry profoundly irritates me. Here's Janet Maslin's New York Times review of the book from yesterday. She finds the book to be "tough, persuasive, and troubling" although she says that Angell is "likely to be on the receiving end of some angry rebuttals."
I'm glad to pitch in on that worthy effort, but I'm going to try to do it in a larger (and larger-paying!) forum. The thing is, the drug industry deserves some criticism. I've handed out a bit myself. Maslin's review points out Angell's angry words about Claritin/Clarinex and Prilosec/Nexium - hey, join the flippin' club, Marcia. But in this kind of book, the worthwhile stuff gets buried in all the flying horse manure.
To pick just one road apple, Angell is a great fan of the "All The Drug Companies Do Is Rip Off NIH" line - which, I can tell you, most folks in the drug industry have never heard of. (And you should see the expressions on their faces when they do.) There are people who call for drug companies to immediately give up every penny they've made on any marketed drug that had anything to do with an NIH grant. I've spoken about this before, and I'm sure that I will again, but for now, I have just one question:
Can we get reimbursed for all the ones that didn't work?
Category: Press Coverage | Why Everyone Loves Us
September 2, 2004
What sort of markets breed multiple therapies? The ones with the largest number of potential (paying) patients, for one thing, which shouldn't surprise anyone. Fortunately, that also corresponds pretty well with the markets that could be better served with another drug. The success of the first compound in a new market shows that it can be done, and proves that there's money to be made, and that attracts more companies to the field. The path has been cleared, up to a point - but remember, all the following compounds have to go through the same degree of efficacy and safety testing as the first one did, although the later entries at least know, broadly, how to set up their clinical trials.
But entering an existing market is no guarantee of riches. There are no guarantees of riches. Look at Bristol-Myers Squibb and their statin, Pravachol, which has been weighed in the balance against Lipitor and been found wanting (with BMS paying for the whole process). Now what? You can be sure that AstraZeneca is beavering away, trying to show that their big statin hope (Crestor) has even more of the effects that Lipitor has shown. It'll need to.
Or look at the erectile dysfunction field. Pfizer showed that it could be a moneymaker with Viagra, and don't believe for a minute that this was obvious beforehand. There were plenty of sceptics, doubtless some of them within Pfizer itself. But Viagra's success attracted a lot of interest around the industry, since many companies had a stable of potential PDE inhibitor leads. Now Bayer (partnered with GlaxoSmithKline) and Icos (partnered with Lilly) are in the field with Levitra and Cialis, respectively. A glance at your e-mail spam will have already familiarized you with both compounds, in case you've somehow missed the massive advertising campaigns.
The hope was that the new compounds, which differ from Viagra (and each other) in side effects, onset, and duration of action, would expand the market even further, to people who had never tried Viagra. But as far as anyone can see, neither of them has been as big a hit as the companies involved might have hoped for. It looks like the size of the market is still roughly the same, only now there are three compounds beating each other up inside the same box. The promotional costs for everyone involved are not trivial, and they would have been a lot easier to bear if a lot more patients had decided to try the therapies for the first time. It doesn't seem to have happened, at least not yet. Those are the breaks. We make money in this business, true, but if you think it comes easy, well, come on down with your leaf blower and scoop some up.
Category: "Me Too" Drugs | Cardiovascular Disease
September 1, 2004
Tonight, a few varied links from around the blogging world, which only serve to remind me that I need to reconstitute my shattered blogroll:
Via Chad Orzel I read this note from Preposterous Universe on publication of clinical trial data, and on the general problem of what to do with negative results. I know that there have been attempts to start journals in chemistry to allow an outlet for these, but I don't think that any of them have worked out.
Meanwhile, over at DB's MedRants, he has a good piece on drug reimportation from a practicing physician's point of view:
"The Canadian "solution" makes for good politics, but bad policy. This "solution" is destined to fail. We need higher level thinking to better understand pharmaceutical costs and our resultant expenditures. The "wonder drugs" are not created by spontaneous combustion. They result from expensive research.
Physicians need to understand newer drugs very well. We need to understand when an expensive drug is a better alternative, and when a cheaper generic works as well. We need the NIH (and associated Institutes) to sponsor important drug studies. Relying on the pharmaceutical industry to fund drug studies seems cost effective in the short run, but from a long term perspective, such studies are rarely designed to answer the important cost questions."
Readers will recall my attempts to warn ImClone shareholders of the risks of holding their stock at its current levels. Over at the Motley Fool, another attempt is made:
"But the thing that gets me about ImClone is that the company is valued as though Erbitux has already been a smashing success, as if it has already received approval for other therapies currently in trial such as earlier-stage colorectal cancers and head and neck cancers, extending its on-label marketing reach. At an enterprise value of $4.1 billion, ImClone has grossly improved both its operating and financial positions from the days of scorn and scandal when Sam Waksal ran the show. Even if the annual domestic sales for Erbitux hit the $1 billion mark -- blockbuster status, if you will -- ImClone garners only 39% of that amount, or $390 million. That's revenues, and yet the company's stock is priced 10 times that high today."
All true. But good luck getting those points across - it's like trying to reason with house cats. Turning from those to dogs, the world's most famous combination dog-breeding inorganic chemist has this to say about writing review articles covering the chemical literature:
"Why I do them I don't know. They're exhausting, I get no real feeling of satisfaction out of them, the remuneration doesn't begin to repay the effort involved and while the reader may appreciate the compliation of the state of the art, no credit accrues to the author. I suppose that since publications on original research are frowned on by my company, I do reviews to pad my resume. People can't read, but they can count."
True again. We have an easier time publishing original research, but only after the project is either ascending into the heavens of the market (which doesn't happen often enough, I can tell you) or is a complete dead letter. "The only thing we can do with that," we say, poking at the remains, "is to publish it."
Category: Business and Markets | The Scientific Literature