About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
March 31, 2011
After my post the other day on the NIH neurological disease effort, I heard from Rebecca Farkas there, who's leading the medicinal chemistry effort on the program. She's glad to get feedback from people in the industry, and in fact is inviting questions and comments on the whole program. Contact her at farkasr-at-ninds-dot-nih-dotgov (perhaps putting the address in that form will give the spam filters at NIH a bit less to do than otherwise).
She also sends word that they'll be advertising soon for a Project Manager position for this effort, and is looking for suggestions on how to reach the right audience for a good selection of candidates. This post might help a bit, but she's interested in suggestions on where to advertise and who to contact for good leads.
Category: Drug Development | The Central Nervous System
Venture-capital guy Bruce Booth has a provocative post, based on experience, about how reproducible those papers are that make you say "Someone should try to start a company around that stuff".
The unspoken rule is that at least 50% of the studies published even in top tier academic journals – Science, Nature, Cell, PNAS, etc… – can’t be repeated with the same conclusions by an industrial lab. In particular, key animal models often don’t reproduce. This 50% failure rate isn’t a data free assertion: it’s backed up by dozens of experienced R&D professionals who’ve participated in the (re)testing of academic findings. This is a huge problem for translational research and one that won’t go away until we address it head on.
Why such a high failure rate? Booth's own explanation is clearly the first one to take into account - that academic labs live by results. They live by publishable, high-impact-factor-journal results, grant-renewing tenure-application-supporting results. And it's not that there's a lot of deliberate faking going on (although there's always a bit of that to be found), as much as there is wishful thinking and running everything so that it seems to hang together just well enough to get the paper out. It's a temptation for everyone doing research, especially tricky cutting-edge stuff that fails a lot of the time anyway. Hey, it did work that time, so we know that it's real - those other times it didn't go so smoothly, well, we'll figure out what the problems were with those, but for now, let's just write this stuff up before we get scooped. . .
Even things that turn out to be (mostly) correct often aren't that reproducible, at least, not enough to start raising money for them. Booth's advice for people in that situation is to check things out very carefully. If the new technology is flaky enough that only a few people can get it to work, it's not ready for the bright lights yet.
He also has some interesting points on "academic bias" versus "pharma bias". You hear a lot about the latter, to the point that some people consider any work funded by the drug industry to be de facto tainted. But everyone has biases. Drug companies want to get compounds approved, and to sell lots of them once that happens. Academic labs want to get big, impressive publications and big, impressive grants. The consequences of industrial biases and conflicts of interest can be larger, but if you're working back at the startup stage, you'd better keep an eye on the academic ones. We both have to watch ourselves.
Category: Academia (vs. Industry)
March 30, 2011
Most interesting - here's the FDA's latest statement on Makena, in response to KV Pharmaceuticals sending letters to compounding pharmacies telling them to stop providing the drug, now that they have regulatory approval and market exclusivity:
. . .Because Makena is a sterile injectable, where there is a risk of contamination, greater assurance of safety is provided by an approved product. However, under certain conditions, a licensed pharmacist may compound a drug product using ingredients that are components of FDA approved drugs if the compounding is for an identified individual patient based on a valid prescription for a compounded product that is necessary for that patient. FDA prioritizes enforcement actions related to compounded drugs using a risk-based approach, giving the highest enforcement priority to pharmacies that compound products that are causing harm or that amount to health fraud.
FDA understands that the manufacturer of Makena, KV Pharmaceuticals, has sent letters to pharmacists indicating that FDA will no longer exercise enforcement discretion with regard to compounded versions of Makena. This is not correct.
In order to support access to this important drug, at this time and under this unique situation, FDA does not intend to take enforcement action against pharmacies that compound hydroxyprogesterone caproate based on a valid prescription for an individually identified patient unless the compounded products are unsafe, of substandard quality, or are not being compounded in accordance with appropriate standards for compounding sterile products. As always, FDA may at any time revisit a decision to exercise enforcement discretion.
The agency does not quite make clear what the "unique situation" might be, although they do mention the amount of work done by NIH-funded researchers that was part of the approval package. The FDA has, of course, no authority on pricing - but they do have other means at their disposal, and this is one of them. KV must be wondering at this point what, exactly, the phrase "market exclusivity" might mean. (The answer, for better or worse, is that it, and other statutory language, means whatever the regulatory authorities want it to mean, at least until something goes to the courts. Then it means whatever the courts want it to mean).
Overall, I think that this is a good thing, since (as I've said before) I think that the law in this case is providing a bit too much incentive, considering the relatively small risks involved in bringing hydroxyprogesterone caproate into the modern regulatory world. It worries me, though, that the FDA is making it so explicit that they plan to pick and choose which laws to enforce and how strictly they're going to enforce them. But honestly, it's always been this way, and a no-exceptions letter-of-the-law approach leads to craziness of its own. In this case, I think that clarifying the hazards of pushing things as hard as they can possibly be pushed will help make future business plans in this area a bit more realistic.
Category: Drug Prices | Regulatory Affairs
Ah, insider trading. It's the province of Wall Street types in really expensive shirts, right? Like in the movies? Well, read on.
Even the most clueless know that you're not supposed to trade on material nonpublic information, and the only really fuzzy part is what constitutes material information. A lawyer once told me that if you're an employee of a company, material information is "anything that makes you think about trading the stock". That's a pretty intelligent rule, and one that the recent Matrixx Supreme Court decision would seem to have reaffirmed. If someone could think it's nonpublic material information, odds are that it is.
In the drug business, the hottest potatoes in this category are the results of clinical trials and FDA decisions. People (a very short, well-defined, and well-paperworked list of people) inside a given company know the first news before anyone else, and people inside the FDA get to hear about the second. And there is no way that you can act on such information legally before it's released. Those tempted to try realize that, of course, and act accordingly.
They do, in fact, what Cheng Yi Liang (a chemist, regrettably) and his son Andrew Liang were accused yesterday of doing since 2006: they used the accounts of at least seven other people to trade on knowledge of FDA approval decisions, pulling in over three million dollars in the process. The single biggest winner (over $1 million) appears to have been front-running the surprise approval of Vanda Pharmaceuticals' Fanapt in 2009. It wouldn't surprise me if this was the one that blew up the whole business. That was such an unexpected move by the FDA (after which the stock went up by a factor of six) that the SEC must have gone back and carefully checked to see if anyone had been building up a position beforehand.
Liang got in on most of the big percentage moves of the last few years: Mannkind, Momenta, Pharmacyclics and many others, all small companies whose stocks saw some major action in both directions. If you want more details, here's the SEC complaint (PDF). It's a blueprint for getting caught, I should add. The various friend-and-family brokerage accounts mostly listed Liang's phone numbers as contact information, and almost always transferred money to an account held by Liang and his wife. The trading was done (one account right after the other) from IP addresses associated with his home account or voice lines billed to his name - this for accounts like the one ostensibly held by his 84-year-old mother back in China. Honestly, ten minutes after the SEC got suspicious about this guy and started checking him out, they must have known that they had him by the valuable body parts. It was really just a matter of time - well, time and greed.
Interestingly, Liang worked for the FDA for ten years before he seems to have decided to cash in. It would be interesting to know what went on, but my guess is that it's a familiar story. I think that he watched these decisions being made, watched the stocks jump around, thought about the profits to be made, and didn't act on those desires. Until one day he finally did - and nothing happened. So he probably told himself that he got away with it that time, and really shouldn't do that again for fear of getting caught - until he did it again, and didn't get caught. By this time, from the accounts you read of people in such situations, the hook is well and truly set. There may be a few people who are philosophical enough to take a set amount of money and walk away, but I'll bet that they're mighty scarce compared to the number of people who can't keep themselves from riding the train until, to their surprise, it suddenly pulls into a station.
Category: Business and Markets | Regulatory Affairs | The Dark Side
March 29, 2011
Man, am I getting all kinds of comments (here and by e-mail) about my views on modeling, QSAR, and the like. I thought it might be helpful for me to clarify my position on these things.
First off, structure. It's a valuable thing to have. My comments on the recent Nature Reviews Drug Discovery article were not meant to suggest otherwise, just to point out that the set of examples the authors picked to make this point was (in my view) flawed. It's actually surprisingly hard to come up with good comparison sets that isolate the effect of having structural information on the success of drug discovery projects. There are too many variables, and too many of them aren't independent. But just because a question (does having structural information help, overall?) is hard to answer doesn't mean that the answer is "no".
As an aside, since I've talked here about my admiration for fragment-based approaches, my own opinion should have been pretty clear already. Doing fragment-based drug discovery without good structural information looks to be very hard indeed.
Now, that said, there's structure and there's structure. Like every other tool in our kit, this one can be used well or used poorly. I think that fragment projects (to pick one example) get a lot of bang-for-the-buck out of structural data, and at the opposite end of the scale are those projects that only get good X-ray data after they've sent their compound to the clinic. No, wait, let me take that back. In those cases, the structure did no good, but it also did no harm. At the true opposite end of the scale are the projects where having structural data actually slowed things down. That's not frequent, but it does happen. Sometimes you have solid data, but for one reason or another the X-ray isn't corresponding to what's happening in real life. And sometimes this kicks in when medicinal chemists try to make too much out of less compelling structural data, just because it's all they have.
Now for in silico techniques. I have a similar attitude towards modeling of all kinds, but at one further remove than physical structure data. That is, I think it can be used well or used poorly, but I think that (for various reasons) the chances of using it poorly are somewhat increased. One reason is that modeling can be very hard to do well, naturally. And at the same time, tools with which to model conformations, docking, and so on are pretty widely available, which leads to a fair amount of work from people who really don't know what they're doing. Another reason is that the validity of any given model is of limited scope, as is the case with any mental construct that we have about what our molecules are doing, whether we used a software package or waved our hands around in the air. The software-package version of some binding model is more likely to have a wider range of usefulness than the hand-waving one, but they'll both break down at some point as you explore a range of compounds.
The key then is to figure out as quickly as possible if the project you're working on would be enhanced by modeling, or if such modeling would be merely ornamental, or even harmful. And that's not always easy to do. Any reasonable model is going to need a few iterations to get up to speed, generally requiring some specific compounds to be made by the chemists, and if you're running a project, you have to decide how much effort is worth spending to do that. You don't want to end up endlessly trying to refine the model, but at the same time, that model could turn out to be very useful after a few more turns of the crank. Which way to go? The same decisions apply, naturally, to the folks standing in front of the hoods, even without any modeling. How many more compounds are worth making in a given series? Would that effort be better used somewhere else? These calls are why we're paid the approximation of the big bucks.
So, while I don't think that modeling is an invariable boon to a project, neither do I think it's a waste of time. Sometimes it's one, and sometimes it's the other, and most of the time it's a mix of each - just like ideas at the bench. When modeling works, it can be a real help in sending the chemists down a productive path. On the other hand, you can certainly run a whole project with no modeling at all, just good old-fashioned analoging from the labs. It's the job of modelers to make the first possibility more likely and more attractive, and the job of the chemists and project managers to be open to that (and to be ready to emphasize or de-emphasize things as they develop).
This point of view seems reasonable to me (which is why I hold it!) But it also exposes me to complaints from people at both ends of the spectrum. I'm a lot more skeptical of in silico approaches than are many true believers, but I don't want to make the mistake of dismissing them outright.
Category: In Silico
Here's an interesting funding opportunity from NIH:
Recent advances in neuroscience offer unprecedented opportunities to discover new treatments for nervous system disorders. However, most promising compounds identified through basic research are not sufficiently drug-like for human testing. Before a new chemical entity can be tested in a clinical setting, it must undergo a process of chemical optimization to improve potency, selectivity, and drug-likeness, followed by pre-clinical safety testing to meet the standards set by the Food and Drug Administration (FDA) for clinical testing. These activities are largely the domain of the pharmaceutical industry and contract research organizations, and the necessary expertise and resources are not commonly available to academic researchers.
To enable drug development by the neuroscience community, the NIH Blueprint for Neuroscience Research is establishing a ‘virtual pharma’ network of contract service providers and consultants with extensive industry experience. This Funding Opportunity Announcement (FOA) is soliciting applications for U01 cooperative agreement awards from investigators with small molecule compounds that could be developed into clinical candidates within this network. This program intends to develop drugs from medicinal chemistry optimization through Phase I clinical testing and facilitate industry partnerships for their subsequent development. By initiating development of up to 20 new small-molecule compounds over two years (seven projects were launched in 2011), we anticipate that approximately four compounds will enter Phase 1 clinical trials within this program.
My first thought is that I'd like to e-mail that first paragraph to Marcia Angell and to all the people who keep telling me that NIH discovers most of the drugs on the market. (And as crazy as that sounds, I still keep running into people who are convinced that that's one of those established facts that Everyone Knows). My second thought is that this is worth doing, especially for targeting small or unusual diseases. There could well be interesting chemical matter or assay ideas floating around out there, looking for the proper environment to have something made of them.
My third thought, though, is that this could well end up being a real education for some of the participants. Four Phase I compounds out of twenty development candidates - it's hard to say if that's optimistic or not, because the criteria for something to be considered a development candidate can be slippery. And that goes for the drug industry too, I hasten to add. Different organizations have different ideas about what kinds of compounds are worth taking to the clinic, and those criteria vary by disease area, too. (Sad to say, they can also vary by time of the year and the degree to which bonuses are tied to hitting number-of-clinical-candidate goals, and anyone who's been around the business a while will have seen that happen, to their regret).
It'll be interesting to see how many people apply for this; the criteria look pretty steep to me:
Applicants must have available small-molecule compounds with strong evidence of disease-related activity and the potential for optimization through iterative medicinal chemistry. Applicants must also be able to conduct bioactivity and efficacy testing to assess compounds synthesized in the development process and provide all pre-clinical validation for the desired disease indication. . .This initiative is not intended to support development of new bioactivity assays, thus the applicant must have in hand well-characterized assays and models.
Hey, there are small companies out there that don't come up to that standard. To clarify, though, the document does say that "Evaluation of the approach should focus primarily on the rationale and strengths/weaknesses of proposed bioactivity studies and compound "druggability," since all other drug development work (e.g., medicinal chemistry, PK/tox, phase I clinical testing) will be designed and implemented by NIH-provided consultants and contractors after award", which must come as something of a relief.
What's interesting to me, though, is that the earlier version of this RFA (from last year) had the following language:
The ultimate goals of this Neurotherapeutics Grand Challenge are to produce at least one novel and effective drug for a nervous system disorder that is currently poorly treated and to catalyze industry interest in novel disease targets by demonstrating early-stage success.
That's missing this time around, which is a good thing. If they're really hoping for a drug to come out of four Phase I candidates in poorly-treated CNS disorders, then I'd advise them to keep that thought well hidden. The overall attrition rate in the clinic in CNS is somewhere around (and maybe north of) 90%, and if you're going to go after the tough end of that field it's going to be even steeper.
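As a back-of-the-envelope check (my arithmetic, not anything from the RFA), here's what that attrition figure implies for four Phase I entrants, under the simplifying assumption that the candidates fail independently:

```python
# Probability that at least one of n clinical candidates makes it through,
# given an overall clinical attrition rate. The ~90% figure is the rough
# CNS number cited above; independence between candidates is an assumption.

def p_at_least_one_approval(n_candidates: int, attrition: float) -> float:
    """P(>=1 success) when each candidate independently fails with
    probability `attrition`."""
    return 1.0 - attrition ** n_candidates

# Four Phase I compounds at 90% attrition:
print(p_at_least_one_approval(4, 0.90))  # ~0.34

# At the "tough end" of CNS, say 95% attrition:
print(p_at_least_one_approval(4, 0.95))  # ~0.19
```

In other words, even granting the program its four Phase I compounds, the odds of a single approved drug coming out the other end are only about one in three, and worse for hard indications. Hence my advice about keeping that particular hope well hidden.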
Category: Academia (vs. Industry) | Drug Development | The Central Nervous System
March 28, 2011
A friend on the computational/structural side of the business sent along this article from Nature Reviews Drug Discovery. The authors are looking through the Thomson database at drug targets that are the subject of active research in the industry, and comparing the ones that have structural information available to the ones that don't: enzyme targets (with high-resolution structures) and GPCRs (without them). They're trying to see if structural data is worth enough to show up in the success rates (and thus the valuations) of the resulting projects.
Overall, the Thomson database has over a thousand projects in it from these two groups, a bit over 600 from the structure-enabled enzymes and just under 500 GPCR projects. What they found was that 70% of the projects in the GPCR category were listed as "suspended" or "discontinued", but only 44% of the enzyme projects were so listed. In order to correct for probability of success across different targets, the authors picked ten targets from each group that have led, overall, to similar numbers of launched drugs. Looking at the progress of the two groups, the structure-enabled projects are again lower in the "stopped" categories, with corresponding increases in discovery and the various clinical phases.
You have to go to the supplementary info for the targets themselves, but here they are: for the enzymes, it's DPP-IV, BCR-ABL, HER2 kinase, renin, Factor Xa, HDAC, HIV integrase, JAK2, Hep C protease, and cathepsin K. For the receptor projects, the list is endothelin A receptor, P2Y12, CXCR4, angiotensin II receptor, sphingosine-1-phosphate receptor, NK1, muscarinic M1, vasopressin V2, melatonin receptor, and adenosine A2A.
Looking over these, though, I think that the situation is more complicated than the authors have presented. For example, DPP-IV has good structural information now, but that's not how people got into the area. The cyanopyrrolidine class of inhibitors, which really jump-started the field, was made by analogy to a reported class of prolyl endopeptidase inhibitors (BOMCL 1996, p. 1163). Three years later, the most well-characterized Novartis compound in the series was being studied by classic enzymology techniques, because it still wasn't possible to say just how it was binding. But even more to the point, this is a well-trodden area now. Any DPP-IV project that's going on now is piggybacking not only on structural information, but on an awful lot of known SAR and toxicology.
And look at renin. That's been a target forever, structure or not. And it's safe to say that it wasn't lack of structural information that was holding the area back, nor was it the presence of it that got a compound finally through the clinic. You can say the same things about Factor Xa. The target was validated by naturally occurring peptides, and developed in various series by classical SAR. The X-ray structure of one of the first solid drug candidates in the area (rivaroxaban) bound to its target, came after the compound had been identified and the SAR had been optimized. Factor Xa efforts going on now also are standing on the shoulders of an awful lot of work.
In the case of histone deacetylase, the first launched drug in that category (SAHA, vorinostat) had already been identified before any sort of X-ray structure was available. Overall, that target is an interesting addition to the list, since there is actually a whole series of them, some of which have structural information and some of which don't. The big difficulty in that area is that we don't really know what the various roles of the different isoforms are, and thus how the profiles of different compounds might translate to the clinic, so I wouldn't say that structural data is helping with the rate-determining steps in the field.
On the receptor side, I also wouldn't say that it's lack of structural information that's necessarily holding things back in all of those cases, either. Take muscarinic M1 - muscarinic ligands have been known for a zillion years. That encompasses fairly selective antagonists, and hardly-selective-at-all agonists, so I'm not sure which class the authors intended. If they're talking about antagonists, then there are plenty already known. And if they're talking about agonists, I doubt that even detailed structural information would help, given the size of the native ligand (acetylcholine).
And the vasopressin V2 case is similar to some of the enzyme ones, in that there's already an approved drug in this category (tolvaptan), with several others in the same structural class chasing it. Then you have the adenosine A2A field, where long lists of agonists and antagonists have been found over the years, structure or not. The problem there has been finding a clinical use for them; all sorts of indications have been chased over the years, a problem that structural information would not have helped with in the least.
Now, it's true that there are projects in these categories where structure has helped out quite a bit, and it's also true that detailed GPCR structures would be welcome (and are slowly coming along, for that matter). I'm not denying either of those. But what does strike me is that there are so many confounding variables in this particular comparison, especially among the specific targets that are the subject of the article's featured graphic, that I just don't think that its conclusions follow.
Category: Drug Development | Drug Industry History | In Silico
March 25, 2011
The Supreme Court came down with a decision the other day (Matrixx Initiatives v. Siracusano) that the headlines say will have an impact on the drug industry. Looking at it, though, I don't see how anything's changed.
The silly-named Matrixx is the company that made Zicam, the zinc-based over-the-counter cold remedy that was such a big seller a few years back. You may or may not remember what brought it down - reports that some people suffered irreversible loss of their sense of smell after using the product. That's a steep price to pay for what may or may not have been any benefit at all (I never found the zinc-for-colds data very convincing, not that there were a lot of hard numbers to begin with).
This case grew out of a shareholder lawsuit, which alleged (as shareholder lawsuits do) that the company knew that there was trouble coming and had insufficiently informed its investors in time to keep them from losing buckets of their money. To get a little more specific about it, the suit claimed that Matrixx had received at least a dozen reports of anosmia between 1999 and 2003, but had said nothing about them - and more to the point, had continued to make positive statements about Zicam the whole way. The suit alleges that these statements were, therefore, false and misleading.
And that's what sent this case up the legal ladder, eventually to the big leagues of the Supreme Court. At what point does a company have an obligation to report such adverse events to the public and to its shareholders? Matrixx contended that the bar was statistical significance, and that anything short of that was not a "material event" that had to be addressed, but the Court explicitly shut that down in their decision:
"Matrixx’s premise that statistical significance is the only reliable indication of causation is flawed. Both medical experts and the Food and Drug Administration rely on evidence other than statistically significant data to establish an inference of causation. It thus stands to reason that reasonable investors would act on such evidence. Because adverse reports can take many forms, assessing their materiality is a fact-specific inquiry, requiring consideration of their source, content, and context. . .
Assuming the complaint’s allegations to be true, Matrixx received reports from medical experts and researchers that plausibly indicated a reliable causal link between Zicam and anosmia. Consumers likely would have viewed Zicam’s risk as substantially outweighing its benefit. Viewing the complaint’s allegations as a whole, the complaint alleges facts suggesting a significant risk to the commercial viability of Matrixx’s leading product. It is substantially likely that a reasonable investor would have viewed this information “ ‘as having significantly altered the “total mix” of information made available.’ "
I think that's a completely reasonable way of looking at the situation. (Note: that "total mix" language is from an earlier decision, Basic, Inc. v. Levinson, that also dealt with disclosure of material information). The other issue in this case is what the law calls scienter, broadly defined as "intent to deceive". As the decision explains, this can be assumed to hold when a reasonable person would find it as good an explanation of a defendant's actions as any other that could be drawn. And in this case, since Zicam was Matrixx's entire reason to exist, and since a link with permanent damage to a customer's sense of smell would surely damage sales immensely (which is exactly what happened), a reasonable person would indeed find that the company had a willingness to keep such information quiet.
But here's the puzzling part - not the Court's decision, which is short, clear, and unanimous, but the press coverage. This is being headlined as a defeat for Big Pharma, but I don't see it. We'll leave aside the fact that Matrixx is not exactly Big Pharma, although I'm sure that they were, for a while, making the Big Money selling Zicam. No, the thing is, this decision leaves things exactly as they were before. (Nature's "Great Beyond" blog has it exactly right).
It's not like statistical significance was the cutoff for press-releasing adverse events before, and now the Supreme Court has yanked that away. No, Matrixx was trying to raise the bar up to that point, and the Court wasn't having it. "The materiality of adverse event reports cannot be reduced to a bright-line rule", the decision says, and there was no such rule before. The Court, in fact, had explicitly refused another attempt to make such a rule in that Basic case mentioned above. No, Matrixx really had a very slim chance of prevailing in this one; current practice and legal precedent were both against them. As far as I can tell, the Court granted certiorari in this case just to nail that down one more time, which should (one hopes) keep this line of argument from popping up again any time soon.
By the way, if you've never looked at a Supreme Court decision, let me recommend them as interesting material for your idle hours. They can make very good reading, and are often (though not invariably!) well-written and enjoyable, even for non-lawyers. I don't exactly have them on my RSS feed (do they have one?), but when there's an interesting topic being decided, I've never regretted going to the actual text of the decision rather than only letting someone else tell me what it means.
Category: Business and Markets | Regulatory Affairs | Toxicology
March 24, 2011
I wanted to do some follow-up on the Makena story - the longtime progesterone ester drug that has now been newly FDA-approved and newly made two orders of magnitude more expensive. (That earlier post has the details, for those who might not have been following).
Steve Usdin at BioCentury has, in the newsletter's March 21st issue, gone into some more detail about the whole process where KV Pharmaceuticals stepped in under the Orphan Drug Act to pick up exclusive marketing rights to the drug. The company, he says, "arguably has played a marginal role" in getting the drug back onto the market.
Here's the timeline, from that article and some digging around of my own: in 1956, Squibb got FDA approval for the exact compound (hydroxyprogesterone caproate) for the exact indication (preventing preterm labor), under the brand name Delalutin. But at that time, the FDA didn't require proof of efficacy, just safety. There were several small, inconclusive academic studies during the 1960s. In 1971, the FDA noted that the drug was effective for abnormal uterine bleeding and other indications, and was "probably effective" for preventing preterm delivery. In 1973, though, based on further data from the company, the agency went back on that statement, said that there was now evidence of birth defects from the use of Delalutin in pregnant women, and removed these as approved uses. In the late 1970s, further warning language was added. In 1989, the agency said that its earlier concerns (heart and limb defects) were unfounded, but warned of others. By 1999, the FDA had concluded that progesterone drugs were too varied in their effects to be covered under a single set of warnings, and took the warning labels off.
In 1998, the National Institute of Child Health and Human Development launched a larger, controlled study, but this was an example of bad coordination all the way through. By this time, Bristol-Myers Squibb had requested that Delalutin's NDAs be revoked, saying that they hadn't even sold the compound for several years. This seems, though, to have also been a move in response to FDA complaints about earlier violations of manufacturing guidelines and a request to recall the outstanding stocks of the drug. So the NICHD study was terminated after a year, with no results, and the drug's NDA was revoked as of September, 2000.
The NICHD had started another study by then, however, although I'm not sure how they solved their supply problems. This is the one that reported data in 2003, and showed a real statistical benefit for preterm labor. More physicians began to prescribe the drug, and in 2008, the American College of Obstetricians and Gynecologists recommended its use.
So much for the medical efficacy side of the story. Now we get back to the regulatory and marketing end of things. In March of 2006, a company called CUSTOpharm asked the FDA to determine if the drug had been withdrawn for reasons of safety or efficacy - basically, was it something that could be resubmitted as an ANDA? The agency determined that the compound was so eligible.
Meanwhile, another company called Adeza Biomedical was moving in the same direction (as far as I can tell, they and CUSTOpharm had nothing to do with each other, but I don't have all the details). Adeza submitted an NDA in July 2006, under the FDA's provision for using data that that applicant had not generated - in fact, they used the NICHD study results. They called the compound Gestiva, and asked for accelerated approval, since preterm delivery was accepted as a surrogate for infant mortality. An advisory committee recommended this in August of 2006, by a 12 to 9 vote. (Scroll down to the bottom of this page for the details).
The agency sent Adeza an "approvable" letter in October 2006 which asked for more animal studies. The next year, Adeza was bought by Cytyc, who were bought by Hologic, who sold the Gestiva rights to KV Pharmaceuticals in January 2008. So that's how KV enters the story: they bought the drug program from someone who bought it from someone who just used a government agency's clinical data.
The NDA was approved by the FDA in February 2011, along with a name change to Makena. By this time, KV and Hologic had modified their agreement - KV had already paid up nearly $80 million, with another $12.5 million due with the approval, and has further payments to make to Hologic which would take the total purchase price up to nearly $200 million. That's been their main expense for the drug, by far. The FDA has asked them to continue two ongoing studies of Makena - one placebo-controlled trial to look at neonatal mortality and morbidity, and one observational study to see if there are any later developmental effects. Those studies will report in late 2016, and KV has said that their costs will be in the "tens of millions". So they paid more for the rights to Makena than it's costing them to get it studied in the clinic.
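The arithmetic behind that last point is easy to sketch. The purchase figures are the ones quoted above; the trial cost is an assumed $50M midpoint for "tens of millions", purely for illustration:

```python
# Rough sketch of KV's outlay for Makena, using the figures quoted above.
# The trial cost is an assumed $50M midpoint for "tens of millions" -
# illustrative only, not a figure from the company.
paid_before_approval = 80.0    # $M, already paid to Hologic
due_at_approval = 12.5         # $M, due on FDA approval
total_purchase_price = 200.0   # $M, approximate final total for the rights
assumed_trial_costs = 50.0     # $M, hypothetical midpoint

print(f"Rights cost ~{total_purchase_price / assumed_trial_costs:.0f}x the trials")
```

However you pick the "tens of millions" figure, the purchase price dominates: they paid far more for the rights than for the clinical studies.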
That only makes sense if they can charge a lot more than the generic price for the drug had been, of course, and that's what takes us up to today, with the uproar over the company's proposed price tag of $1500 per treatment. But the St. Louis Post-Dispatch (thanks to FiercePharma for the link) says that the company has now filed its latest 10-Q with the SEC, and is notifying investors that its pricing plans are in doubt:
The success of the Company’s commercialization of Makena™ is dependent upon a number of factors, including: (i) the Company’s ability to maintain certain net pricing levels for Makena™; (ii) successfully obtaining agreements for coverage and reimbursement rates on behalf of patients and medical practitioners prescribing Makena™ with third-party payors, including government authorities, private health insurers and other organizations, such as HMOs, insurance companies, and Medicaid programs and administrators, and (iii) the extent to which pharmaceutical compounders continue to produce non-FDA approved purported substitute product. The Company has been criticized regarding the list pricing of Makena™ in a number of news articles and internet postings. In addition, the Company has received, and expects to continue to receive, letters criticizing the Company’s list pricing of Makena™ from several medical practitioners and several advocacy groups, including the March of Dimes, American College of Obstetricians and Gynecologists, American Academy of Pediatrics and the Society for Maternal Fetal Medicine. Further, the Company has received one letter from a United States Senator and expect to receive another letter from a number of members of the United States Congress asking the Company to reduce its indicated pricing of Makena™, and the same Senator, together with a second Senator, has sent a letter to the Federal Trade Commission asking the agency to initiate an investigation of our pricing of Makena™.
The Company is responding to these criticisms and events in a number of respects. . .The success of the Company is largely dependent upon these efforts and appropriately responding to both the media and governmental concerns regarding the pricing of Makena™.
Personally, I'm torn a bit by the whole situation. I think that people and companies have the right to charge what the market will bear for their goods and services. But at the same time, I find myself also very irritated by KV in this case, because I truly think that they are taking advantage of the regulatory framework. As I said in the last post, it's not like they took on much risk here - they didn't discover this drug, didn't do the key clinical work on it, and don't even manufacture it themselves. Their business plan involves sitting back and collecting the rent, but that's what the law allows them to do.
In the end, if political pressure forces them to back down on their pricing, this will come down to a poor business decision. Companies should, in fact, charge what the market will bear - but KV may have neglected some other factors when they calculated what that price should be. Before setting a price, you should ask "Will the insurance companies pay?" and "Will Medicare pay?" and "Will people pay out of their own pocket?", but you should also ask "Will this price bring down so much controversy that we won't be able to make it stick?"
+ TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Prices | Regulatory Affairs | Why Everyone Loves Us
March 23, 2011
The topic of lab sabotage has come up here now and again. And while there are some documented cases, I agree with Chemjobber that these stories are often more in the realm of legend. He's trying to bring some of them into the light, though, by offering a valuable prize to the most interesting and well-attested story of deliberate sabotage that he can find. If you know of any, go for the glory!
+ TrackBacks (0) | Category: The Dark Side
I managed to do a whole post on medical/pharma cranks without mentioning one of the biggest factors of all. As many people pointed out in the comments, look out for any therapy that makes a big point of being "all-natural".
There are several interesting mental attitudes behind the success of that marketing ploy. One of them is the appeal to primitivism. I'm reading Jacques Barzun's From Dawn to Decadence, and that's one of the persistent philosophical currents he identifies in Western culture. Back to the basics! Shed the corrupting influences of modern life! In medical terms, this shows up as a constellation of beliefs: that people were truly healthier back in the good old days, that, correspondingly, there's something about modern civilization that's making us all sick, and that remedies for such ills are not to be found among the fruits of that industrial civilization. Why would they be? It's like a drunk reaching for an eye-opener to cure a hangover, right? No, you want to go back to the simple, natural remedies, because only those can cancel out what's been done to you.
I should mention up front that these beliefs are not totally insane. One of the things that I took away from an earlier book that I recommended here, A Farewell to Alms, is that life expectancies and general human health actually took a bit of a dive as cities began to grow in importance. Dietary and sanitary standards were lower for the mass of people in London, say, than they were for the farmers in the countryside, and it showed. And even today, some of the less-developed countries are in even worse shape than they were before the modern world ran into them.
But those aren't the customers for pricey natural remedy come-ons, are they? No, those go to well-off first-worlders with disposable income and high life expectancies. Industrial and urban civilization, although it got off to a pretty dirty start, has in fact led to a great upsurge in human health and productivity. And that's given people the time and wherewithal to respond to ads on their large flat-screen TVs or their satellite radios, and to pay money for shaken vials of distilled water or ground-up plants shipped from the other side of the planet.
Speaking of those ground-up plants reminds me of one more mental attitude. Among people who are big herbal medicine believers, there can be a sort of teleology, a view of the world as if it were more rationally constructed than I think it is. I've seen people asking questions like "I have Condition Y, what's the herb for that?" This every-disease-has-a-plant-for-it view is quite odd to me, because I don't see any reason why it should possibly be true. Plants make medicinally active substances for reasons of their own, and they only overlap with our needs once in a while. And for that matter, most of the really active compounds found in nature are things that will mess you up, rather than help you, just like most of the really active compounds made by humans. There are simply more ways for our biochemistries to be interfered with than for them to be improved.
+ TrackBacks (0) | Category: Snake Oil
March 22, 2011
I get probably more than my share of come-ons for various wonder-healing potions. For some reason, people see that I talk about drug discovery and think that I'm sure to be interested in homeopathic wonder water, magnetic healotronic belt buckles, or what have you. I am not. Well, at least not in the usual way that they're presented, as Great New Discoveries that I can order right now, first month's supply is free, and so on.
I also get to hear about many of these things at second hand, from people who write to me wondering if there's anything to them. And while I delete the press releases and advertisements, I respond to genuinely curious individuals, and I try to do so civilly. I tell them that no, according to what I know about chemistry, medicine, biology, and such, the things that they're describing won't (or shouldn't) work. I ask what kind of data might be available to back things up, and point out that in my own line of work we have to generate huge amounts of it before we believe we're on to something, and so on. I also try to get across how hard drug discovery really is, and how unlikely it is that there's going to be a Big Honking Breakthrough! every year or so, no matter what the ads on the radio say.
There are repeated themes in these things, and I'm by no means the first to notice them. Anything that promises to "boost your immune system", for example, is automatically suspect. Given what the immune system's capable of when cranked up a bit, I'd rather keep mine at its current setting, thanks. Of course, "detoxifying" is an instant red flag. As crank-watchers know, the conviction that we'd all be in perfect health if it weren't for insidious toxins is a widely held one, and a widely played-upon one. A corollary belief is that these toxins are piled up somewhere in your body, waiting for the right hand on the flush valve to clear them out and restore you to health.
Anything involving the word "energy" when applied to general medical concerns is worth a suspicious look. It's not an invariable sign of hand-waving, but it's common enough. This sort of language runs from the vague "gives you more energy" promises at one end to the mystical-life-forces stuff at the other. And related to that last part, appeals to Ancient Wisdom That We Have Forsaken are almost instant grounds for disqualification. Displacing the burden of proof in time (centuries ago!) or in space (the Mystic East) does not inspire confidence.
Naturally, as in any field, intimations of conspiracy are instant red flags. My friends, the Powers That Be don't want you to learn these wonderful things (but for $39.95, as it happens, you can hear about them until you're dizzy). Appeals to things that most people know of but don't understand well are worth scrutiny (most anything involving magnets, e.g.), as are attempts to make everything seem incredibly simple (Vinegar! The wonder-working key to health!)
In fact, what seems to be missing from most crank medical come-ons is, oddly enough, humility. There are no package inserts detailing side effects or symptoms to watch out for. There are no thoughts that any new data might sweep the latest discovery aside, and rarely any nods to others who have come before. No, this latest therapy is presented like a religious revelation - here it is, what you've been waiting for, and you'll never need anything else. Those of us who are trying to be on the other side should remember this, and try as much as we can not to sound like the people we can't stand. . .
+ TrackBacks (0) | Category: Snake Oil
For the chemists out there in the crowd: have you been looking for a paper to read that's filled, beginning to end, with good, solid, old-fashioned medicinal chemistry? Look no further than this one, on recent reports of isosteres. This sort of thing is still the heart of med-chem as it's practiced in the real world - messing around with the structure of an active molecule to see what you can improve and what you can get away with.
If you're not a medicinal chemist: a bioisostere is a chemical group that can substitute for another one. Classic examples are things like swapping in a tetrazole ring for a carboxylic acid or an oxadiazole for an ester. Here are some examples - even if your organic chemistry is shaky, you can see the similarities across these structures. If it works, you can change the other properties of your molecule (solubility, stability, selectivity) for the better while still keeping the key features that made the original group valuable for activity. It's not something that just automatically comes through every time - sometimes there just is no substitute - but it works enough of the time to be one of the essential techniques.
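To make the idea concrete, here's a toy lookup table of a few classic swaps. The table and helper function are my own illustrative invention, not something from the paper under discussion, and real work on whole molecules would use a cheminformatics toolkit rather than a lookup like this:

```python
# Toy lookup of some classic bioisostere replacements, keyed by the original
# functional group. Illustrative only - actual substitution happens on whole
# molecules, with tools like RDKit, not by dictionary lookup.
CLASSIC_BIOISOSTERES = {
    "carboxylic acid": ["tetrazole", "acylsulfonamide", "hydroxamic acid"],
    "ester": ["oxadiazole", "amide"],
    "phenyl": ["thiophene", "pyridine"],
    "hydrogen": ["fluorine", "deuterium"],
}

def suggest_swaps(group: str) -> list[str]:
    """Return the classic replacements for a functional group, if any."""
    return CLASSIC_BIOISOSTERES.get(group.lower(), [])

print(suggest_swaps("carboxylic acid"))
# ['tetrazole', 'acylsulfonamide', 'hydroxamic acid']
```

The point of the table is the point of the paper: each replacement keeps the key recognition features (acidity, hydrogen-bonding pattern, shape) while changing the properties around them.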
+ TrackBacks (0) | Category: Life in the Drug Labs
March 21, 2011
Here's a fascinating post from Bruce Booth on the R&D numbers for Big Pharma versus everyone else. If you had to guess, how would you say big-company spending stacks up against all the privately-financed startups put together? How many Lilliputians does it take to outweigh Gulliver?
Well, it turns out that the top 20 pharma companies spend about 26 times the budget of all the venture-backed companies put together. In fact, just comparing Pfizer's R&D budget alone to the universe of privately financed companies suggests that one Pfizer equals about 1000 small biotechs, or about 2-and-a-half times the number that exist today. Sheesh.
There are a lot of other interesting numbers to be found in that post - for example, given reasonable assumptions about facility costs, Big Pharma probably spends as much on its utility bills and building maintenance as it would take to fund the entire universe of VC-backed companies today. The whole thing looks very much like a steep power-law distribution to me, and that brings up a question that Booth raises himself: how much more bang for the buck are we getting from the small companies, relative to the larger ones?
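The arithmetic behind those ratios is simple enough to sketch. The dollar figures below are assumed round numbers, chosen only to be consistent with the ratios quoted above, not exact figures from Booth's post:

```python
# Back-of-the-envelope sketch of the scale mismatch. All three inputs are
# assumed round numbers for illustration, not figures from Booth's post.
pfizer_rnd = 9_000.0       # $M/year R&D budget, assumed
avg_biotech_burn = 9.0     # $M/year per venture-backed company, assumed
n_venture_biotechs = 400   # assumed count of such companies

pfizer_equivalents = pfizer_rnd / avg_biotech_burn
print(f"One Pfizer ~= {pfizer_equivalents:.0f} small biotechs")
print(f"...or {pfizer_equivalents / n_venture_biotechs:.1f}x the number that exist")
```

Whatever the exact inputs, the shape of the answer doesn't change much: a single big company's budget swallows the whole venture-backed universe several times over.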
+ TrackBacks (0) | Category: Business and Markets | Drug Industry History
March 18, 2011
Management fads are truly a bad sign. I don't think that there's anyone out there in the working world who doesn't realize this, on some level, but it's worth keeping in mind. When some higher-up at your company decides "You know, we'd make a huge leap in productivity if we just did everything totally differently than we've ever done it before - I read this great article!", then you really need to hunker down until the fit passes.
Well, some of the folks at GlaxoSmithKline down in Research Triangle are probably looking for somewhere to hide. Because according to this article, the company is (yes!) at the forefront of a movement that's (yes!) sweeping the nation: open office space. No assigned desks, no permanent locations, just everyone floating around in a cloud of happy productivity. Jim Edwards at Bnet is right when he calls this "slightly insane".
Um. . .haven't we been hearing about this wonderful innovation for years now? And haven't several companies tried it and abandoned it, because (strangely enough) their employees didn't like the idea of putting their possessions into lockers every morning, wandering (or scrambling) around for desk space, and being unable to leave the slightest sign of anything personal around their work area? Here are some tempting details:
All employees are assigned a storage unit where they can keep files, a keyboard, a power pack and a mouse. There will also be group storage spaces where files that need to be accessed by more than one person can be kept. Any files that are not accessed regularly will be stored off-site. GSK's document retention policy isn't changing; it just may end up being followed more closely.
Gosh, that does sound like what I've been yearning for all these years. Making the transition to this wonderful environment isn't easy, though:
The larger move will ultimately include an extensive education campaign to prepare employees for their new surroundings.
Employees will work in neighborhoods, each of which includes meeting rooms and quiet areas. They'll attend etiquette workshops, and each neighborhood will adopt a set of policies to deal with hypothetical situations that may come up.
The groups that are moving to the new layout are those whose managers embraced the change. (Admin Shelby) Bryant now sits at a desk directly across from her boss, David Bishop, GSK's director of site operations in RTP.
Bishop said as the move gets closer, more and more departments are expressing interest in unchaining themselves from their desk.
"I don't believe we will ever get to where everybody wants it," he said.
Maybe not! But that'll be their loss, won't it, not having to go through all that education, and attend those etiquette workshops, and then throw out all their stuff. Honestly, I think I'd rather chew on glass than attend a series of workplace etiquette seminars and get re-educated by someone who tells me that I'm not going to have a desk any more. And those meetings to set behavior policies, those will be delightfully excruciating, for sure. What on earth is the company thinking?
Well, they're thinking about how this will allow them to vacate several buildings, because housing the employees this way takes up less room. So once again, this conforms to a rule that has seldom let me down: any question that starts out with "I wonder how come they. . ." can be answered with the word "Money".
+ TrackBacks (0) | Category: Business and Markets | Life in the Drug Labs
March 17, 2011
Philip Ball had an interesting piece recently over at Nature News, which touches on a subject that I've also thought about: when does metaphorical thinking help, and when does it hurt? (I've got a whole category on the blog on this topic, although I haven't filled it up with as many posts as I've meant to).
As he mentions, there's now some empirical evidence that metaphors can influence the way we think about a situation, and not in ways that we're consciously aware of. I think that we're particularly vulnerable to this effect in scientific research, because so many of our concerns are outside day-to-day experience. I don't see any way around this: we can't see a G-protein-coupled receptor in action, so we come up with a mental picture to help. We can't visualize the complexities of a biochemical pathway in toto, so we reduce it to a useful simplification. Well, we hope it's useful, anyway.
I have a number of these mental constructs - for example, when I'm picturing a protein surface or a binding pocket, I have a tactile image of something like firm gelatin with a hard surface underneath (ball bearings or pool balls, depending on the scale I'm picturing). Thinking about it, I know where that image comes from - it's from standard molecular graphics representations of van der Waals surfaces around atoms. The charge distributions on the surface come across to me mentally as warm and cold areas, or perhaps sour and sweet. The first of those is probably because many graphics programs represent charges as red and blue; the taste metaphor seems to be my own brain's contribution - characteristically vivid, but of uncertain utility.
In case you're wondering, I do audio, too. Protein-protein surfaces seem, in my mind's eye, to be mildly sticky, which is probably my impression of an overall hydrophobic effect. The charged surfaces, when they come apart, do so in my head with a tactile peeling effect and a faint sound of Velcro.
Now, does my forebrain's special effects budget make me a better medicinal chemist? Who knows? If I've got the wrong impressions, and if I act on them too strongly, they might make me a worse one. The same goes for other metaphors, both the internal ones and the ones we produce for others. A bad metaphor can do the people you're trying to teach more harm than good.
This also goes for the metaphors that people bring with them when they think about what we do in drug discovery. I think, for example, that people who design and build complex human-produced systems are prone (naturally enough) to believing that biochemistry and drug design must be similar processes, and thus subject to the same engineering approaches. Those of us wrestling with these problems are stuck trying to explain that not only are living systems more complex, they're complex in a different way as well - you're looking at differences of both degree and kind. But if you're used to circuit diagrams, programming flow charts, or chip design, then you're naturally going to see those when you look at diagrams of biochemical pathways.
The best "harmful metaphor" example I can think of at the moment is the importation of agonist/antagonist nomenclature into the nuclear receptor field. I'd like to find whoever did that and whack them on the head with a board. That misled me when I first started working in the area, and I've seen it mislead countless others since then. "They're pretty much like GPCRs" is the impression given to the unwary, but that's a tall glass containing 5% refreshment and 95% toxic sludge. You have to spend a lot of energy getting it out of your head if you want to have a chance to understand what's going on with those targets, inasmuch as anyone does.
But there may be a larger example: the whole reductionist approach of target-driven drug discovery. That'll be the subject of another post. . .
+ TrackBacks (0) | Category: Metaphors, Good and Bad
March 16, 2011
As had been rumored, Novartis seems to be drastically cutting back on their site in Horsham, UK. Respiratory research will continue there, but the manufacturing center seems to be out, with a loss of over 500 jobs. . .
The past few years have been bad ones for this industry, but on a per capita basis, it's probably been worse in the UK than anywhere else.
+ TrackBacks (0) | Category: Business and Markets
So Pfizer has announced that their antibacterial research is moving to the Shanghai site. Is this the first example of a large/traditional therapeutic area moving to China? And if it is, should we care? After all, there are Swiss, German, British, and Japanese companies, among others, with multinational research sites. Some programs run at one facility, and some at another. When you add China to that list, though, something changes for a lot of people.
That's because the Chinese sites got their start as the inexpensive way to offshore work, for one thing. But Shanghai's not as cheap as it used to be - it's still less expensive than doing the work in the US or western Europe, but the cost advantage is eroding. Another factor is that you don't see companies expanding into new therapeutic areas these days, so much as moving the existing ones around. In that zero-sum game, expanding one site means contracting another.
Here's something to think about, though: does Pfizer's choice here represent a calculation about some future opportunity in China, should they be able to develop any drugs? Would the "discovered and developed in Shanghai" factor help with the regulatory authorities there?
+ TrackBacks (0) | Category: Business and Markets | Infectious Diseases
Well, the nuclear crisis in Japan seems to be causing a run on potassium iodide (KI), and not just in Japan. If news reports are to be believed, people in many other regions (such as the west coast of the US and Canada) are stocking up, and some of these people may have already started dosing themselves.
Don't do that. Don't do it, for several reasons. First, as the chemists and biologists in this site's readership can tell you, it's not like KI is some sort of broad-spectrum anti-radiation pill. It can protect people against the effects of radioactive iodine-131, which is a major fission product from uranium. It does that by basically swamping out the radioactive iodine a person might have been exposed to, keeping it from being taken up into the body. Iodine tends to localize in the thyroid gland, and that uptake and local concentration is the real problem. An unfolded newspaper will shield you just fine from the alpha particles that I-131 gives off, but not if it's giving them off from inside your thyroid. Correction: I-131 is a beta/gamma emitter - my apologies! The point about not wanting it in your thyroid, of course, stands. . .
And this is why potassium iodide won't do a thing to help with the other radioactive isotopes found in nuclear reactors. That includes both the uranium and/or plutonium fuel, as well as the fission products like strontium-90 and radioactive cesium. Strontium-90 is a real problem, since it tends to concentrate in the bones (and teeth), and it has a much longer half-life than I-131. Unfortunately, calcium is so ubiquitous in the body that it's not feasible to do that uptake-blocking trick the way you can with iodide. The only effective way to deal with strontium-90 is to not get exposed to it.
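That half-life gap is easy to put in numbers. Here's a quick sketch using the standard published half-lives (the 80-day window is just an arbitrary example period):

```python
# Compare how fast I-131 and Sr-90 decay away. Half-lives are the standard
# published values; the 80-day window is an arbitrary example.
def fraction_remaining(half_life_days: float, elapsed_days: float) -> float:
    """Fraction of a radioisotope left after elapsed_days of decay."""
    return 0.5 ** (elapsed_days / half_life_days)

I131_HALF_LIFE = 8.02            # days
SR90_HALF_LIFE = 28.8 * 365.25   # ~28.8 years, expressed in days

# After ~80 days (ten half-lives), essentially all the I-131 is gone...
print(f"I-131 left: {fraction_remaining(I131_HALF_LIFE, 80):.4%}")
# ...while the Sr-90 has barely started to decay.
print(f"Sr-90 left: {fraction_remaining(SR90_HALF_LIFE, 80):.4%}")
```

So an I-131 plume is a short, sharp problem, while strontium-90 contamination hangs around on a timescale of human lifetimes.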
Another good reason not to take KI pills is that unless you're actually being exposed to radioactive iodine, it's not going to do any good at all, and can actually do you harm. Pregnant women and people with thyroid problems, especially, should not go around gulping potassium iodide. Nothing radioactive is reaching North America yet - there's the Pacific Ocean to dilute things out along the way - which makes it very likely that more people on this side are in the process of injuring themselves by taking large unnecessary doses of iodide. This is like watching people swerve their cars off the road into the trees because they've heard that there's an accident fifty miles ahead.
Now, if I were in Japan and downwind of the Fukushima reactors, I would indeed be taking potassium iodide pills, and doing so while getting the hell out of the area. (That last part, when feasible, is the absolute best protection against radioactive exposure). But here in North America, we're already the hell out of the area. The only time to take KI pills is when a plume of radioactive iodine is on the way, and that's not the case over here. We'll have plenty of notice if anything like that happens, believe me - any event that dumps enough radioactivity to make it to California will be very noticeable indeed. Let's hope we don't see anything of the kind - and in the meantime, spare a thought for those reactor technicians who are trying to keep such things from happening. Those people, I hope, will eventually have statues raised to them.
+ TrackBacks (0) | Category: Current Events | Toxicology
March 15, 2011
Just a quick note that the Japanese chemist I mentioned a couple of days ago, my old colleague Masanori Yamaura, has reported in. He and his family made it through the quake (he reports that his labs are pretty well trashed, though), and they're now evacuating Iwaki City due to the nuclear plant problems up the coast. A bit of good news, at a time when there isn't a whole bunch of it around.
+ TrackBacks (0) | Category: Current Events
The coverage of the Japanese reactor situation reminds me of the coverage of many other technical issues when they overlap with serious breaking news stories. I wrote a little on this subject a few years ago, talking about the Merck/Vioxx business, but I wanted to expand on it.
I'm not going to rant on about the popular press not understanding this or that scientific or technical issue. There are more systemic problems with the way that news is reported, and in the way that we take it in. I'm not sure of what to do about them other than to be aware of them, but that's an important step right there.
The first of these is narrative bias. Reporters like to relay stories (and the rest of us like to hear stories) that have a progression. They have a beginning, a middle, and an end, the way our most popular novels and movies do. Something starts, something happens, something ends. Real life sometimes conforms to this template, but sometimes it doesn't. For example, some situations don't start, so much as they suddenly get noticed after they've been there all along. And some don't end, so much as they just stop having attention paid to them.
Another narrative-bias problem is the tendency to assign participants in any event to recognizable categories: good guys and bad guys, for starters. Moving to finer distinctions, there's Plucky Young X, Suffering Y, Salt-of-the-Earth Z, along with Untrustworthy Spokesman A, Obfuscating B, Crusading C, and the whole crowd. Mentally, we tend to assign people to such categories, especially if we don't know them personally, and it makes it easier for reporters, too. It's a team effort. The problem is, of course, that not everyone fits into a recognizable category, and many others overlap in ways that a simple narrative structure won't accommodate. Most real people are capable (more or less simultaneously) of great and venal actions, of heroism and cowardice, of altruism and selfishness.
Even when events are progressing in some sort of recognizable way, they're seldom doing so at the tempo that we'd like, and they're especially unlikely to do so at the tempo that various news organizations would like. This is the problem of temporal bias. A cable news network would like to have something new to report every fifteen or twenty minutes; a newspaper would like something every day. But events happen when they happen, which means that in the absence of anything new to report or talk about, a tremendous amount of wind is generated to make it appear as if something is actually going on.
Our sense of history reinforces this bias. We compress and even out timelines. Look, say, at the start of World War II. Yep, Hitler invades Poland. Then he invades France. Dunkirk, Rotterdam, Battle of Britain, here we go. But there was a big gap in there, the so-called "Phoney War", where nothing much happened (at least, not compared to the way things started happening afterwards). We sort of edit that out, mentally, but it was a long period to the people living it at the time. A 24-hour news outlet would have had a rough time of it.
As an aside, a large, complex, and relatively well-documented event such as the Second World War (and the common knowledge that people have about it) furnishes all sorts of illustrations of the various forms of cognitive bias. Not so many people these days, unless they're history buffs, are aware of lacunae such as the Phoney War, out-of-the-spotlight actions such as the Battle of Madagascar, roads-not-taken such as the shipload of mustard gas that sank at Bari, or tragic mistakes such as the Cap Arcona incident. These and many other parts of the record have been sanded down or paved over, not by any conspiracy, but by natural human tendencies.
I find, getting back to the Japanese situation, that I'm getting more useful information from blogs and even the Wikipedia pages on the Fukushima incidents than I'm getting from primary news sources. Those tend to have jumbled timelines, unclear sourcing, and all sorts of overlap and garble. Reading the efforts of various other people who are trying to make sense of it all (and checking them against each other) is so far providing me with more useful information. My television is turned off.
+ TrackBacks (0) | Category: Current Events
As everyone who follows the industry knows, Pfizer has spent the last twenty years just getting bigger and bigger. Not that they haven't shed people, buildings, and whole research sites - have they ever - but they've shed those resources after buying them first. And as everyone who follows the industry knows, Pfizer's own labs have, either through bad luck or something more systemic, been rather unproductive during that same period. And now Lipitor moves ever closer to its patent expiration. What to do?
Well, this post by Matthew Herper at Forbes has one analyst's answer, and it might just be what Pfizer's CEO is thinking as well. It's something new, all right: get smaller.
Bernstein Pharmaceuticals analyst Tim Anderson has a note out this morning suggesting that Pfizer could sell, spin off, or otherwise divest divisions accounting for $32 billion of its $67 billion in sales, reinventing itself as a pure pharmaceutical research firm like Eli Lilly, Bristol-Myers Squibb, or AstraZeneca.
“We recently met with Pfizer’s new CEO Ian Read, and had we not heard it firsthand, we might not have appreciated just how serious he is about potentially splitting up the company,” Anderson writes. He goes on to say that Pfizer may shrink its revenue base by 40%, leaving behind only what Read calls the “innovative core."
The more cynical among you might be saying "Where's this innovative core, eh?", but hear the guy out. He's talking about ditching all of Pfizer's non-pharma assets, and cutting back to. . .discovering drugs. Combine that with the recent cutbacks in various therapeutic areas, and you have a Pfizer that's actually turning its back on the strategy of the last two decades. Bigger, as it turns out, has not been better. Who knew?
Well, a lot of people, for sure. I've been complaining about it, genius that I am, for years now, but I'm sure not alone. It's interesting to see someone at the top, though, who's willing to admit this and to act on it. If he does, though, it'll be impossible not to wonder what might have been if the company hadn't made the big round trip through all those acquisitions. The core pharma assets that they're thinking about cutting back to are the pieces and hunks of a lot of other companies, whose people and departments have been shaken and jerked around something fierce. What shape would they be in if they hadn't been Pfizerized? We'll never know.
+ TrackBacks (0) | Category: Business and Markets | Drug Industry History
March 14, 2011
Like everyone else, I spent the weekend following the events in Japan. A great many organic chemists have or have had Japanese colleagues; it's a field with a strong history in that country. I've heard from several people, but one of my former colleagues is still in the "unknown" category: Masanori Yamaura, of Iwaki Meisei University.
Iwaki, unfortunately, was hit pretty hard, and it's not that far away from the Fukushima reactor complex. So things are pretty chaotic up there, to say the least, and I'm sure that a great many people in the area remain unaccounted for.
As far as the reactors go, from what can be figured out at this distance it doesn't look like they're going to do anything Chernobylish - seawater and boric acid should forestall that. But the only reason you'd pump that mix into your reactor cores is if a meltdown is the only alternative; that's surely going to turn them into nothing but massive cleanup sites for many years to come. It's also going to take a mighty amount of generating capacity offline for good, which is another long-term problem. But for the moment, when the good news is that your primary containment vessels are still intact, then you know that you have a pretty full schedule.
I've often thought that if intelligent aliens looked over the planet's population centers, they'd ask us what the heck we thought we were doing when we developed Japan, coastal California, and a number of other areas. But here we are.
Update: many of you may have seen this link already. It's a clear-headed explanation of what seems to be going on (and going wrong) inside the Japanese reactors, with links to other useful sites. By the way, I agree with the comments that one of the other long-lasting bad effects of this crisis is the damage it will do to the idea of using nuclear power.
+ TrackBacks (0) | Category: Current Events
March 11, 2011
The situation with KV Pharmaceuticals and the premature birth therapy Makena has been all over the news in the last couple of days. Briefly, Makena is an injectable progesterone formulation, given to women at risk of delivering prematurely. It went off the market in the early 1990s, because of side effect concerns and worries about overall efficacy, but since 2003 it's made an off-label comeback, thanks largely to a study at Wake Forest. This seemed to tip the risk/benefit ratio over to the favorable side.
Comes now the FDA and the provisions for orphan drugs. There is an official program offering market exclusivity to companies that are willing to take up such non-approved therapies and give them the full clinical and regulatory treatment. The idea, which is well-intentioned, as so many ideas are, was to bring these things in from the cold and give them more medical, scientific, and legal standing as things that had been through the whole review process. And that's what KV did. But this system says nothing about what the price of the drug will be during the years of exclusivity, in the same way that the approval process for new drugs says nothing about what their price will be when they come to market.
KV has decided that the price will now be about $1500 per patient, as opposed to about $15 before under the off-label regime. The reaction has been exactly what one would expect, and why not? Here, then, are some thoughts:
Unfortunately, this should not have come as a surprise. It seems to have, though. The news stories are full of quotes from patients, doctors, and insurance companies saying that they never saw this coming. Look, though, at what happened recently with colchicine. Same situation. Same price jump. Same outrage, understandably. As long as these same incentives exist, any no-name generic company that comes along to adopt an old therapy and bring it into the modern regulatory regime can be assumed to be planning to run the price up to what they think the market will bear. That's why they're going to the trouble.
KV seems to have guessed correctly about the price. You wouldn't think so, with a hundred-fold increase. And the news stories, as I say, are full of (understandably) angry quotes from people at the insurance companies who will now be asked to pay. But (as that NPR link in the first paragraph says), Aetna, outraged or not, is going to pony up. It's going to cost them $20 to $30 million per year, most of which is going to go directly to KV's bottom line, but they're going to pay. And the other big health insurance providers seem to be doing the same. Meanwhile, the company has announced a program to provide low-cost treatment to people without insurance. From what I can see, it looks like basically everyone who had access to the drug before will have it now, the main difference being that the payers with deeper pockets will now be getting hammered on by KV. This is not a nice way to run a business, and it's not something I would sleep well on after having done it myself. But there it is.
How much is regulatory approval worth, anyway? That seems to be what we're really arguing about. After all, patients are getting the same drug, in the same formulation, dosed the same way as before. But now it's **FDA Approved**. For new substances, I think regulatory approval is worth quite a bit. There are all kinds of things that can go wrong. But how about drugs that have been dosed in humans for years? And already run through the equivalent of Phase II trials by other people? The main thing that's being added is some confirmation that yes, the dose that everyone's been using is about right, and yes, the effects that are being seen are, in fact, real. And that's not worthless, not at all - but how much is it worth, really? The agency itself seems to place a pretty high value on it - seven years of market exclusivity, to be exact, and we can see by example just what that goes for on the market.
This does the drug industry no good, either. We have a bad enough reputation as it is, wouldn't you think? What's irritating, to someone like me who works at a "find a new drug" type of company, is that these no-name generic outfits (KV in this case, URL Pharma for colchicine) are doing pretty much what critics of the industry think that we all do, all the time. That is, walk up to situations where other people have done a lot of the work, a good amount of it with public/NIH money, and step right in and profit. Now it's true that these companies have to basically run Phase II/Phase III trials to take the data to the FDA, and that's a significant amount of money. But their risks in doing so have been watered down immensely by the history of these drugs in the medical community. When a research company closes its eyes, holds its breath, and jumps into the clinic with a new molecule, that's one thing. And that's where those 90% failure rates come from. But the failure rate of drugs that have been used for years in human patients already, and already studied under clinical conditions, is not anything like 90%. Is it zero per cent? Has anyone failed yet, taking one of these old medications back to the FDA? Even once?
The company picked its target carefully. I will say this, that KV's trials have presumably clarified the question of whether progesterone therapy actually does help. You'd think that the 2003 study would have answered that, and as it turned out, it had. A review of the field in 2006 concluded that it was a worthwhile therapy, from a cost/benefit standpoint, as did another review in 2007. (Mind you, that wasn't at any $1500 a throw, was it?) But a Cochrane review from last year concluded that there still wasn't enough evidence to recommend the whole idea. And progesterone therapy doesn't seem to help with twin or triplet pregnancies or with some other gestational problems. No, the 2003 study seemed fairly strong, and has the greatest relevance to public health, so that's what the company went for. From one viewing angle, the system worked.
My take, though, is that as long as the regulatory environment is set to value FDA's stamp of approval for old drugs this highly, that people will continue to take advantage of it. You subsidize something; you're going to get it. Personally, I don't think that the balance is right, but I'm open to suggestion about what to do about it. A shorter period of market exclusivity would just mean, I think, that the prices go up even higher once a drug gets re-approved. Just throwing up our hands and letting all that old stuff stand is a possibility, but there may well still be some of these things that aren't as effective as we think, or aren't being dosed right, and we have to decide what the cost is of letting those situations stand.
Update: see also Alex Tabarrok's thoughts on the effects of the Orphan Drug Act in general.
+ TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Prices | Regulatory Affairs | Why Everyone Loves Us
March 10, 2011
Congratulations to Human Genome Sciences (and their partners, GSK) for getting the first new lupus drug in 50 years through to approval. It has not been an easy road for Benlysta (belimumab), to put it lightly. Back in 2005, for example, the drug missed its endpoints in Phase II, and things didn't look good.
Further work in the clinic did eventually show a benefit for the antibody, which goes after the B-lymphocyte stimulator protein. But it's not a home run. The FDA advisory committee cleared the drug, but expressed concerns about how effective it really is. The placebo response rate in the trials was rather high, reflecting the difficult clinical presentation of lupus, and that certainly cut into the significance of the final numbers.
But in the end, it does seem to help, and there's been nothing else for so long, and thus approval. How long did all this take? HGS announced that they were starting work in the area back in 2000, a different world compared to today (and especially for them; just look at the long-term stock charts). Did it cost $43 million to develop, as we've been assured is a good estimate for such things? Are you kidding?
+ TrackBacks (0) | Category:
Bruce Booth over at Atlas Venture (a VC fund here in Cambridge) has been following the Light and Warburton drug-cost estimate with interest. And now he's got a form up on his site for people to enter their own estimates of the costs. Take a look - it's bound to come up with a number that's more in tune with reality! For one thing, he's actually asking people who have, you know, developed drugs. . .
+ TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Prices
Taking on questions from all comers about the drug industry, here I am over at Reddit's "Ask Me Anything" section. Have a look if you like - the questions vary greatly, is all I'll say. . .
+ TrackBacks (0) | Category: Why Everyone Loves Us
March 9, 2011
What's going on over at Slate, anyway? So far this week we've been talking about that Timothy Noah article over there publicizing the bizarre Light and Warburton estimate for drug development. Now one of their house blogs erupts with a geyser of idiocy about the looming patent cliff in the industry:
So this sudden terrible problem has been obvious and on schedule for at least 10 years.
It honestly is that simple and that stupid. The pharmaceutical industry turned all its energy toward wringing as much money as possible out of the drugs it already had, and quit making any sort of plans that would lead to having a new (and, you know: medically useful) batch of drugs under patent in the future, when the patents on the old batch expired.
Now the pharmaceutical companies are laying off tens of thousands of workers because they are worried about their financial future, because although they are officially in the business of producing and selling drugs, they stopped producing drugs.
It goes on in that vein; in fact, it gets even more stupid. And the point isn't that someone wrote something like this, so much as that this reflects, I fear, what a lot of other people think. Writing this blog has exposed me to a lot of smart, interesting people, which is something I really enjoy. But it's also exposed me to a lot of troglodytes who have no idea of what they're talking about. And here we have another one. Unfortunately, if enough people believe something idiotic, those beliefs can have consequences.
Now, we can argue about pharma strategy, which we do all the time around here. Where to spend the time and money, which programs to push and which to walk away from - everyone's got their own opinions. But if the line you're pushing is that drug companies just haven't been doing any research at all for the last ten or twenty years. . .well, then you're a moron. On the evidence of this column, Slate's Tom Scocca is one, at best, and his piece is a waste of electrons.
For one thing, there actually have been a few drugs introduced over the last ten years or so. Not as many as we'd like, or as many as we were planning on, but still. And then there are the failures. I mean, I say a lot of nasty things about Pfizer here, for example, but we can list off the big drug projects that they've had die on them over the last few years. Same for Merck, for Novartis, for BMS and AZ and for everyone else.
Honestly, I really think that we should make a bigger deal out of clinical failures in this industry, so that people realize that (1) we're always trying to do something, (2) it doesn't always work, and (3) it costs a godawful amount of money. As it is now, no one outside of the industry really notices or remembers when the giant multi-year research programs go down in expensive flames. And that leaves the door open for knuckle-dragging stuff as quoted above, and for the fools who believe it.
+ TrackBacks (0) | Category: Business and Markets | Press Coverage
March 8, 2011
Here's a good article ("Academia Faces PhD Overload") via Genomeweb on the academic post-doc situation in the sciences, which we were last discussing here. (Thanks to Jonathan Gitlin on Twitter for noting it). That was in response to a Nature News piece calling for more "permanent postdoc" positions, which I doubted would actually happen.
But perhaps it is - take a look at this part:
Since there aren't enough tenure-track jobs for every PhD who has taken one, two, or even three-plus postdocs, "there's a finite number of postdocs who cannot anymore be a postdoc, and so they [often] stay at the same institution and become appointed to the research faculty," Chalkley says. As a result of the postdoc surplus, "the numbers in the research faculty ranks have increased in the last decade," he adds.
As research faculty are not eligible for tenure themselves, their positions depend largely on their PI, who generally is. Non-tenure-track faculty are "dependent upon the person running a lab and their funding," Chalkley says, adding that the risk for research faculty, who are "almost invariably on soft money," is real. For example, should a PI decide to move to another institution, he or she might be reluctant to take research faculty along; instead, he or she could save start-up funds for the new lab by hiring postdocs in place of research instructors.
With no practical solutions to the postdoc surplus problem on the horizon, Minnesota's Levitt predicts this hiring trend will persist for some time. "Every school is going to be hiring a higher and higher fraction of non-tenure-track [faculty]," he says.
But as the article says elsewhere, no one is claiming that this is going to be especially good for the people being hired under these circumstances, except as an alternative to being out on the sidewalk. One extra reason for this whole demographic difficulty (which has always been with us to some degree) was the big increase in the NIH budget from 1998 to 2003, which led to a corresponding bulge in the population of grad students, and then of postdocs:
According to the National Science Foundation's most recent Survey of Earned Doctorates statistics, American institutions awarded 49,562 total doctorates in 2009 — the most ever reported by NSF — of which 25,836 were in the sciences. Of life sciences doctorate recipients who indicated definite post-graduation employment commitments in 2009, nearly three-quarters said they'd accepted postdoc appointments. In an InfoBrief report, NSF notes that "2009 marked the largest single-year increase in the proportion of doctorate recipients taking postdoc positions during the 2004-2009 period."
And this just in time for a whacking economic downturn, which has severely cut into the industrial job possibilities. Still, there's a discussion in this article on getting people to look outside academia for their future, but the attitude that I mentioned in my last post on this topic is still a problem:
Nearly half of all respondents to NYU's most recent annual postdoc satisfaction survey — 47 percent — indicated a career goal of becoming tenure-track faculty.
"I think there's a bigger need for information on jobs outside of academia," Micoli says. There's a growing awareness in the research community that PhDs who choose careers in industry or other academic alternatives are not failing as scientists — but that sentiment has not yet penetrated the walls of the ivory tower, he adds.
Hey, these days, landing a good industrial job is very far indeed from failing. . .
+ TrackBacks (0) | Category: Academia (vs. Industry)
Some nonplussed Pfizer employees have sent this item along to me. The company may have fewer employees, and fewer therapeutic areas, and fewer research sites - but at least now they have more helicopters. One step at a time.
+ TrackBacks (0) | Category: Business and Markets
One of the readers in the comments section to the last post noticed Rebecca Warburton trying to clarify that absurd $43-million-per-drug R&D figure. You'll find her response in the comments section to the Slate piece that brought this whole study so much attention. Says Warburton:
. . .Our estimate of $59 million is the median development (the “D” in R&D) cost per average drug, not just NMEs (new chemicals) and does not include basic research costs, for which there is no reasonable estimate available.
But that explanation won't wash, as some of the readers over at Slate noticed as well. If you read the Light and Warburton article itself, you find the authors talking about nothing but "R&D" all the way through. In the one section where they do start to make a distinction, they brush aside expenses for basic research, on the grounds that drug companies hardly do any:
Companies under pressure from quarterly reports have difficulty justifying long searches for breakthrough drugs to investors. . .Little company R&D is devoted to basic research. Although industry association reports, based on unverified numbers from its members, claim that companies invest on average 17–19 per cent of sales in R&D, the most authoritative data come from the long-standing survey by the US National Science Foundation (2003). Its data document that pharmaceutical firms invest 12.4 per cent of gross domestic sales on R&D. Of this, 18 per cent, or 2.4 per cent of sales, went to basic research. More detailed reports from the industry indicate the percentage of R&D going to basic research is even smaller, about 9.3 per cent (or 1.2 per cent of sales) (Light, 2006). Thus the net corporate investment in research to discover important new drugs is about 1.2 per cent of sales, not 17–19 per cent.
So no, claiming that the $43 million figure is only supposed to represent the "D" part of R&D is disingenuous. There's another line from this paper, quoting Marcia Angell, that I think gets to one of the roots of the problem with the way these authors have characterized drug research. Angell is quoted here with approval - everything she and Merril Goozner have to say is quoted with approval:
It is also unclear how far back one should go to count up the costs of discovery, given that often there are several strands of research that are pieced together. In Angell’s view, the critical step in ‘discovering’ a new drug is understanding how the disease works and finding one or two good targets of vulnerability in the defences of a disease for intervention. Basic research ‘is almost always carried out at universities or government research labs, either in this country or abroad’ (Angell, 2004, p. 23).
And there you have it. The critical step is understanding how the disease works, you see, and finding one or two good targets. By that definition, the vast amount of money that gets spent in the drug industry is then non-critical. This is a viewpoint that can only be held by someone who has never tried to discover a drug, or never held a serious conversation with anyone who has.
Let's poke a few holes in that worldview. First off, if we waited to "understand" diseases before trying to develop drugs for them, we'd hardly have a damned thing on the drugstore shelves. Look at Alzheimer's - the medical community is still having fist-waving arguments about its cause, while drug companies continue to sink piles of money into trying to treat it. (Almost all of which has gone down the tubes, I might add, and I helped flush some of it through myself, earlier in my career).
Then you have to find one or two good targets. Peachy! Where do you find those thingies, anyway? And how do you know that they're good targets? I wish that Marcia Angell, Donald Light, or Rebecca Warburton would let the rest of us in on those secrets. As it is, we have to take chances on some pretty tenuous stuff, and often the only way to find out if a target really has any connection to human health is to. . .well, to discover a drug candidate that hits it. And develop it, and get it through tox, and into humans, and through Phase I, and into Phase II, and more likely than not these days, into Phase III before you really find out if, you know, it was actually a good target. We pass on those results to the rest of the world at that point. But that doesn't count as research, apparently.
And how about the drugs that have been developed without good mechanisms or targets at all? Metformin, ezetimibe, rosiglitazone and pioglitazone: none of these had any detailed mechanisms worked out for them while the money was being spent to develop them. These are the sorts of things we do around here in between having meetings to decide what color the package should be, and right after we do that thing where we all jump around in rooms knee-deep in hundred-dollar bills. Exhausting stuff, that money-wading.
But what I'd really like to ask Light and Warburton about is this: if you do think that the Tufts/diMasi estimate is crap, why did you feel as if the antidote was more crap from the opposite direction? Honestly, I'd think that intelligent people of good will might be more interested in decreasing the total amount of crap out there instead. . .
+ TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Prices | Why Everyone Loves Us
March 7, 2011
Note: a follow-up post to this one can be found here.
I've had a deluge of emails asking me about this article from Slate on the costs of drug research. It's based on this recent publication from Donald Light and Rebecca Warburton in the London School of Economics journal Biosocieties, and it's well worth discussing.
But let's get a few things out of the way first. The paper is a case for the prosecution, not a dispassionate analysis. The authors have a great deal of contempt for the pharmaceutical industry, and are unwilling (or unable) to keep it from seeping into their prose. I'm tempted to reply in kind, but I'm supposed to be the scientist in this discussion. We'll see how well I manage.
Another thing to mention immediately is that this paper is, in fact, not at all worthless. In between the editorializing, they make some serious points, and most of these are about the 2003 Tufts (diMasi) estimate of drug development costs. This is the widely-cited $802 million figure, and the fact that it's widely cited is what seems to infuriate the authors of this paper the most.
Here are their problems with it: the Tufts study surveyed 24 large drug companies, of which 10 agreed to participate. (In other words, this is neither a random nor a comprehensive sample). The drugs used for the study numbers were supposed to be "self-originated", but since we don't know which drugs they were, it's impossible to check this. And since the companies reported their own numbers, these would be difficult to check, even if they were made available drug-by-drug (which they aren't). Nor can anyone be sure that variations in how companies assign costs to R&D haven't skewed the data as well. We may well be looking at the most expensive drugs of the whole sample; it's impossible to say.
All of these are legitimate objections - the Tufts numbers are just not transparent. Companies are not willing to completely spread their books out for outside observers, in any industry, so any of these estimates are going to be fuzzy. Light and Warburton go on to some accounting issues, specifically the cost-of-capital estimate that took their estimated cost for a new drug from $400 million to $800 million. That topic has been debated around this blog before, and it's important to break that argument into two parts.
The first one is whether it's appropriate to consider opportunity costs at all. I still say that it is, and I don't have much patience for the "argument from unfamiliarity". If you commit to some multi-year use of your money, you really are forgoing what you could have earned with it otherwise. You're giving it up - it's a cost, whether you're used to thinking of it that way or not. But the second part of the argument is, just how much could you have earned? The problem here is that the Tufts study assumes 11% returns, which is just not anywhere near realistic. Mind you, it's on the same order of fantasy as the returns that have been assumed in the past inside many pension plans, but we're going to be dealing with that problem for years to come, too. No, the Tufts opportunity cost numbers are just too high.
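For readers who haven't run into cost-of-capital accounting before, here's a minimal sketch of why the assumed rate matters so much. The numbers below are hypothetical (an even spend of $400 million over a 12-year program, which is not the Tufts study's actual cash-flow profile), but they show how compounding each year's outlay forward to the approval date at 11% roughly doubles the out-of-pocket figure, while a lower assumed rate shrinks the premium considerably:

```python
# Illustrative sketch, not the Tufts methodology: capitalize a stream of
# R&D outlays forward to approval time at an assumed cost of capital.

def capitalized_cost(annual_outlays, rate):
    """Value, at approval time, of yearly R&D spending.

    annual_outlays: spending per year (in $M), earliest year first.
    rate: assumed annual cost of capital (e.g. 0.11 for 11%).
    Each year's outlay is compounded forward to the final year.
    """
    years = len(annual_outlays)
    return sum(
        outlay * (1 + rate) ** (years - 1 - i)
        for i, outlay in enumerate(annual_outlays)
    )

# Hypothetical: $400M out-of-pocket, spread evenly over a 12-year program.
outlays = [400 / 12] * 12

out_of_pocket = sum(outlays)
at_11_pct = capitalized_cost(outlays, 0.11)  # roughly doubles the total
at_3_pct = capitalized_cost(outlays, 0.03)   # much smaller premium

print(f"out of pocket:      ${out_of_pocket:.0f}M")
print(f"capitalized at 11%: ${at_11_pct:.0f}M")
print(f"capitalized at 3%:  ${at_3_pct:.0f}M")
```

The point of the sketch is just that the opportunity-cost adjustment is real but extremely sensitive to the assumed rate of return, which is exactly the argument above: the concept is sound, the 11% figure is not.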
Then there's the tax situation. I am, I'm very happy to say, no expert on R&D tax accounting. But it's enough to say that there's arguing room about the effects of the various special tax provisions for expenditures in this area. And it's complicated greatly by different treatment in different parts of the US and the world. The Tufts study does not reduce the gross costs of R&D by tax savings, while Light and Warburton argue otherwise. Among other points, they argue that the industry is trying to have it both ways - that cost-of-capital arguments make R&D expenditures look like a long-term investment, while for tax purposes, many of these are deductible each year as more of an ordinary business expense.
Fine, then - I'm in agreement, on general principles, with Light and Warburton when they say that the Tufts study estimates are hard to check and likely too high. But here's where we part company. Not content to make this point, the authors turn around and attempt to replace one shaky number with another. The latter part of their paper, to me, is one attempt after another to push their own estimate of drug R&D costs into a world of fantasy. Their claim is that the median R&D cost for a new drug is about $43 million. This figure is wrong.
For example, they have total clinical trial and regulatory review time dropping (taken from this reference - note that Light and DiMasi, lead author of the Tufts study, are already fighting it out in the letters section). But if that's true, why isn't the total time from discovery to approval going down? I've been unable to find any evidence that it is, and my own experience certainly doesn't make me think that the process is going any faster.
The authors also claim that corporate R&D risks are much lower than reported. Here they indulge in some rhetoric that makes me wonder if they understand the process at all:
Reports by industry routinely claim that companies must test 5000-10000 compounds to discover one drug that eventually comes to market. Marcia Angell (2004) points out that these figures are mythic: they could say 20,000 and it would not matter much, because the initial high-speed computer screenings consume a small per cent of R&D costs. . .
The truth is, even a screen of 20,000 compounds is tiny. And those are real, physical, compounds, not "computer screenings". It's true, though, that high-throughput screening is a small part of R&D costs. But the authors are mixing up screening and the synthesis of new compounds. We don't find our drug candidates in the screening deck - at least, not in any project I've worked on since 1989. We find leads there, and then people like me make all kinds of new structures - in flasks, dang it, not on computers - and we test those. Here, read this.
The authors go on to say:
Many products that 'fail' would be more accurately described as 'withdrawn', usually because trial results are mixed; or because a company estimates that the drug will not meet their high sales threshold for sufficient profitability. The difference between 'failure' and 'withdrawal' is important, because many observers suspect that companies withdraw or abandon therapeutically important drugs for commercial reasons. . .
Bring out some of those observers, then! And bring on the list of therapeutically important drugs that have been dropped out of the clinic just for commercial reasons. Please, give us some examples to work with here, and tell me how the disappointing data that the companies reported at the time (missed endpoints, tox problems) were fudged. Now, I have seen a compound fall out of actual production because of commercial reasons (Pfizer's Exubera), but that was partly because it didn't turn out to be as therapeutically important as the company convinced itself that it would be.
And here's another part I especially like:
Company financial risk is not only much lower than usually conveyed by the '1 in 5000' rhetoric, but companies spread their risks over a number of projects. The larger companies are, and the more they merge with or buy up other companies, the less risk they bear for any one R&D project. The corporate risk of R&D for companies like Pfizer or GlaxoSmithKline are thus lower than for companies like Intel that have only a few innovations on which sales rely.
Well, then. That means that Pfizer, as the biggest and most-merged-up drug company in the world, must have minimized its risk more than anyone in the industry. Right? And they should be doing just fine by that? Not laying people off right and left? Not closing any huge research sites? Not wondering frantically how they're going to replace the lost revenue from Lipitor? Not telling people that they're actually ditching several therapeutic areas completely because they don't think they can compete in them, given the risks? Not announcing a stock buyback program, because they apparently (and rather shamefully) think that's a better use of their money than putting it back into more R&D? I mean, how can Intel be doing better than that? It's almost like chip design is a different sort of R&D business entirely.
Well, this post is already too long, and there's more to discuss in another one, at least. But I wanted to add one more argument from economic reality, an extension of those little questions about Pfizer. If the cost of R&D for a new drug really were $43 million, as Light and Warburton would have it, and the financial and tax advantages so great, why isn't everyone pouring money into the drug industry? Why aren't VC firms lining up to get in on this sweet deal? I mean, $43 million for a drug, you should be able to raise that pretty easily, even in this climate - and then you just stand back as the money gushes into the sky. Don't you?
Why are drug approval rates so flat (or worse)? Why all the layoffs? Why all the doom and gloom? We're apparently doing great, and we never even knew.
+ TrackBacks (0) | Category: Business and Markets | Clinical Trials | Drug Development | Drug Industry History | Drug Prices | Why Everyone Loves Us
March 4, 2011
One of the side topics that's come up around here recently is the value of a scientific background in other jobs (and for life in general). I've thought about that for some time. Growing up, I was always interested in science, and I was always experimenting with things. I went through cycles of messing around in my spare time with the microscope, the telescope, chemistry experiments, electricity and radio, and back around again. I wasn't all that comprehensive and rigorous about any of it, but I think I did get the basic ideas of a scientist's world view.
Those, to me, are: (1) the natural world is independent of human thought. Your beliefs may be of interest to you, but the physical world is indifferent to them. (2) The natural world has rules. They may not be very clear, and they may be wildly complex, but there are rules, and they can be potentially figured out. (3) The way to figure them out, if you're so inclined, is to ask questions of the world in an organized fashion. These can be observations (in which case, the question is "I wonder what's there and what it looks like?"), or experiments ("I wonder what happens if I do this?"). And (4), since the world is so complex, you'd better make your questions as well-thought-out as possible. Try to identify all the variables you can, only mess with one of them at a time if at all possible, and value reproducibility very highly.
It's surprising, when you look at the record, to find out how little this view of the world has held sway over human history. There were various well-known outbreaks of such thinking in the past, but it's really only been a continuous effort in the last few centuries, and not everywhere in the world, by any means. (If you're interested in seeing just what a profound change has resulted in human affairs, I can recommend A Farewell to Alms.) The results, for better or worse, we see around us, not least of which is the keyboard I'm using to type these thoughts, and the network that I'm going to send them out over in a few minutes.
So in one respect, a scientific outlook must be worth something, since it's the backdrop for the entire modern world. But it's possible, more than possible, to live in it without being aware of things in that way. I think that for any kind of work that requires brainpower and adaptability, a scientific background should come in handy. But how handy? That's my question for today. I know what I'd like the answer to be - but see that first principle above. The world doesn't have to give you the answers you like, or even care if you like one at all.
For some possible background, see the recent Edge.org question "What scientific concept would improve everybody's cognitive toolkit?". I was invited to contribute to this one as well, but wasn't able to put my thoughts in a coherent enough form.
Update: fixed the numbering of the points. Yessireebob, I'm a scientist, all right.
+ TrackBacks (0) | Category: Who Discovers and Why
March 3, 2011
Here's a call to make something different out of the postdoctoral position. Says Jennifer Rohn in Nature News:
". . .we should professionalize the postdoc role and turn it into a career rather than a scientific stepping stone.
Consider the scientific community as an ecosystem, and it is easy to see why postdocs need another path. The system needs only one replacement per lab-head position, but over the course of a 30–40-year career, a typical biologist will train dozens of suitable candidates for the position. The academic opportunities for a mature postdoc some ten years after completing his or her PhD are few and far between. . .
The scientific enterprise is run on what economists call the 'tournament' model, with practitioners pitted against one another in bitter pursuit of a very rare prize. Given that cheap and disposable trainees — PhD students and postdocs — fuel the entire scientific research enterprise, it is not surprising that few inside the system seem interested in change. . .Few academics could afford to warn trainees against entering the ring — if they frightened away their labour force, research would grind to a halt.
Her proposed solution is to reduce the numbers of people being trained as graduate students, and staff up some permanent non-lab-head research positions. We'll debate the merits of that idea in just a moment, but right off, I have a hard time seeing how this could (or would) ever be adopted. Basically, it's asking academic research departments to act against what they see as their own interests. Those relatively cheap workers that you bring in every year, push along, and move out the door? Why don't you replace them with more expensive people who never leave?
No, even if too many people are going through graduate programs, I think that the only way to see real changes is for the people responsible to believe that those changes are desirable - that they're something they want to do, something that's beneficial for them. If the current system can trundle along, taking in fresh students and excreting PhDs, then it probably will continue doing just that. The whole academic research system runs on bringing in grant money (and its overhead), and for that you need bodies in the lab. Bodies generate results, and results are what you need for grant renewals, which give you money to hire more bodies as the earlier crop leaves.
Leaves for what? Well, "when the rocket goes up, who cares where it comes down?" What the graduate students (and postdocs) go on to is, from the university's perspective, not really their problem. And that's why I don't see this proposal going anywhere: it's asking the academic research establishment to do something for the postdocs of the world, to which the answer will be an eloquent indifference.
OK, even if it's not going to happen, should it (in some other world)? Actually, in several labs I've known, it already does. I think many of us have seen "perpetual postdocs", people who just seem to hang around the labs forever, acting as right-hand-assistants to the boss. To be honest, I've always seen the situation these people are in as sort of sad, but compared to unemployment, I suppose not.
But that brings up another aspect of this proposal - its near-total academocentricity. Read it, and you'd never get the idea that there's anything outside the university research environment. The whole point of life is to become a lab head, bringing in the grant money and taking on graduate students. Right? This is the world view of someone who's been in academia too long (or at least bought too thoroughly into its culture). There are places to do research outside of the ivy-covered walls. Not as many of them as there were a few years ago, true, and that's another whopper of a problem, one that gets discussed around here with great frequency. The traditional answer to "I can't find a faculty position" has been "Go and find a job, then". If that part of the ecosystem is permanently broken, then post-docs have even more trouble than the Nature column is imagining. . .
+ TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets
March 2, 2011
+ TrackBacks (0) | Category: Life in the Drug Labs
Back in August, I noted that Mannkind - who have been developing an inhaled insulin product for many years now - had done a stock-swap deal with Seaside 88. That, I thought, was not a good sign. They're an investment group that I profiled (unfavorably) here, in reference to their dealings with Generex (another spray-insulin company, allegedly working on an oral delivery route).
Adam Feuerstein's the guy who put me on to Generex. (Last I heard, he was getting sued by them for his comments, although his opinions seemed to me to be well justified. No updates on that, as far as I know). He's also recently updated the Mannkind situation, and it's not looking good. Last month the company fired about 40% of its workforce, and apparently has about enough cash on its books to make it to the end of the year. Its founder, Al Mann, has plowed a lot of his own money into the company, but on a recent conference call, he declined to say if he's going to put in any more. Mann is a real believer, and has given this his best shot. But it may not be enough.
The class-action suits are already fluttering through the air. And the bubbling tar pit that is spray-delivered insulin continues to churn.
+ TrackBacks (0) | Category: Business and Markets | Diabetes and Obesity
A reader who's attending the International Congress on Heterocyclic Chemistry in Glasgow later this year sent me a note about it. Like many such meetings, they have guidelines for presentation and poster abstracts. But this one was done by someone who's been around the block a few times.
The sample abstract is from a team from the University of Utopia (and in case you're wondering, ".ut" is apparently the internet domain for Utopia). And the authors, in a nice touch, are Black, Schwartz, Nero, Fekete, and Čzerný. (Too bad the other students in the group - Siyah, Dubh, and Musta - couldn't make it onto the list). But here's the text of the thing itself:
Two fundamentally different but complementary transition metal catalyzed chemo-, regio-,diastereo-, enantio-, and grantproposalo-selective approaches to the synthesis of a library of biologically significant nano- and pico-molecules will be presented with the focus on reaction mechanism and egocentric effects. The role of the nature of the metal, ligand, solvent, temperature, time, microwave, nanowave, picowave, ultrasound, hypersound, moon phase, and weather in this catalytic, sustainable, cost-effective, and eco-friendly technology will be discussed in detail.
If nothing else, that's about as grantproposalo-selective as it gets, right there. . .
+ TrackBacks (0) | Category: The Scientific Literature
March 1, 2011
Thanks to a reader, here's a committee of Parliament in the UK looking into the closure of the Pfizer site at Sandwich. The first part is mostly background on what shape the industry is in these days, then four executives from Pfizer come on at about the 16:02 mark. Many questions are asked about why Sandwich in particular, why Pfizer's doing what it's doing in general, when it was known that the site was going to close, and so on. I've dug through the hearing in several places, but haven't listened to the whole thing - UK readers might wish to.
+ TrackBacks (0) | Category: Business and Markets
The Genentech/Roche drug Avastin has been in the news a lot lately, mostly about cost/benefit analysis for its uses in oncology. It's nobody's idea of a cheap drug even for those indications where it shows results. But there's one therapeutic area where it's actually the bargain alternative.
That's AMD, wet age-related macular degeneration. Stopping the growth of those leaking blood vessels in the eye is the standard therapy for the condition, so a VEGF-targeted therapy is just the thing. Lucentis is the anti-VEGF antibody that's approved for that use; it showed very impressive results in the clinic, and seems to perform just as well in the real world.
But Lucentis is expensive. And while it's different from Avastin, it's really not that different. It is, in fact, an ophthalmic-delivery-optimized version of the same general antibody, and was developed by the same folks at Genentech. Avastin itself isn't packaged in units small enough for AMD therapy, but if you have a practice with a number of patients, well. . .by the time you split it out, an Avastin injection is about $50, versus nearly $2000 for Lucentis. In fact, a great many physicians in the US (possibly a majority) use Avastin off-label in just that fashion. A UK study last fall shored up that practice with some data, and a number of other studies are underway.
One of these, conducted by the NIH, should be reporting soon. And that's putting Roche/Genentech in an odd position. They have not supplied drugs for the trial, for one thing. Last fall the New York Times reported that rebates are now being offered to ophthalmologists if they'll use Lucentis, which many have interpreted as a preemptive maneuver to deal with the likely NIH results.
This is a mess, no doubt about it. While Genentech did indeed spend the time, money, and effort to develop Lucentis as a separate therapy, there seems to have been an active effort to avoid finding out if Avastin wouldn't have been just as good. The market does provide perverse incentives like this sometimes - this is an instance where I think that the NIH is doing exactly what it should be doing by running the head-to-head trial.
But I don't think that Roche is going to like the results. And they could find themselves arguing, simultaneously, that Avastin should not be used for AMD, even though it's cheaper than the alternatives and may well be just as effective, while Avastin should be used for metastatic breast cancer, even though it's more expensive than the alternatives and may well not be effective at all. And while the company will surely argue that the numbers are not what they appear, and that there are other numbers that say differently, and that it's all quite complex, they're going to be unable to escape the downward slice of Occam's razor: that in every case, they're arguing for the exact position that maximizes their revenue.
This is what companies do, of course. We shouldn't expect any less. But that doesn't mean that the revenue-maximizing path is always the right one, either.
+ TrackBacks (0) | Category: Clinical Trials | Drug Prices | Why Everyone Loves Us