About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek, email him directly: firstname.lastname@example.org
March 31, 2011
After my post the other day on the NIH neurological disease effort, I heard from Rebecca Farkas there, who's leading the medicinal chemistry effort on the program. She's glad to get feedback from people in the industry, and in fact is inviting questions and comments on the whole program. Contact her at farkasr-at-ninds-dot-nih-dot-gov (perhaps putting the address in that form will give the spam filters at NIH a bit less to do than otherwise).
She also sends word that they'll be advertising soon for a Project Manager position for this effort, and is looking for suggestions on how to reach the right audience for a good selection of candidates. This post might help a bit, but she's interested in suggestions on where to advertise and who to contact for good leads.
Category: Drug Development | The Central Nervous System
Venture-capital guy Bruce Booth has a provocative post, based on experience, about how reproducible those papers are that make you say "Someone should try to start a company around that stuff".
The unspoken rule is that at least 50% of the studies published even in top tier academic journals – Science, Nature, Cell, PNAS, etc… – can’t be repeated with the same conclusions by an industrial lab. In particular, key animal models often don’t reproduce. This 50% failure rate isn’t a data free assertion: it’s backed up by dozens of experienced R&D professionals who’ve participated in the (re)testing of academic findings. This is a huge problem for translational research and one that won’t go away until we address it head on.
Why such a high failure rate? Booth's own explanation is clearly the first one to take into account - that academic labs live by results. They live by publishable, high-impact-factor-journal results, grant-renewing tenure-application-supporting results. And it's not that there's a lot of deliberate faking going on (although there's always a bit of that to be found), as much as there is wishful thinking and running everything so that it seems to hang together just well enough to get the paper out. It's a temptation for everyone doing research, especially tricky cutting-edge stuff that fails a lot of the time anyway. Hey, it did work that time, so we know that it's real - those other times it didn't go so smoothly, well, we'll figure out what the problems were with those, but for now, let's just write this stuff up before we get scooped. . .
Even things that turn out to be (mostly) correct often aren't that reproducible, at least, not enough to start raising money for them. Booth's advice for people in that situation is to check things out very carefully. If the new technology is flaky enough that only a few people can get it to work, it's not ready for the bright lights yet.
He also has some interesting points on "academic bias" versus "pharma bias". You hear a lot about the latter, to the point that some people consider any work funded by the drug industry to be de facto tainted. But everyone has biases. Drug companies want to get compounds approved, and to sell lots of them once that happens. Academic labs want to get big, impressive publications and big, impressive grants. The consequences of industrial biases and conflicts of interest can be larger, but if you're working back at the startup stage, you'd better keep an eye on the academic ones. We both have to watch ourselves.
Update: by request, here's a translation of this page into Romanian
Category: Academia (vs. Industry)
March 30, 2011
Most interesting - here's the FDA's latest statement on Makena, in response to KV Pharmaceuticals sending letters to compounding pharmacies telling them to stop providing the drug, now that they have regulatory approval and market exclusivity:
. . .Because Makena is a sterile injectable, where there is a risk of contamination, greater assurance of safety is provided by an approved product. However, under certain conditions, a licensed pharmacist may compound a drug product using ingredients that are components of FDA approved drugs if the compounding is for an identified individual patient based on a valid prescription for a compounded product that is necessary for that patient. FDA prioritizes enforcement actions related to compounded drugs using a risk-based approach, giving the highest enforcement priority to pharmacies that compound products that are causing harm or that amount to health fraud.
FDA understands that the manufacturer of Makena, KV Pharmaceuticals, has sent letters to pharmacists indicating that FDA will no longer exercise enforcement discretion with regard to compounded versions of Makena. This is not correct.
In order to support access to this important drug, at this time and under this unique situation, FDA does not intend to take enforcement action against pharmacies that compound hydroxyprogesterone caproate based on a valid prescription for an individually identified patient unless the compounded products are unsafe, of substandard quality, or are not being compounded in accordance with appropriate standards for compounding sterile products. As always, FDA may at any time revisit a decision to exercise enforcement discretion.
The agency does not quite make clear what the "unique situation" might be, although they do mention the amount of work done by NIH-funded researchers that was part of the approval package. The FDA has, of course, no authority on pricing - but they do have other means at their disposal, and this is one of them. KV must be wondering at this point what, exactly, the phrase "market exclusivity" might mean. (The answer, for better or worse, is that it, and other statutory language, means whatever the regulatory authorities want it to mean, at least until something goes to the courts. Then it means whatever the courts want it to mean).
Overall, I think that this is a good thing, since (as I've said before) I think that the law in this case is providing a bit too much incentive, considering the relatively small risks involved in bringing hydroxyprogesterone caproate into the modern regulatory world. It worries me, though, that the FDA is making it so explicit that they plan to pick and choose which laws to enforce and how strictly they're going to enforce them. But honestly, it's always been this way, and a no-exceptions letter-of-the-law approach leads to craziness of its own. In this case, I think that clarifying the hazards of pushing things as hard as they can possibly be pushed will help make future business plans in this area a bit more realistic.
Category: Drug Prices | Regulatory Affairs
Ah, insider trading. It's the province of Wall Street types in really expensive shirts, right? Like in the movies? Well, read on.
Even the most clueless know that you're not supposed to trade on material nonpublic information, and the only really fuzzy part is what constitutes material information. A lawyer once told me that if you're an employee of a company, material information is "anything that makes you think about trading the stock". That's a pretty intelligent rule, and one that the recent Matrixx Supreme Court decision would seem to have reaffirmed. If someone could think it's nonpublic material information, odds are that it is.
In the drug business, the hottest potatoes in this category are the results of clinical trials and FDA decisions. People (a very short, well-defined, and well-paperworked list of people) inside a given company know the first news before anyone else, and people inside the FDA get to hear about the second. And there is no way that you can act on such information legally before it's released. Those tempted to try realize that, of course, and act accordingly.
They do, in fact, what Cheng Yi Liang (a chemist, regrettably) and his son Andrew Liang were accused yesterday of doing since 2006: they used the accounts of at least seven other people to trade on knowledge of FDA approval decisions, pulling in over three million dollars in the process. The single biggest winner (over $1 million) appears to have been front-running the surprise approval of Vanda Pharmaceuticals' Fanapt in 2009. It wouldn't surprise me if this was the one that blew up the whole business. That was such an unexpected move by the FDA (after which the stock went up by a factor of six) that the SEC must have gone back and carefully checked to see if anyone had been building up a position beforehand.
Liang got in on most of the big percentage moves of the last few years: Mannkind, Momenta, Pharmacyclics and many others, all small companies whose stocks saw some major action in both directions. If you want more details, here's the SEC complaint (PDF). It's a blueprint for getting caught, I should add. The various friend-and-family brokerage accounts mostly listed Liang's phone numbers as contact information, and almost always transferred money to an account held by Liang and his wife. The trading was done (one account right after the other) from IP addresses associated with his home account or voice lines billed to his name - this for accounts like the one ostensibly held by his 84-year-old mother back in China. Honestly, ten minutes after the SEC got suspicious about this guy and started checking him out, they must have known that they had him by the valuable body parts. It was really just a matter of time - well, time and greed.
Interestingly, Liang worked for the FDA for ten years before he seems to have decided to cash in. It would be interesting to know what went on, but my guess is that it's a familiar story. I think that he watched these decisions being made, watched the stocks jump around, thought about the profits to be made, and didn't act on those desires. Until one day he finally did - and nothing happened. So he probably told himself that he got away with it that time, and really shouldn't do that again for fear of getting caught - until he did it again, and didn't get caught. By this time, from the accounts you read of people in such situations, the hook is well and truly set. There may be a few people who are philosophical enough to take a set amount of money and walk away, but I'll bet that they're mighty scarce compared to the number of people who can't keep themselves from riding the train until, to their surprise, it suddenly pulls into a station.
Category: Business and Markets | Regulatory Affairs | The Dark Side
March 29, 2011
Man, am I getting all kinds of comments (here and by e-mail) about my views on modeling, QSAR, and the like. I thought it might be helpful for me to clarify my position on these things.
First off, structure. It's a valuable thing to have. My comments on the recent Nature Reviews Drug Discovery article were not meant to suggest otherwise, just to point out that the set of examples the authors picked to make this point was (in my view) flawed. It's actually surprisingly hard to come up with good comparison sets that isolate the effect of having structural information on the success of drug discovery projects. There are too many variables, and too many of them aren't independent. But just because a question (does having structural information help, overall?) is hard to answer doesn't mean that the answer is "no".
As an aside, since I've talked here about my admiration for fragment-based approaches, my own opinion should have been pretty clear already. Doing fragment-based drug discovery without good structural information looks to be very hard indeed.
Now, that said, there's structure and there's structure. Like every other tool in our kit, this one can be used well or used poorly. I think that fragment projects (to pick one example) get a lot of bang-for-the-buck out of structural data, and at the opposite end of the scale are those projects that only get good X-ray data after they've sent their compound to the clinic. No, wait, let me take that back. In those cases, the structure did no good, but it also did no harm. At the true opposite end of the scale are the projects where having structural data actually slowed things down. That's not frequent, but it does happen. Sometimes you have solid data, but for one reason or another the X-ray isn't corresponding to what's happening in real life. And sometimes this kicks in when medicinal chemists try to make too much out of less compelling structural data, just because it's all they have.
Now for in silico techniques. I have a similar attitude towards modeling of all kinds, but at one further remove than physical structure data. That is, I think it can be used well or used poorly, but I think that (for various reasons) the chances of using it poorly are somewhat increased. One reason is that modeling can be very hard to do well, naturally. And at the same time, tools with which to model conformations, docking, and so on are pretty widely available, which leads to a fair amount of work from people who really don't know what they're doing. Another reason is that the validity of any given model is of limited scope, as is the case with any mental construct that we have about what our molecules are doing, whether we used a software package or waved our hands around in the air. The software-package version of some binding model is more likely to have a wider range of usefulness than the hand-waving one, but they'll both break down at some point as you explore a range of compounds.
The key then is to figure out as quickly as possible if the project you're working on would be enhanced by modeling, or if such modeling would be merely ornamental, or even harmful. And that's not always easy to do. Any reasonable model is going to need a few iterations to get up to speed, generally requiring some specific compounds to be made by the chemists, and if you're running a project, you have to decide how much effort is worth spending to do that. You don't want to end up endlessly trying to refine the model, but at the same time, that model could turn out to be very useful after a few more turns of the crank. Which way to go? The same decisions apply, naturally, to the folks standing in front of the hoods, even without any modeling. How many more compounds are worth making in a given series? Would that effort be better used somewhere else? These calls are why we're paid the approximation of the big bucks.
So, while I don't think that modeling is an invariable boon to a project, neither do I think it's a waste of time. Sometimes it's one, and sometimes it's the other, and most of the time it's a mix of each - just like ideas at the bench. When modeling works, it can be a real help in sending the chemists down a productive path. On the other hand, you can certainly run a whole project with no modeling at all, just good old-fashioned analoging from the labs. It's the job of modelers to make the first possibility more likely and more attractive, and the job of the chemists and project managers to be open to that (and to be ready to emphasize or de-emphasize things as they develop).
This point of view seems reasonable to me (which is why I hold it!). But it also exposes me to complaints from people at both ends of the spectrum. I'm a lot more skeptical of in silico approaches than are many true believers, but I don't want to make the mistake of dismissing them outright.
Category: In Silico
Here's an interesting funding opportunity from NIH:
Recent advances in neuroscience offer unprecedented opportunities to discover new treatments for nervous system disorders. However, most promising compounds identified through basic research are not sufficiently drug-like for human testing. Before a new chemical entity can be tested in a clinical setting, it must undergo a process of chemical optimization to improve potency, selectivity, and drug-likeness, followed by pre-clinical safety testing to meet the standards set by the Food and Drug Administration (FDA) for clinical testing. These activities are largely the domain of the pharmaceutical industry and contract research organizations, and the necessary expertise and resources are not commonly available to academic researchers.
To enable drug development by the neuroscience community, the NIH Blueprint for Neuroscience Research is establishing a ‘virtual pharma’ network of contract service providers and consultants with extensive industry experience. This Funding Opportunity Announcement (FOA) is soliciting applications for U01 cooperative agreement awards from investigators with small molecule compounds that could be developed into clinical candidates within this network. This program intends to develop drugs from medicinal chemistry optimization through Phase I clinical testing and facilitate industry partnerships for their subsequent development. By initiating development of up to 20 new small-molecule compounds over two years (seven projects were launched in 2011), we anticipate that approximately four compounds will enter Phase 1 clinical trials within this program.
My first thought is that I'd like to e-mail that first paragraph to Marcia Angell and to all the people who keep telling me that NIH discovers most of the drugs on the market. (And as crazy as that sounds, I still keep running into people who are convinced that that's one of those established facts that Everyone Knows). My second thought is that this is worth doing, especially for targeting small or unusual diseases. There could well be interesting chemical matter or assay ideas floating around out there, looking for the proper environment to have something made of them.
My third thought, though, is that this could well end up being a real education for some of the participants. Four Phase I compounds out of twenty development candidates - it's hard to say if that's optimistic or not, because the criteria for something to be considered a development candidate can be slippery. And that goes for the drug industry too, I hasten to add. Different organizations have different ideas about what kinds of compounds are worth taking to the clinic, and those criteria vary by disease area, too. (Sad to say, they can also vary by time of the year and the degree to which bonuses are tied to hitting number-of-clinical-candidate goals, and anyone who's been around the business a while will have seen that happen, to their regret).
It'll be interesting to see how many people apply for this; the criteria look pretty steep to me:
Applicants must have available small-molecule compounds with strong evidence of disease-related activity and the potential for optimization through iterative medicinal chemistry. Applicants must also be able to conduct bioactivity and efficacy testing to assess compounds synthesized in the development process and provide all pre-clinical validation for the desired disease indication. . .This initiative is not intended to support development of new bioactivity assays, thus the applicant must have in hand well-characterized assays and models.
Hey, there are small companies out there that don't come up to that standard. To clarify, though, the document does say that "Evaluation of the approach should focus primarily on the rationale and strengths/weaknesses of proposed bioactivity studies and compound "druggability," since all other drug development work (e.g., medicinal chemistry, PK/tox, phase I clinical testing) will be designed and implemented by NIH-provided consultants and contractors after award", which must come as something of a relief.
What's interesting to me, though, is that the earlier version of this RFA (from last year) had the following language:
The ultimate goals of this Neurotherapeutics Grand Challenge are to produce at least one novel and effective drug for a nervous system disorder that is currently poorly treated and to catalyze industry interest in novel disease targets by demonstrating early-stage success.
That's missing this time around, which is a good thing. If they're really hoping for a drug to come out of four Phase I candidates in poorly-treated CNS disorders, then I'd advise them to keep that thought well hidden. The overall attrition rate in the clinic in CNS is somewhere around (and maybe north of) 90%, and if you're going to go after the tough end of that field it's going to be even steeper.
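To put some rough numbers on that worry, here's a back-of-the-envelope sketch in Python. It's not from the post itself; it just combines the RFA's own projection (twenty development candidates yielding about four Phase I compounds) with the ~90% clinical attrition figure quoted above, treating each Phase I entrant as an independent shot on goal, which is of course a simplifying assumption.

```python
# Back-of-the-envelope pipeline arithmetic for the NIH program.
# Assumptions: each Phase I entrant succeeds independently with
# probability (1 - attrition); the 90% figure is the rough overall
# CNS clinical attrition rate mentioned in the post.

def expected_approvals(phase1_entrants, attrition):
    """Expected number of approved drugs from a set of Phase I entrants."""
    return phase1_entrants * (1 - attrition)

phase1 = 4  # the RFA's projection: ~4 compounds into Phase 1

for attrition in (0.90, 0.95):
    approvals = expected_approvals(phase1, attrition)
    print(f"attrition {attrition:.0%}: ~{approvals:.1f} expected approvals")
```

At 90% attrition, four Phase I compounds give you an expected 0.4 approved drugs; at the steeper rates you'd expect for poorly-treated CNS indications, even less. Which is exactly why "at least one novel and effective drug" was an unwise thing to put in writing.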
Category: Academia (vs. Industry) | Drug Development | The Central Nervous System
March 28, 2011
A friend on the computational/structural side of the business sent along this article from Nature Reviews Drug Discovery. The authors are looking through the Thomson database at drug targets that are the subject of active research in the industry, and comparing the ones that have structural information available to the ones that don't: enzyme targets with high-resolution structures versus GPCRs without them. They're trying to see if structural data is worth enough to show up in the success rates (and thus the valuations) of the resulting projects.
Overall, the Thomson database has over a thousand projects in it from these two groups, a bit over 600 from the structure-enabled enzymes and just under 500 GPCR projects. What they found was that 70% of the projects in the GPCR category were listed as "suspended" or "discontinued", but only 44% of the enzyme projects were so listed. In order to correct for probability of success across different targets, the authors picked ten targets from each group that have led, overall, to similar numbers of launched drugs. Looking at the progress of the two groups, the structure-enabled projects are again lower in the "stopped" categories, with corresponding increases in discovery and the various clinical phases.
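For what it's worth, here's a quick sketch recomputing the size of that gap from the approximate counts above. The counts are my paraphrase of the article's figures ("a bit over 600", "just under 500"), not anything pulled from the Thomson database itself, so treat the absolute numbers as illustrative only.

```python
# Rough recomputation of the "stopped" gap between the two target
# classes, using the approximate project counts quoted in the post.
# These are paraphrased figures, not actual database extracts.

projects = {
    "enzyme (structure-enabled)": {"total": 600, "stopped_frac": 0.44},
    "GPCR (no structure)":        {"total": 500, "stopped_frac": 0.70},
}

for name, p in projects.items():
    stopped = round(p["total"] * p["stopped_frac"])
    print(f"{name}: ~{stopped} of {p['total']} projects "
          f"suspended/discontinued ({p['stopped_frac']:.0%})")
```

That's roughly 264 of 600 enzyme projects stopped versus 350 of 500 GPCR projects, a gap large enough to look meaningful, which is exactly why the confounding variables discussed below matter so much.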
You have to go to the supplementary info for the targets themselves, but here they are: for the enzymes, it's DPP-IV, BCR-ABL, HER2 kinase, renin, Factor Xa, HDAC, HIV integrase, JAK2, Hep C protease, and cathepsin K. For the receptor projects, the list is endothelin A receptor, P2Y12, CXCR4, angiotensin II receptor, sphingosine-1-phosphate receptor, NK1, muscarinic M1, vasopressin V2, melatonin receptor, and adenosine A2A.
Looking over these, though, I think that the situation is more complicated than the authors have presented. For example, DPP-IV has good structural information now, but that's not how people got into the area. The cyanopyrrolidine class of inhibitors, which really jump-started the field, was made by analogy to a reported class of prolyl endopeptidase inhibitors (BOMCL 1996, p. 1163). Three years later, the most well-characterized Novartis compound in the series was being studied by classic enzymology techniques, because it still wasn't possible to say just how it was binding. But even more to the point, this is a well-trodden area now. Any DPP-IV project that's going on now is piggybacking not only on structural information, but on an awful lot of known SAR and toxicology.
And look at renin. That's been a target forever, structure or not. And it's safe to say that it wasn't lack of structural information that was holding the area back, nor was it the presence of it that got a compound finally through the clinic. You can say the same things about Factor Xa. The target was validated by naturally occurring peptides, and developed in various series by classical SAR. The X-ray structure of one of the first solid drug candidates in the area (rivaroxaban) bound to its target came after the compound had been identified and the SAR had been optimized. Factor Xa efforts going on now are also standing on the shoulders of an awful lot of work.
In the case of histone deacetylase, the first launched drug in that category (SAHA, vorinostat) had already been identified before any sort of X-ray structure was available. Overall, that target is an interesting addition to the list, since there are actually a whole series of them, some of which have structural information and some of which don't. The big difficulty in that area is that we don't really know what the various roles of the different isoforms are, and thus how the profiles of different compounds might translate to the clinic, so I wouldn't say that structural data is helping with the rate-determining steps in the field.
On the receptor side, I also wouldn't say that it's lack of structural information that's necessarily holding things back in all of those cases, either. Take muscarinic M1 - muscarinic ligands have been known for a zillion years. That encompasses fairly selective antagonists, and hardly-selective-at-all agonists, so I'm not sure which class the authors intended. If they're talking about antagonists, then there are plenty already known. And if they're talking about agonists, I doubt that even detailed structural information would help, given the size of the native ligand (acetylcholine).
And the vasopressin V2 case is similar to some of the enzyme ones, in that there's already an approved drug in this category (tolvaptan), with several others in the same structural class chasing it. Then you have the adenosine A2A field, where long lists of agonists and antagonists have been found over the years, structure or not. The problem there has been finding a clinical use for them; all sorts of indications have been chased over the years, a problem that structural information would have not helped with in the least.
Now, it's true that there are projects in these categories where structure has helped out quite a bit, and it's also true that detailed GPCR structures would be welcome (and are slowly coming along, for that matter). I'm not denying either of those. But what does strike me is that there are so many confounding variables in this particular comparison, especially among the specific targets that are the subject of the article's featured graphic, that I just don't think that its conclusions follow.
Category: Drug Development | Drug Industry History | In Silico
March 25, 2011
The Supreme Court came down with a decision the other day (Matrixx Initiatives v. Siracusano) that the headlines say will have an impact on the drug industry. Looking at it, though, I don't see how anything's changed.
The silly-named Matrixx is the company that made Zicam, the zinc-based over-the-counter cold remedy that was such a big seller a few years back. You may or may not remember what brought it down - reports that some people suffered irreversible loss of their sense of smell after using the product. That's a steep price to pay for what may or may not have been any benefit at all (I never found the zinc-for-colds data very convincing, not that there were a lot of hard numbers to begin with).
This case grew out of a shareholder lawsuit, which alleged (as shareholder lawsuits do) that the company knew that there was trouble coming and had insufficiently informed its investors in time to keep them from losing buckets of their money. To get a little more specific about it, the suit claimed that Matrixx had received at least a dozen reports of anosmia between 1999 and 2003, but had said nothing about them - and more to the point, had continued to make positive statements about Zicam the whole way. The suit alleges that these statements were, therefore, false and misleading.
And that's what sent this case up the legal ladder, eventually to the big leagues of the Supreme Court. At what point does a company have an obligation to report such adverse events to the public and to its shareholders? Matrixx contended that the bar was statistical significance, and that anything short of that was not a "material event" that had to be addressed, but the Court explicitly shut that down in their decision:
"Matrixx’s premise that statistical significance is the only reliable indication of causation is flawed. Both medical experts and the Food and Drug Administration rely on evidence other than statistically significant data to establish an inference of causation. It thus stands to reason that reasonable investors would act on such evidence. Because adverse reports can take many forms, assessing their materiality is a fact-specific inquiry, requiring consideration of their source, content, and context. . .
Assuming the complaint’s allegations to be true, Matrixx received reports from medical experts and researchers that plausibly indicated a reliable causal link between Zicam and anosmia. Consumers likely would have viewed Zicam’s risk as substantially outweighing its benefit. Viewing the complaint’s allegations as a whole, the complaint alleges facts suggesting a significant risk to the commercial viability of Matrixx’s leading product. It is substantially likely that a reasonable investor would have viewed this information “ ‘as having significantly altered the “total mix” of information made available.’ "
I think that's a completely reasonable way of looking at the situation. (Note: that "total mix" language is from an earlier decision, Basic, Inc. v. Levinson, that also dealt with disclosure of material information). The other issue in this case is what the law calls scienter, broadly defined as "intent to deceive". As the decision explains, this can be assumed to hold when a reasonable person would find it as good an explanation of a defendant's actions as any other that could be drawn. And in this case, since Zicam was Matrixx's entire reason to exist, and since a link with permanent damage to a customer's sense of smell would surely damage sales immensely (which is exactly what happened), a reasonable person would indeed find that the company had a willingness to keep such information quiet.
But here's the puzzling part - not the Court's decision, which is short, clear, and unanimous, but the press coverage. This is being headlined as a defeat for Big Pharma, but I don't see it. We'll leave aside the fact that Matrixx is not exactly Big Pharma, although I'm sure that they were, for a while, making the Big Money selling Zicam. No, the thing is, this decision leaves things exactly as they were before. (Nature's "Great Beyond" blog has it exactly right).
It's not like statistical significance was the cutoff for press-releasing adverse events before, and now the Supreme Court has yanked that away. No, Matrixx was trying to raise the bar up to that point, and the Court wasn't having it. "The materiality of adverse event reports cannot be reduced to a bright-line rule", the decision says, and there was no such rule before. The Court, in fact, had explicitly refused another attempt to make such a rule in that Basic case mentioned above. No, Matrixx really had a very slim chance of prevailing in this one; current practice and legal precedent were both against them. As far as I can tell, the Court granted certiorari in this case just to nail that down one more time, which should (one hopes) keep this line of argument from popping up again any time soon.
By the way, if you've never looked at a Supreme Court decision, let me recommend them as interesting material for your idle hours. They can make very good reading, and are often (though not invariably!) well-written and enjoyable, even for non-lawyers. I don't exactly have them on my RSS feed (do they have one?), but when there's an interesting topic being decided, I've never regretted going to the actual text of the decision rather than only letting someone else tell me what it means.
Category: Business and Markets | Regulatory Affairs | Toxicology
March 24, 2011
I wanted to do some follow-up on the Makena story - the longtime progesterone ester drug that has now been newly FDA-approved and newly made two orders of magnitude more expensive. (That earlier post has the details, for those who might not have been following).
Steve Usdin at BioCentury has, in the newsletter's March 21st issue, gone into some more detail about the whole process where KV Pharmaceuticals stepped in under the Orphan Drug Act to pick up exclusive marketing rights to the drug. The company, he says, "arguably has played a marginal role" in getting the drug back onto the market.
Here's the timeline, from that article and some digging around of my own: in 1956, Squibb got FDA approval for the exact compound (hydroxyprogesterone caproate) for the exact indication (preventing preterm labor), under the brand name Delalutin. But at that time, the FDA didn't require proof of efficacy, just safety. There were several small, inconclusive academic studies during the 1960s. In 1971, the FDA noted that the drug was effective for abnormal uterine bleeding and other indications, and was "probably effective" for preventing preterm delivery. In 1973, though, based on further data from the company, the agency went back on that statement, said that there was now evidence of birth defects from the use of Delalutin in pregnant women, and withdrew these as approved uses. In the late 1970s, further warning language was added. In 1989, the agency said that its earlier concerns (heart and limb defects) were unfounded, but warned of others. By 1999, the FDA had concluded that progesterone drugs were too varied in their effects to be covered under a single set of warnings, and took the warning labels off.
In 1998, the National Institute of Child Health and Human Development launched a larger, controlled study, but this was an example of bad coordination all the way. By this time, Bristol-Myers Squibb had requested that Delalutin's NDAs be revoked, saying that they hadn't even sold the compound for several years. That request seems also to have been a response to FDA complaints about earlier violations of manufacturing guidelines, and to a request to recall the outstanding stocks of the drug. So the NICHD study was terminated after a year, with no results, and the drug's NDA was revoked as of September, 2000.
The NICHD had started another study by then, however, although I'm not sure how they solved their supply problems. This is the one that reported data in 2003, and showed a statistically significant benefit in preventing preterm delivery. More physicians began to prescribe the drug, and in 2008, the American College of Obstetricians and Gynecologists recommended its use.
So much for the medical efficacy side of the story. Now we get back to the regulatory and marketing end of things. In March of 2006, a company called CUSTOpharm asked the FDA to determine if the drug had been withdrawn for reasons of safety or efficacy - basically, was it something that could be resubmitted as an ANDA? The agency determined that the compound was so eligible.
Meanwhile, another company called Adeza Biomedical was moving in the same direction (as far as I can tell, they and CUSTOpharm had nothing to do with each other, but I don't have all the details). Adeza submitted an NDA in July 2006, under the FDA's provision for using data that the applicant itself had not generated - in fact, they used the NICHD study results. They called the compound Gestiva, and asked for accelerated approval, since preterm delivery was accepted as a surrogate for infant mortality. An advisory committee recommended this in August of 2006, by a 12 to 9 vote. (Scroll down to the bottom of this page for the details).
The agency sent Adeza an "approvable" letter in October 2006 which asked for more animal studies. The next year, Adeza was bought by Cytyc, who were bought by Hologic, who sold the Gestiva rights to KV Pharmaceuticals in January 2008. So that's how KV enters the story: they bought the drug program from someone who bought it from someone who just used a government agency's clinical data.
The NDA was approved by the FDA in February 2011, along with a name change to Makena. By this time, KV and Hologic had modified their agreement - KV had already paid up nearly $80 million, with another $12.5 million due with the approval, and has further payments to make to Hologic which would take the total purchase price up to nearly $200 million. That's been their main expense for the drug, by far. The FDA has asked them to continue two ongoing studies of Makena - one placebo-controlled trial to look at neonatal mortality and morbidity, and one observational study to see if there are any later developmental effects. Those studies will report in late 2016, and KV has said that their costs will be in the "tens of millions". So they paid more for the rights to Makena than it's costing them to get it studied in the clinic.
That only makes sense if they can charge a lot more than the generic price for the drug had been, of course, and that's what takes us up to today, with the uproar over the company's proposed price tag of $1500 per treatment. But the St. Louis Post-Dispatch (thanks to FiercePharma for the link) says that the company has now filed its latest 10-Q with the SEC, and is notifying investors that its pricing plans are in doubt:
The success of the Company’s commercialization of Makena™ is dependent upon a number o