Corante

About this Author
College chemistry, 1983

Derek Lowe, the 2002 model

After 10 years of blogging. . .

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com Twitter: Dereklowe

Chemistry and Drug Data: Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Category Archives

October 28, 2014

An Open-Source Cancer Pitch, Deconstructed


Posted by Derek

I'm confused. Read this and see if you end up the same way. TechCrunch has the story of Isaac Yonemoto, who's crowdsourcing a project around a potential oncology compound. It's a derivative of sibiromycin, a compound I hadn't come across, but it seems that it was first studied in Russia, and then at Maryland. Yonemoto's own work on the compound is in this paper from 2012, which looks reasonable. (Here's more). And the crowdfunding pitch is also reasonable, in lay-audience terms:

The drug candidate 9DS was developed at the University of Maryland. The last work done on the drug showed that it had activity against cancer competitive with leading cancer drugs such as taxol. Moreover, 9DS is also likely to have lower side effects than most chemotherapies, since a related compound, SJG-136, seems to have low side effects in early clinical trials.

Project Marilyn involves: production of more 9DS, and submitting 9DS to a xenograft study ('curing cancer in mice'). This is the next step in drug development and an important one on the way to doing clinical (human) studies. The process we're seeking to fund should take approximately 6 months. If we recieve more funding, we will add stretch goals, such as further preclinical experiments on 9DS, development 9DS analogs, or other exciting anti-cancer ideas.

But here's where things begin to swerve off into different territory. Yonemoto isn't just talking about some preclinical spadework on yet another oncology compound (which is what the project actually is, as far as I can tell). He's pitching it in broader terms:

. . .Some drugs can cost upwards of $100,000 a year, bankrupting patients. This level of expense is simply unacceptable, especially since 1/3 of people will get cancer in their lifetime.

One solution to this problem is to develop unpatented drugs - pharmaceutical companies will have to sell them at a reasonable price. To those who believe that drugs cannot be made without patents we remind them:

When Salk and Sabin cured polio, they didn't patent the vaccine. It's time to develop a patent-free anticancer drug for the 21st century.

The software industry and the open-source movement have shown that patenting is not necessary for innovation. Releasing without a patent means the drugs will be cheaper and it will be easier to build on the work to make improved drugs or drug combinations. Releasing without a patent means expanded access to drugs in countries that can't afford extensive licensing and export agreements.

OK, let's take this one apart, piece by piece, in good old classic blogging style. Yes, some oncology drugs are indeed very expensive. This is more of a problem for insurance companies and governments, since they're paying nearly all of these costs, but the topic of drug prices in oncology has come up around here many times, and will do so again. It's especially worrisome for me that companies are already up close to the what-the-market-will-possibly-bear price with things that are not exactly transformative therapies (what pricing structure will those have?).

But are unpatented drugs the solution? It seems to me that pharmaceutical companies will not "have to sell them at a reasonable price". Rather, unpatented compounds will simply not become drugs. Yonemoto, like so many others who have not actually done drug development, is skipping over the longest, most difficult, and most expensive parts of the process. Readers of the crowdsourcing proposal might be forgiven if they don't pick up on this, but getting a compound to work in some mouse xenograft models does not turn it into a drug. Preparing a compound to go into human trials takes a lot more than that: a reliable scale-up route to the compound itself, toxicology studies, more detailed pharmacokinetic studies, formulation studies. This can't be done by a handful of people: a handful of people don't have the resources and expertise. And that's just setting the stage for the real thing: clinical trials in humans. That crowdsourcing proposal skates over it, big-time, but the truth is that the great majority of money in drug development is spent in the clinic. The amount of money Yonemoto is raising, which is appropriate for the studies he's planning, is a roundoff error in the calculations for a decent clinical campaign.

So who's going to do all that? A drug company. Are they going to take that on with an unpatented compound that they do not own? They are not. Another thing that a lay reader won't get from reading Yonemoto's proposal is that the failure rate for new oncology compounds in the clinic is at least 90%, and probably more like 95%. If you are going to spend all that money developing compounds that don't make it, you will need to make some money when one of them finally does. If a compound has no chance of ever doing that, no one's even going to go down that road to start with.
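
To put rough numbers on that (and these are purely illustrative figures, not anything from Yonemoto's proposal or from any real program), here's the back-of-the-envelope version:

    # Back-of-the-envelope economics of oncology development. Every number here is
    # an illustrative assumption, not data from any particular company or program.
    clinical_cost_per_attempt = 50e6   # assumed average clinical spend per candidate entering trials
    success_rate = 0.05                # ~95% of oncology candidates entering the clinic fail

    attempts_per_approval = 1 / success_rate
    sunk_cost_per_approval = attempts_per_approval * clinical_cost_per_attempt

    print(f"Candidates taken into the clinic per approval: {attempts_per_approval:.0f}")
    print(f"Clinical spending sunk per approved drug: ${sunk_cost_per_approval / 1e9:.1f} billion")
    # Under these assumptions, each approved drug has to pay back roughly a billion
    # dollars of spending on the failures before anyone breaks even - and a compound
    # that can never recoup that money doesn't get taken down the road at all.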

Now we get to the Salk/Sabin patent example. There are plenty of persistent myths about the polio vaccine story (this book review at Technology Review is a good intro to the subject). Jonas Salk created one of the most enduring myths when he famously told Edward R. Murrow in an interview that "There is no patent. Would you patent the sun?". But the idea of patenting his injected, killed-virus vaccine had already been looked into, and lawyers had determined that any application would be invalidated by prior art. (Salk himself, in his late work on a possible HIV vaccine, did indeed file patent applications).

Sabin's oral attenuated-virus vaccine, on the other hand, was indeed deliberately never patented. But this does not shed much light on the patenting of drugs for cancer. The Sabin polio vaccine protected all comers after a single dose. The public health implications of a polio vaccine were obvious and immediate: polio was everywhere, and anyone could get it. But Yonemoto's 9DS is not in that category: cancer is not a single disease like polio, and is not open to a single cure. Even if a sibiromycin derivative makes it to market (and they've been the subject of research for quite a while now), it will do what almost every other cancer drug does: help some people, to a degree, for a while. The exceptions are rare: patients who have a tumor type that is completely dependent on a particular mechanism, and that doesn't mutate away from that phenotype quickly enough. Most cancer patients aren't that fortunate.

So here's the rough part of cancer drug discovery: cancer, broadly speaking, is indeed a big public health issue. But we're not going to wipe it out the way the polio and smallpox vaccines wiped out their homogeneous diseases. Cancer isn't caused by a human-specific infectious agent that we can eliminate from the world. It crops up over and over again as our cells divide, in thousands of forms, and fighting it is going to take tremendous diagnostic skill and an array of hundreds of different therapies, most of which we haven't discovered yet. And money. Lots of money.

So when Yonemoto says that "The software industry and the open-source movement have shown that patenting is not necessary for innovation", he's comparing apples and iguanas. Drug discovery is not like coding, unfortunately: you're not going to have one person from San Jose pop up and add a chlorine atom to the molecule while another guy pulls an all-nighter in St. Louis and figures out the i.v. formulation for the rat tox experiments. The pitch on Indysci.org, which is really about doing some preliminary experiments, makes it sound like the opening trumpet of a drug discovery revolution, one that's going to lead to "releasing" a drug. That's disingenuous, to say the least. I wish Yonemoto luck, actually, but I think he's going to be running into some very high-density reality pretty soon.

Update: Yonemoto has added this to the comments section, and I appreciate him coming by:

"Thanks Derek! You've basically crystallized all of my insecurities about the future of open-source drugs. But that's okay. I think there are business models wherein you can get this to work, even under the relatively onerous contemporary FDA burden. To answer a few questions. I think sibiromycin is not a bad candidate for several reasons: 1. (I'm not sure I buy this one but) it's a NP derived and NP derived tends to do well. 2. A molecule with a similar mechanism has made it into phase III and phase I/II show only mild hepatotoxicity and water retention, which are prophylactically treatable with common drugs. 3. There is reportedly no bone marrow suppression in these compounds, and importantly it appears to be immune-neutral, which would make PBDs excellent therapies to run alongside immune-recruitment drugs."

Comments (58) + TrackBacks (0) | Category: Cancer | Clinical Trials | Drug Development | Drug Industry History | Infectious Diseases | Patents and IP

October 20, 2014

Compound Properties: Starting a Renunciation


Posted by Derek

I've been thinking a lot recently about compound properties, and what we use them for. My own opinions on this subject have been changing over the years, and I'm interested to see if I have any company on this.

First off, why do we measure things like cLogP, polar surface area, aromatic ring count, and all the others? A quick (and not totally inaccurate) answer is "because we can", but what are we trying to accomplish? Well, we're trying to read the future a bit and decrease the horrendous failure rates for drug candidates, of course. And the two aspects that compound properties are supposed to help with are PK and tox.

Of the two, pharmacokinetics is the one with the better shot at relevance. But how fine-grained can we be with our measurements? I don't think it's controversial to say that compounds with really high cLogP values are going to have, on average, more difficult PK, for various reasons. Compounds with lots of aromatic rings in them are, on average, going to have more difficult PK, too. But how much is "lots" or "really high"? That's the problem, because I don't think that you can draw a useful line and say that things on one side of it are mostly fine, and things on the other are mostly not. There's too much overlap, and too many exceptions. The best you can hope for, if you're into line-drawing, is to draw one up pretty far into the possible range and say that things below it may or may not be OK, but things above it have a greater chance of being bad. (This, to my mind, is all that we mean by all the "Rule of 5" stuff). But what good does that do? Everyone doing drug discovery already knows that much, or should. Where we get into trouble is when we treat these lines as if they were made of electrified barbed wire.
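
For anyone who wants to see what these numbers look like in practice, here's a minimal sketch (assuming the open-source RDKit toolkit and its standard descriptor functions; the molecules and thresholds are just illustrations) that computes the usual properties and treats them as soft flags rather than electrified barbed wire:

    # Computing the usual property metrics and treating them as soft flags.
    # Assumes the open-source RDKit toolkit; thresholds are the conventional ones.
    from rdkit import Chem
    from rdkit.Chem import Descriptors, rdMolDescriptors

    examples = {
        "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
        "p-quaterphenyl (deliberately flat and greasy)": "c1ccc(-c2ccc(-c3ccc(-c4ccccc4)cc3)cc2)cc1",
    }

    for name, smiles in examples.items():
        mol = Chem.MolFromSmiles(smiles)
        clogp = Descriptors.MolLogP(mol)                     # Crippen cLogP estimate
        tpsa = Descriptors.TPSA(mol)                         # topological polar surface area
        arom = rdMolDescriptors.CalcNumAromaticRings(mol)
        fsp3 = Descriptors.FractionCSP3(mol)                 # the "escape from flatland" number
        flags = []
        if clogp > 5:
            flags.append("high cLogP")
        if arom >= 4:
            flags.append("many aromatic rings")
        print(f"{name}: cLogP {clogp:.1f}, TPSA {tpsa:.0f}, "
              f"aromatic rings {arom}, Fsp3 {fsp3:.2f}, flags: {flags or 'none'}")
    # Neither compound is a drug candidate; the point is that the output is a set of
    # warnings about probabilities, not a verdict.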

That's because of a larger problem with metrics aimed at PK: PK is relatively easy data to get. When in doubt, you should just dose the compound and find out. This makes predicting PK problems a lower-value proposition - the real killer application would be predicting toxicology problems. I fear that over the years many rule-of-five zealots have confused these two fields, out of a natural hope that something can be done about the latter (or perhaps out of thinking that the two are more related than they really are). That's unfortunate, because to my mind, this is where compound property metrics get even less useful. That recent AstraZeneca paper has had me thinking, the one where they state that they can't reproduce the trends reported by Pfizer's group on the influences of compound properties. If you really can take two reasonably-sized sets of drug discovery data and come to opposite conclusions about this issue, what hope does this approach have?

Toxicology is just too complicated, I think, for us to expect that any simple property metrics can tell us enough to be useful. That's really annoying, because we could all really use something like that. But increasingly, I think we're still on our own, where we've always been, and that we're just trying to make ourselves feel better when we think otherwise. That problem is particularly acute as you go up the management ladder. Avoiding painful tox-driven failures is such a desirable goal that people are tempted to reach for just about anything reasonable-sounding that holds out hope for it. And this one (compound property space policing) has many other tempting advantages - it's cheap to implement, easy to measure, and produces piles of numbers that make for data-rich presentations. Even the managers who don't really know much chemistry can grasp the ideas behind it. How can it not be a good thing?

Especially when the alternative is so, so. . .empirical. So case-by-case. So disappointingly back-to-where-we-started. I mean, getting up in front of the higher-ups and telling them that no, we're not doing ourselves much good by whacking people about aromatic ring counts and nitrogen atom counts and PSA counts, etc., that we're just going to have to take the compounds forward and wait and see like we always have. . .that doesn't sound like much fun, does it? This isn't what anyone wants to hear. You're going to do a lot better if you can tell people that you've Identified The Problem, and How to Address It, and that this strategy is being implemented right now, and here are the numbers to prove it. Saying, in effect, that we can't do anything about it runs the risk of finding yourself replaced by someone who will say that we can.

But all that said, I really am losing faith in property-space metrics as a way to address toxicology. The only thing I'm holding on to are some of the structure-based criteria. I really do, for example, think that quinones are bad news. I think if you advance a hundred quinones into the clinic, that a far higher percentage of them will fail due to tox and side effects than a hundred broadly similar non-quinones. Same goes for rhodanines, and a few other classes, those "aces in the PAINS deck" I referred to the other day. I'm still less doctrinaire about functional groups than I used to be, but I still have a few that I balk at.

And yes, I know that there are drugs with all these groups in them. But if you look at the quinones, for example, you find mostly cytotoxics and anti-infectives which are cytotoxins with some selectivity for non-mammalian cells. If you're aiming at a particularly nasty target (resistant malaria, pancreatic cancer), go ahead and pull out all the stops. But I don't think anyone should cheerfully plow ahead with such structures unless there are such mitigating circumstances, or at least not without realizing the risks that they're taking on.

But this doesn't do us much good, either - most medicinal chemists don't want to advance such compounds anyway. In fact, rather than being too permissive about things like quinones, most of us are probably too conservative about the sorts of structures we're willing to deal with. There are a lot of funny-looking drugs out there, as it never hurts to remind oneself. Peeling off the outer fringe of these (and quinones are indeed the outer fringe) isn't going to increase anyone's success rate much. So what to do?

I don't have a good answer for that one. I wish I did. It's a rare case when we can say, just by looking at its structure, that a particular compound just won't work. I've been hoping that the percentages would allow us to say more than that about more compounds. But I'm really not sure that they do, at least not to the extent that we need them to, and I worry that we're kidding ourselves when we pretend otherwise.

Comments (32) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico

October 17, 2014

More on "Metabolite Likeness" as a Predictor


Posted by Derek

A recent computational paper that suggested that similarity to known metabolites could help predict successful drug candidates brought in a lot of comments around here. Now the folks at Cambridge MedChem Consulting have another look at it here.

The big concern (as was expressed by some commenters here as well) is the Tanimoto similarity cutoff of 0.5. Does that make everything look too similar, or not? CMC has some numbers across different data sets, and suggests that this cutoff is, in fact, too permissive to allow for much discrimination. People with access to good comparison sets of compounds that made it and compounds that didn't - basically, computational chemists inside large industrial drug discovery organizations - will have a better chance to see how all this holds up.
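For a feel for what that cutoff means in practice, here's a small sketch (assuming RDKit; the fingerprint choice and the example compounds are mine, not the paper's) that computes Tanimoto similarities against a stand-in metabolite:

    # Tanimoto similarity of a few arbitrary drugs to a stand-in endogenous metabolite.
    # Assumes RDKit; the fingerprint choice and example compounds are mine, not the paper's.
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    metabolite = Chem.MolFromSmiles("OC(=O)CC(O)(CC(=O)O)C(=O)O")   # citric acid
    candidates = {
        "aspirin":   "CC(=O)Oc1ccccc1C(=O)O",
        "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
        "caffeine":  "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
    }

    ref_fp = AllChem.GetMorganFingerprintAsBitVect(metabolite, 2, nBits=2048)
    for name, smi in candidates.items():
        fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=2048)
        sim = DataStructs.TanimotoSimilarity(ref_fp, fp)
        verdict = "metabolite-like" if sim >= 0.5 else "not metabolite-like"
        print(f"{name}: Tanimoto to citrate = {sim:.2f} ({verdict})")
    # Where the 0.5 line falls depends heavily on the fingerprint: sparse circular
    # fingerprints like these rarely reach it, while dense keyed fingerprints (MACCS
    # and the like) reach it easily - which is why the cutoff deserves the scrutiny.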

Comments (6) + TrackBacks (0) | Category: Drug Development | In Silico

October 2, 2014

We Can't Calculate Our Way Out of This One


Posted by Derek

Clinical trial failure rates are killing us in this industry. I don't think there's much disagreement on that - between the drugs that just didn't work (wrong target, wrong idea) and the ones that turn out to have unexpected safety problems, we incinerate a lot of money. An earlier, cheaper read on either of those would transform drug research, and people are willing to try all sorts of things to those ends.

One theory on drug safety is that there are particular molecular properties that are more likely to lead to trouble. There have been several correlations proposed between high logP (greasiness) and tox liabilities, between multiple aromatic rings and tox, and so on. One rule, proposed in 2008 by a group at Pfizer, uses a clogP of 3 and a total polar surface area of 75 square angstroms as the cutoffs: compounds on the wrong side of both (clogP above 3, TPSA below 75) are about 2.5 times more likely to run into trouble. But here's a paper in MedChemComm that asks if any of this has any validity:

What is the likelihood of real success in avoiding attrition due to toxicity/safety from using such simple metrics? As mentioned in the beginning, toxicity can arise from a wide variety of reasons and through a plethora of complex mechanisms similar to some of the DMPK endpoints that we are still struggling to avoid. In addition to the issue of understanding and predicting actual toxicity, there are other hurdles to overcome when doing this type of historical analysis that are seldom discussed.

The first of these is making sure that you're looking at the right set of failed projects - that is, ones that really did fail because of unexpected compound-associated tox, and not some other reason (such as unexpected mechanism-based toxicity, which is another issue). Or perhaps a compound could have been good enough to make it on its own under other circumstances, but the competitive situation made it untenable (something else came up with a cleaner profile at about the same time). Then there's the problem of different safety cutoffs for different therapeutic areas - acceptable tox for a pancreatic cancer drug will not cut it for type II diabetes, for example.

The authors did a thorough study of 130 AstraZeneca development compounds, with enough data to work out all these complications. (This is the sort of thing that can only be done from inside a company's research effort - you're never going to have enough information, working from outside). What they found, right off, was that for this set of compounds the Pfizer rule was completely inverted. The compounds on the too-greasy side actually had shown fewer problems (!) The authors looked at the data sets from several different angles, and conclude that the most likely explanation is that the rule is just not universally valid, and depends on the dataset you start with.

The same thing happens when you look at the fraction of sp3 carbons, which is a characteristic (the "Escape From Flatland" paper) that's also been proposed to correlate with tox liabilities. The AZ set shows no such correlation at all. Their best hypothesis is that this is a likely correlation with pharmacokinetics that has gotten mixed in with a spurious correlation with toxicity (and indeed, the first paper on this trend was only talking about PK). And finally, they go back to an earlier properties-based model published by other workers at AstraZeneca, and find that it, too, doesn't seem to hold up on the larger, more curated data set. Their take-home message: ". . .it is unlikely that a model of simple physico-chemical descriptors would be predictive in a practical setting."

Even more worrisome is what happens when you take a look at the last few years of approved drugs and apply such filters to them (emphasis added):

To investigate the potential impact of following simple metric guidelines, a set of recently approved drugs was classified using the 3/75 rule (Table 3). The set included all small molecule drugs approved during 2009–2012 as listed on the ChEMBL website. No significant biases in the distribution of these compounds can be seen from the data presented in Table 3. This pattern was unaffected if we considered only oral drugs (45) or all of the drugs (63). The highest number of drugs ends up in the high ClogP/high TPSA class and the class with the lowest number of drugs is the low ClogP/low TPSA. One could draw the conclusion that using these simplistic approaches as rules will discard the development of many interesting and relevant drugs.

One could indeed. I hadn't seen this paper myself until the other day - a colleague down the hall brought it to my attention - and I think it deserves wider attention. A lot of drug discovery organizations, particularly the larger ones, use (or are tempted to use) such criteria to rank compounds and candidates, and many of us are personally carrying such things around in our heads. But if these rules aren't valid - and this work certainly makes it look as if they aren't - then we should stop pretending that they are. That throws us back into a world where we have trouble distinguishing troublesome compounds from the good ones, but that, it seems, is the world we've been living in all along. We'd be better off if we just admitted it.
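
For anyone who wants to run that Table 3 sanity check on a compound set of their own, the 3/75 classification is trivial to reproduce. Here's a sketch (assuming RDKit; the SMILES below are placeholders for whatever set you care about):

    # Reproducing the 3/75 classification (ClogP 3 / TPSA 75 quadrants) over a compound set.
    # Assumes RDKit; the SMILES below are placeholders for whatever set you care about.
    from collections import Counter
    from rdkit import Chem
    from rdkit.Chem import Descriptors

    compounds = [
        "CC(=O)Oc1ccccc1C(=O)O",          # aspirin (placeholder entries)
        "CC(C)Cc1ccc(cc1)C(C)C(=O)O",     # ibuprofen
        "Cn1cnc2c1c(=O)n(C)c(=O)n2C",     # caffeine
    ]

    def quadrant(mol):
        side = lambda value, cutoff: "high" if value > cutoff else "low"
        return f"{side(Descriptors.MolLogP(mol), 3)} ClogP / {side(Descriptors.TPSA(mol), 75)} TPSA"

    counts = Counter(quadrant(Chem.MolFromSmiles(smi)) for smi in compounds)
    for bucket, n in sorted(counts.items()):
        print(f"{bucket}: {n}")
    # The rule tags "high ClogP / low TPSA" as the risky quadrant; the exercise in
    # Table 3 is simply asking whether recently approved drugs actually avoid that corner.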

Comments (25) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico | Toxicology

September 16, 2014

Lilly Steps In for AstraZeneca's Secretase Inhibitor


Posted by Derek

Today brings news of a Lilly deal with AstraZeneca to help develop AZ's beta-secretase inhibitor, AZD3293 (actually an Astex compound, developed through fragment-based methods). AZ has been getting out of CNS indications for some time now, so they really did need a partner here, and Lilly lost their own beta-secretase compound last year. So this move doesn't come as too much of a shock, but it does reaffirm Lilly's bet-the-ranch approach to Alzheimer's.

This compound was used by AZ in their defense against being taken over by Pfizer, but (as that link in the first paragraph shows) not everyone was buying their estimated chances of success (9%). Since the overall chances for success in Alzheimer's, historically, have ranged between zero and 1%, depending on what you call a success, I can see their point. But beta-secretase deserves to have another good shot taken at it, and we'll see what happens. It'll take years, though, before we find out - Alzheimer's trials are painfully slow, like the disease itself.

Update: I've had mail asking what I mean by AZ "getting out of CNS indications", when they still have a CNS research area. That's true, but it's a lot different than it used to be. The company got rid of most of its own infrastructure, and is doing more of a virtual/collaborative approach. So no, in one sense they haven't exited the field at all. But a lot of its former CNS people (and indeed, whole research sites) certainly exited AstraZeneca.

Comments (29) + TrackBacks (0) | Category: Alzheimer's Disease | Business and Markets | Drug Development

Update on Alnylam (And the Direction of Things to Come)


Posted by Derek

Here's a look from Technology Review at the resurgent fortunes of Alnylam and RNA interference (which I blogged about here).

But now Alnylam is testing a drug to treat familial amyloid polyneuropathy (FAP) in advanced human trials. It’s the last hurdle before the company will seek regulatory approval to put the drug on the market. Although it’s too early to tell how well the drug will alleviate symptoms, it’s doing what the researchers hoped it would: it can decrease the production of the protein that causes FAP by more than 80 percent.

This could be just the beginning for RNAi. Alnylam has more than 11 drugs, including ones for hemophilia, hepatitis B, and even high cholesterol, in its development pipeline, and has three in human trials —progress that led the pharmaceutical company Sanofi to make a $700 million investment in the company last winter. Last month, the pharmaceutical giant Roche, an early Alnylam supporter that had given up on RNAi, reversed its opinion of the technology as well, announcing a $450 million deal to acquire the RNAi startup Santaris. All told, there are about 15 RNAi-based drugs in clinical trials from several research groups and companies.

“The world went from believing RNAi would change everything to thinking it wouldn’t work, to now thinking it will,” says Robert Langer, a professor at MIT, and one of Alnylam’s advisors.

Those Phase III results will be great to see - that's the real test of a technology like this one. A lot of less daring ideas have fallen over when exposed to that much of a reality check. If RNAi really has turned the corner, though, I think it could well be just the beginning of a change coming over the pharmaceutical industry. Biology might be riding over the hill, after an extended period of hearing hoofbeats and seeing distant clouds of dust.

There was a boom in this sort of thinking during the 1980s, in the early days of Genentech and Biogen (and others long gone, like Cetus). Proteins were going to conquer the world, with interferon often mentioned as the first example of what was sure to be a horde of new drugs. Then in the early 1990s there was a craze for antisense, which was going to remake the whole industry. Antibodies, though, were surely a big part of the advance scouting party - many people are still surprised when they see how many of the highest-grossing drugs are antibodies, even though they're often for smaller indications.

And the hype around RNA therapies did reach a pretty high level a few years ago, but this (as Langer's quote above says) was followed by a nasty pullback. If it really is heading for the big time, then we should all be ready for some other techniques to follow. Just as RNAi built on the knowledge gained during the struggle to realize antisense, you'd have to think that Moderna's mRNA therapy ideas have learned from the RNAi people, and that the attempts to do CRISPR-style gene editing in humans have the whole biologic therapy field to help them out. Science does indeed march on, and we might possibly be getting the hang of some of these things.

And as I warned in that last link, that means we're in for some good old creative destruction in this industry if that happens. Some small-molecule ideas are going to go right out the window, and following them (through a much larger window) could be the whole rare-disease business model that so many companies are following these days. Many of those rare diseases are just the sorts of things that could be attacked more usefully at their root cause via genomic-based therapies, so if those actually start to work, well. . .

This shouldn't be news to anyone who's following the field closely, but these things move slowly enough that they have a way of creeping up on you unawares. Come back in 25 years, and the therapeutic landscape might be a rather different-looking place.

Comments (18) + TrackBacks (0) | Category: Biological News | Business and Markets | Clinical Trials | Drug Development

September 3, 2014

A Question: Monoclonal Antibodies in the Clinic


Posted by Derek

A reader sends along this query, and since I've never worked around monoclonal antibodies, I thought I'd ask the crowd: how much of a read on safety do you get with a mAb in Phase I? How much Phase I work would one need to feel safe going on to Phase II, from a tox/safety standpoint? Any thoughts are welcome. I suspect the answer is going to depend greatly on what said antibody is being raised to target.

Comments (16) + TrackBacks (0) | Category: Drug Development | Toxicology

August 28, 2014

Drug Repurposing


Posted by Derek

A reader has sent along the question: "Have any repurposed drugs actually been approved for their new indication?" And initially, I thought, confidently but rather blankly, "Well, certainly, there's. . . and. . .hmm", but then the biggest example hit me: thalidomide. It was, infamously, a sedative and remedy for morning sickness in its original tragic incarnation, but came back into use first for leprosy and then for multiple myeloma. The discovery of its efficacy in leprosy, specifically erythema nodosum leprosum, was a complete and total accident, it should be noted - the story is told in the book Dark Remedy. A physician gave a suffering leprosy patient the only sedative in the hospital's pharmacy that hadn't been tried, and it had a dramatic and unexpected effect on their condition.

That's an example of a total repurposing - a drug that had actually been approved and abandoned (and how) coming back to treat something else. At the other end of the spectrum, you have the normal sort of market expansion that many drugs undergo: kinase inhibitor Insolunib is approved for Cancer X, then later on for Cancer Y, then for Cancer Z. (As a side note, I would almost feel like working for free for a company that would actually propose "insolunib" as a generic name. My mortgage banker might not see things the same way, though). At any rate, that sort of thing doesn't really count as repurposing, in my book - you're using the same effect that the compound was developed for and finding closely related uses for it. When most people think of repurposing, they're thinking about cases where the drug's mechanism is the same, but turns out to be useful for something that no one realized, or those times where the drug has another mechanism that no one appreciated during its first approval.

Eflornithine, an ornithine decarboxylase inhibitor, is a good example - it was originally developed as a possible anticancer agent, but never came close to being submitted for approval. It turned out to be very effective for trypanosomiasis (sleeping sickness). Later, it was approved for slowing the growth of unwanted facial hair. This led, by the way, to an unfortunate and embarrassing period where the compound was available as a cream to improve appearance in several first-world countries, but not as a tablet to save lives in Africa. Aventis, as they were at the time, partnered with the WHO to produce the compound again and donated it to the agency and to Doctors Without Borders. (I should note that, with a molecular weight of 182, eflornithine just barely missed my no-larger-than-aspirin cutoff for the smallest drugs on the market).

Drugs that affect the immune system (cyclosporine, the interferons, anti-TNF antibodies, etc.) are in their own category for repurposing, I'd say. They've had particularly broad therapeutic profiles, since that's such a nexus for infectious disease, cancer, inflammation and wound healing, and (naturally) autoimmune diseases of all sorts. Orencia (abatacept) is an example of this. It's approved for rheumatoid arthritis, but has been studied in several other conditions, and there's a report that it's extremely effective against a common kidney condition, focal segmental glomerulosclerosis. Drugs that affect the central or peripheral nervous system also have Swiss-army-knife aspects, since that's another powerful fuse box in a living system. The number of indications that a beta-blocker like propranolol has seen is enough evidence on its own!

C&E News did a drug repurposing story a couple of years ago, and included a table of examples. Some others can be found in this Nature Reviews Drug Discovery paper from 2004. I'm not aware of any new repurposing/repositioning approvals since then, but there's an awful lot of preclinical and clinical activity going on.

Comments (35) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History | Regulatory Affairs

August 26, 2014

A New Look at Phenotypic Screening


Posted by Derek

There have been several analyses that have suggested that phenotypic drug discovery was unusually effective in delivering "first in class" drugs. Now comes a reworking of that question, and these authors (Jörg Eder, Richard Sedrani, and Christian Wiesmann of Novartis) find plenty of room to question that conclusion.

What they've done is to deliberately focus on the first-in-class drug approvals from 1999 to 2013, and take a detailed look at their origins. There have been 113 such drugs, and they find that 78 of them (45 small molecules and 33 biologics) come from target-based approaches, and 35 from "systems-based" approaches. They further divide the latter into "chemocentric" discovery, based around known pharmacophores, and so on, versus pure from-the-ground-up phenotypic screening, and the 33 systems compounds then split out 25 to 8.

As you might expect, a lot of these conclusions depend on what you classify as "phenotypic". The earlier paper stopped at the target-based/not target-based distinction, but this one is more strict: phenotypic screening is the evaluation of a large number of compounds (likely a random assortment) against a biological system, where you look for a desired phenotype without knowing what the target might be. And that's why this paper comes up with the term "chemocentric drug discovery", to encompass isolation of natural products, modification of known active structures, and so on.

Such conclusions also depend on knowing what approach was used in the original screening, and as everyone who's written about these things admits, this isn't always public information. The many readers of this site who've seen a drug project go from start to finish will appreciate how hard it is to find an accurate retelling of any given effort. Stuff gets left out, forgotten, is un- (or over-)appreciated, swept under the rug, etc. (And besides, an absolutely faithful retelling, with every single wrong turn left in, would be pretty difficult to sit through, wouldn't it?) At any rate, by the time a drug reaches FDA approval, many of the people who were present at the project's birth have probably scattered to other organizations entirely, have retired or been retired against their will, and so on.

But against all these obstacles, the authors seem to have done as thorough a job as anyone could possibly do. So looking further at their numbers, here are some more detailed breakdowns. Of those 45 first-in-class small molecules, 21 were from screening (18 of those high-throughput screening, 1 fragment-based, 1 in silico, and one low-throughput/directed screening). 18 came from chemocentric approaches, and 6 from modeling off of a known compound.

Of the 33 systems-based drugs, those 8 that were "pure phenotypic" feature one antibody (alemtuzumab) which was raised without knowledge of its target, and seven small molecules: sirolimus, fingolimod, eribulin, daptomycin, artemether–lumefantrine, bedaquiline and trametinib. The first three of those are natural products, or derived from natural products. Outside of fingolimod, all of them are anti-infectives or antiproliferatives, which I'd bet reflects the comparative ease of running pure phenotypic assays with those readouts.

Here are the authors on the discrepancies between their paper and the earlier one:

At first glance, the results of our analysis appear to significantly deviate from the numbers previously published for first-in-class drugs, which reported that of the 75 first-in-class drugs discovered between 1999 and 2008, 28 (37%) were discovered through phenotypic screening, 17 (23%) through target-based approaches, 25 (33%) were biologics and five (7%) came from other approaches. This discrepancy occurs for two reasons. First, we consider biologics to be target-based drugs, as there is little philosophical distinction in the hypothesis-driven approach to drug discovery for small-molecule drugs versus biologics. Second, the past 5 years of our analysis time frame have seen a significant increase in the approval of first-in-class drugs, most of which were discovered in a target-based fashion.

Fair enough, and it may well be that many of us have been too optimistic about the evidence for the straight phenotypic approach. But the figure we don't have (and aren't going to get) is the overall success rate for both techniques. The number of target-based and phenotypic-based screening efforts that have been quietly abandoned - that's what we'd need to have to know which one has the better delivery percentage. If 78/113 drugs, 69% of the first-in-class approvals from the last 15 years, have come from target-based approaches, how does that compare with the total number of first-in-class drug projects? My own suspicion is that target-based drug discovery has accounted for more than 70% of the industry's efforts over that span, which would mean that systems-based approaches have been relatively over-performing. But there's no way to know this for sure, and I may just be coming up with something that I want to hear.
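
That suspicion is easy to turn into arithmetic, if you're willing to guess at the missing denominator. Here's the calculation with my assumed effort split - and the split is the whole assumption:

    # Per-project yield comparison under an assumed split of industry effort.
    # The 78-vs-35 approval split is from the paper; the 70/30 effort split is my
    # guess, which is exactly the number nobody has.
    approvals_target, approvals_systems = 78, 113 - 78
    effort_target, effort_systems = 0.70, 0.30    # assumed fractions of first-in-class projects

    yield_target = approvals_target / effort_target
    yield_systems = approvals_systems / effort_systems
    print(f"Systems-based yield relative to target-based: {yield_systems / yield_target:.2f}x")
    # With a 70/30 split this comes out to about 1.05x - roughly even - and an 80/20
    # split pushes it to about 1.8x in favor of systems-based approaches. The answer
    # is entirely at the mercy of the assumed denominator.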

That might especially be true when you consider that there are many therapeutic areas where phenotypic screening is basically impossible (Alzheimer's, anyone?) But there's a flip side to that argument: it means that there's no special phenotypic sauce that you can spread around, either. The fact that so many of those pure-phenotypic drugs are in areas with such clear cellular readouts is suggestive. Even if phenotypic screening were to have some statistical advantage, you can't just go around telling people to be "more phenotypic" and expect increased success, especially outside anti-infectives or antiproliferatives.

The authors have another interesting point to make. As part of their analysis of these 113 first-in-class drugs, they've tried to see what the timeline is from the first efforts in the area to an approved drug. That's not easy, and there are some arbitrary decisions to be made. One example they give is anti-angiogenesis. The first report of tumors being able to stimulate blood vessel growth was in 1945. The presence of soluble tumor-derived growth factors was confirmed in 1968. VEGF, the outstanding example of these, was purified in 1983, and was cloned in 1989. So when did the starting pistol fire for drug discovery in this area? The authors choose 1983, which seems reasonable, but it's a judgment call.

So with all that in mind, they find that the average lead time (from discovery to drug) for a target-based project is 20 years, and for a systems-based drug it's been 25 years. They suggest that since target-based drug discovery has only been around since the late 1980s or so, that its impact is only recently beginning to show up in the figures, and that it's in much better shape than some would suppose.

The data also suggest that target-based drug discovery might have helped reduce the median time for drug discovery and development. Closer examination of the differences in median times between systems-based approaches and target-based approaches revealed that the 5-year median difference in overall approval time is largely due to statistically significant differences in the period from patent publication to FDA approval, where target-based approaches (taking 8 years) took only half the time as systems-based approaches (taking 16 years). . .

The pharmaceutical industry has often been criticized for not being sufficiently innovative. We think that our analysis indicates otherwise and perhaps even suggests that the best is yet to come as, owing to the length of time between project initiation and launch, new technologies such as high-throughput screening and the sequencing of the human genome may only be starting to have a major impact on drug approvals. . .

Now that's an optimistic point of view, I have to say. The genome certainly still has plenty of time to deliver, but you probably won't find too many other people saying in 2014 that HTS is only now starting to have an impact on drug approvals. My own take on this is that they're covering too wide a band of technologies with such statements, lumping together things that have come in at different times during this period and which would be expected to have differently-timed impacts on the rate of drug discovery. On the other hand, I would like this glass-half-full view to be correct, since it implies that things should be steadily improving in the business, and we could use it.

But the authors take pains to show, in the last part of their paper, that they're not putting down phenotypic drug discovery. In fact, they're calling for it to be strengthened as its own discipline, and not (as they put it) just as a falling back to the older "chemocentric" methods of the 1980s and before:

Perhaps we are in a phase today similar to the one in the mid-1980s, when systems-based chemocentric drug discovery was largely replaced by target-based approaches. This allowed the field to greatly expand beyond the relatively limited number of scaffolds that had been studied for decades and to gain access to many more pharmacologically active compound classes, providing a boost to innovation. Now, with an increased chemical space, the time might be right to further broaden the target space and open up new avenues. This could well be achieved by investing in phenotypic screening using the compound libraries that have been established in the context of target-based approaches. We therefore consider phenotypic screening not as a neoclassical approach that reverts to a supposedly more successful systems-based method of the past, but instead as a logical evolution of the current target-based activities in drug discovery. Moreover, phenotypic screening is not just dependent on the use of many tools that have been established for target-based approaches; it also requires further technological advancements.

That seems to me to be right on target: we probably are in a period just like the mid-to-late 1980s. In that case, though, a promising new technology was taking over because it seemed to offer so much more. Today, it's more driven by disillusionment with the current methods - but that means, even more, that we have to dig in and come up with some new ones and make them work.

Comments (7) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

August 22, 2014

The Palbociclib Saga: Or Why We Need a Lot of Drug Companies


Posted by Derek

Science has an article by journalist Ken Garber on palbociclib, the Pfizer CDK4 compound that came up here the other day when we were discussing their oncology portfolio. You can read up on the details of how the compound was put in the fridge for several years, only to finally emerge as one of the company's better prospects. The roots of the project go back to about 1995 at Parke-Davis:

Because the many CDK family members are almost identical, “creating a truly selective CDK4 inhibitor was very difficult,” says former Parke-Davis biochemist Dave Fry, who co-chaired the project with chemist Peter Toogood. “A lot of pharmaceutical companies failed at it, and just accepted broad-spectrum CDK inhibitors as their lead compounds.” But after 6 years of work, the pair finally succeeded with the help of some clever screens that could quickly weed out nonspecific “dirty” compounds.

Their synthesis in 2001 of palbociclib, known internally as PD-0332991, was timely. By then, many dirty CDK inhibitors from other companies were already in clinical trials, but they worked poorly, if at all. Because they hit multiple CDK targets, these compounds caused too much collateral damage to normal cells. . .Eventually, most efforts to fight cancer by targeting the cell cycle ground to a halt. “Everything sort of got hung up, and I think people lost enthusiasm,” Slamon says.

PD-0332991 fell off the radar screen. Pfizer, which had acquired Warner-Lambert/Parke-Davis in 2000 mainly for the cholesterol drug Lipitor, did not consider the compound especially promising, Fry says, and moved it forward haltingly at best. “We had one of the most novel compounds ever produced,” Fry says, with a mixture of pride and frustration. “The only compound in its class.”

A major merger helped bury the PD-0332991 program. In 2003, Pfizer acquired Swedish-American drug giant Pharmacia, which flooded Pfizer's pipeline with multiple cancer drugs, all competing for limited clinical development resources. Organizational disarray followed, says cancer biologist Dick Leopold, who led cancer drug discovery at the Ann Arbor labs from 1989 to 2003. “Certainly there were some politics going on,” he says. “Also just some logistics with new management and reprioritization again and again.” In 2003, Pfizer shut down cancer research in Ann Arbor, which left PD-0332991 without scientists and managers who could demand it be given a chance, Toogood says. “All compounds in this business need an advocate.”

So there's no doubt that all the mergers and re-orgs at Pfizer slowed this compound down, and no doubt a long list of others, too. The problems didn't end there. The story goes on to show how the compound went into Phase I in 2004, but only got into Phase II in 2009. The problem is, well before that time it was clear that there were tumor types that should be more sensitive to CDK4 inhibition. See this paper from 2006, for example (and there were some before this as well).

It appears that Pfizer wasn't going to develop the compound at all (thus that long delay after Phase I). They made it available as a research tool to Selina Chen-Kiang at Weill Cornell, who saw promising results with mantle cell lymphoma; then Dennis Slamon and Richard Finn at UCLA profiled the compound in breast cancer lines and took it into a small trial there, with even more impressive results. And at this point, Pfizer woke up.

Before indulging in a round of Pfizer-bashing, though, it's worth remembering that stories broadly similar to this are all too common. If you think that the course of true love never did run smooth, you should see the course of drug development. Warner-Lambert (for example) famously tried to kill Lipitor more than once during its path to the market, and it's a rare blockbuster indeed that hasn't passed through at least one near-death-experience along the way. It stands to reason: since the great majority of all drug projects die, the few that make it through are the ones that nearly died.

There are also uncounted stories of drugs that nearly lived. Everyone who's been around the industry for a while has, or has heard, tales of Project X for Target Y, which was going along fine and looked like a winner until Company Z dropped it for Stupid Reason. . .uh, Aleph. (Ran out of letters there). And if only they'd realized this, that, and the other thing, that compound would have made it to market, but no, they didn't know what they had and walked away from it, etc. Some of these stories are probably correct: you know that there have to have been good projects dropped for the wrong reasons and never picked up again. But they can't all be right. Given the usual developmental success rates, most of these things would have eventually wiped out for some reason. There's an old saying among writers that the definition of a novel is a substantial length of narrative fiction that has something wrong with it. In the same way, every drug that's on the market has something wrong with it (usually several things), and all it takes is a bit more going wrong to keep it from succeeding at all.

So where I fault Pfizer in all this is in the way that this compound got lost in all the re-org shuffle. If it had developed more normally, its activity would have been discovered years earlier. Now, it's not like there are dozens of drugs that haven't made it to market because Pfizer dropped the ball on them - but given the statistics, I'll bet that there are several (two or three? five?) that could have made it through by now, if everyone hadn't been so preoccupied with merging, buying, moving, rearranging, and figuring out if they were getting laid off or not.

The good thing is that other companies stepped into the field on the basis of those earlier publications, and found CDK4/6 inhibitors of their own (notably Novartis and Lilly). This is why I think that huge mergers hurt the intellectual health of the drug industry. Take it to the reductio ad not all that absurdum of One Big Drug Company. If we had that, and only that, then whole projects and areas of research would inevitably get shelved, and there would be no one left to pick them up at all. (I'll also note, in passing, that should all of the CDK inhibitors make it to market, there will be yahoos who decry the whole thing as nothing but a bunch of fast-follower me-too drugs, waste of time and money, profits before people, and so on. Watch for it.)

Comments (13) + TrackBacks (0) | Category: Cancer | Drug Development | Drug Industry History

August 21, 2014

Why Not Bromine?


Posted by Derek

So here's a question for the medicinal chemists: how come we don't like bromoaromatics so much? I know I don't, but I have trouble putting my finger on just why. I know that there's a ligand efficiency argument to be made against them - all that weight, for one atom - but there are times when a bromine seems to be just the thing. There certainly are such structures in marketed drugs. Some of the bad feelings around them might linger from the sense that it's sort of an unnatural element, as opposed to chlorine, which in the form of chloride is everywhere in living systems.

But bromide? Well, for what it's worth, there's a report that bromine may in fact be an essential element after all. That's not enough to win any arguments about putting it into your molecules - selenium's essential, too, and you don't see people cranking out the organoselenides. But here's a thought experiment: suppose you have two drug candidate structures, one with a chlorine on an aryl ring and the other with a bromine on the same position. If they have basically identical PK, selectivity, preliminary tox, and so on, which one do you choose to go on with? And why?

If you chose the chloro derivative (and I think that most medicinal chemists instinctively would, for just the same hard-to-articulate reasons we're talking about), then what split in favor of the bromo compound would be enough to make you favor it? How much more activity, PK coverage, etc. do you need to make you willing to take a chance on it instead?
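
To put some numbers on the "all that weight, for one atom" objection, here's a quick matched-pair comparison (RDKit again; the scaffold is an arbitrary example, not anyone's project compound):

    # Matched-pair comparison of an aryl chloride and the corresponding aryl bromide.
    # Assumes RDKit; the scaffold is an arbitrary example, not any particular project compound.
    from rdkit import Chem
    from rdkit.Chem import Descriptors

    pair = {
        "chloro": "Clc1ccc(cc1)C(=O)Nc1ccccn1",   # hypothetical 4-halo-N-(pyridin-2-yl)benzamide
        "bromo":  "Brc1ccc(cc1)C(=O)Nc1ccccn1",
    }

    for name, smi in pair.items():
        mol = Chem.MolFromSmiles(smi)
        print(f"{name}: MW {Descriptors.MolWt(mol):.1f}, cLogP {Descriptors.MolLogP(mol):.2f}, "
              f"heavy atoms {mol.GetNumHeavyAtoms()}")
    # Same heavy-atom count, so ligand efficiency per heavy atom doesn't change, but the
    # bromide carries about 45 extra daltons and a somewhat higher predicted logP for that
    # single substitution - which is most of the property-based case against it.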

Comments (36) + TrackBacks (0) | Category: Drug Development | Odd Elements in Drugs | Pharmacokinetics | Toxicology

August 20, 2014

Did Pfizer Cut Back Some of Its Best Compounds?


Posted by Derek

John LaMattina has a look at Pfizer's oncology portfolio, and what their relentless budget-cutting has been doing to it. The company is taking some criticism for having outlicensed two compounds (tremelimumab to AstraZeneca and neratinib to Puma) which seem to be performing very well after Pfizer ditched them. Here's LaMattina (a former Pfizer R&D head, for those who don't know):

Unfortunately, over 15 years of mergers and severe budget cuts, Pfizer has not been able to prosecute all of the compounds in its portfolio. Instead, it has had to make choices on which experimental medicines to keep and which to set aside. However, as I have stated before, these choices are filled with uncertainties as oftentimes the data in hand are far from complete. But in oncology, Pfizer seems to be especially snake-bit in the decisions it has made.

That goes for their internal compounds, too. As LaMattina goes on to say, palbociclib is supposed to be one of their better compounds, but it was shelved for several years due to more budget-cutting and the belief that the effort would be better spent elsewhere. It would be easy for an outside observer to whack away at the company and wonder how incompetent they could be to walk away from all these winners, but that really isn't fair. It's very hard in oncology to tell what's going to work out and what isn't - impossible, in fact, until compounds have progressed to a certain stage. The only way to be sure is to take these things on into the clinic and see, unfortunately (and there you have one of the reasons things are so expensive around here).

Pfizer brought up more interesting compounds than it later was able to develop. It's a good question to wonder what they could have done with these if they hadn't been pursuing their well-known merger strategy over these years, but we'll never know the answer to that one. The company got too big and spent too much money, and then tried to cure that by getting even bigger. Every one of those mergers was a big disruption, and you sometimes wonder how anyone kept their focus on developing anything. Some of its drug-development choices were disastrous and completely their fault (the Exubera inhaled-insulin fiasco, for example), but their decisions in their oncology portfolio, while retrospectively awful, were probably quite defensible at the time. But if they hadn't been occupied with all those upheavals over the last ten to fifteen years, they might have had a better chance on focusing on at least a few more of their own compounds.

Their last big merger was with Wyeth. If you take Pfizer's R&D budget and Wyeth's and add them, you don't get Pfizer's R&D post-merger. Not even close. Pfizer's R&D is smaller now than their budget was alone before the deal. Pyrrhus would have recognized the problem.

Comments (20) + TrackBacks (0) | Category: Business and Markets | Cancer | Drug Development | Drug Industry History

August 19, 2014

Don't Optimize Your Plasma Protein Binding


Posted by Derek

Here's a very good review article in J. Med. Chem. on the topic of protein binding. For those outside the field, that's the phenomenon of drug compounds getting into the bloodstream and then sticking to one or more blood proteins. Human serum albumin (HSA) is a big player here - it's a very abundant blood protein that's practically honeycombed with binding sites - but there are several others. The authors (from Genentech) take on the disagreements about whether low plasma protein binding is a good property for drug development (and conversely, whether high protein binding is a warning flag). The short answer, according to the paper: neither one.

To further examine the trend of PPB for recently approved drugs, we compiled the available PPB data for drugs approved by the U.S. FDA from 2003 to 2013. Although the distribution pattern of PPB is similar to those of the previously marketed drugs, the recently approved drugs generally show even higher PPB than the previously marketed drugs (Figure 1). The PPB of 45% newly approved drugs is >95%, and the PPB of 24% is >99%. These data demonstrate that compounds with PPB > 99% can still be valuable drugs. Retrospectively, if we had posed an arbitrary cutoff value for the PPB in the drug discovery stage, we could have missed many valuable medicines in the past decade. We suggest that PPB is neither a good nor a bad property for a drug and should not be optimized in drug design.

That topic has come up around here a few times, as could be expected - it's a standard med-chem argument. And this isn't even the first time that a paper has come out warning people that trying to optimize on "free fraction" is a bad idea: see this 2010 one from Nature Reviews Drug Discovery.

But it's clearly worth repeating - there are a lot of people who get quite worked up about this number: in some cases because they have funny-looking PK and are trying to explain it, in others just because it's a number, and numbers are good, right?
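
To make the free-fraction arithmetic concrete - and to show why the number is so tempting to over-interpret - here's a minimal Python sketch. The percentages below are illustrative, not taken from the Genentech paper:

```python
# Illustration only: how percent plasma protein binding (PPB) translates
# into unbound (free) fraction, and why small shifts at the high end look
# deceptively dramatic. The values below are hypothetical.

def free_fraction(ppb_percent: float) -> float:
    """Unbound fraction in plasma for a given percent protein binding."""
    return 1.0 - ppb_percent / 100.0

for ppb in (90.0, 99.0, 99.5, 99.9):
    print(f"PPB {ppb:5.1f}%  ->  free fraction {free_fraction(ppb):.4f}")

# Going from 99% to 99.5% bound halves the free fraction even though the
# binding change itself is tiny - which is exactly the sort of shift that
# gets argued over, and exactly what the paper says not to optimize for.
```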

Comments (14) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics

July 25, 2014

The Antibiotic Gap: It's All of the Above

Email This Entry

Posted by Derek

Here's a business-section column at the New York Times on the problem of antibiotic drug discovery. To those of us following the industry, the problems of antibiotic drug discovery are big pieces of furniture that we've lived with all our lives; we hardly even notice if we bump into them again. You'd think that readers of the Times or other such outlets would have come across the topic a few times before, too, but there must always be a group for which it's new, no matter how many books and newspaper articles and magazine covers and TV segments are done on it. It's certainly important enough - there's no doubt that we really are going to be in big trouble if we don't keep up the arms race against the bacteria.

This piece takes the tack of "If drug discovery is actually doing OK, where are the new antibiotics?" Here's a key section:

Antibiotics face a daunting proposition. They are not only becoming more difficult to develop, but they are also not obviously profitable. Unlike, say, cancer drugs, which can be spectacularly expensive and may need to be taken for life, antibiotics do not command top dollar from hospitals. What’s more, they tend to be prescribed for only short periods of time.

Importantly, any new breakthrough antibiotic is likely to be jealously guarded by doctors and health officials for as long as possible, and used only as a drug of last resort to prevent bacteria from developing resistance. By the time it became a mass-market drug, companies fear, it could be already off patent and subject to competition from generics that would drive its price down.

Antibiotics are not the only drugs getting the cold shoulder, however. Research on treatments to combat H.I.V./AIDS is also drying up, according to the research at Yale, mostly because the cost and time required for development are increasing. Research into new cardiovascular therapies has mostly stuck to less risky “me too” drugs.

This mixes several different issues, unfortunately, and if a reader doesn't follow the drug industry (or medical research in general), then they may well not realize this. (And that's the most likely sort of reader for this article - people who do follow such things have heard all of this before). The reason that cardiovascular drug research seems to have waned is that we already have a pretty good arsenal of drugs for the most common cardiovascular conditions. There are a huge number of options for managing high blood pressure, for example, and they're mostly generic drugs by now. The same goes for lowering LDL: it's going to be hard to beat the statins, especially generic Lipitor. But there is a new class coming along targeting PCSK9 that is going to try to do just that. This is a very hot area of drug development (as the author of the Times column could have found without much effort), although the only reason it's so big is that PCSK9 is the only pathway known that could actually be more effective at lowering LDL than the statins. (How well it does that in the long term, and what the accompanying safety profile might be, are the subject of ongoing billion-dollar efforts). The point is, the barriers to entry in cardiovascular are, by now, rather high: a lot of good drugs are known that address a lot of the common problems. If you want to go after a new drug in the space, you need a new mechanism, like PCSK9 (and those are thin on the ground), or you need to find something that works against some of the unmet needs that people have already tried to fix and failed (such as stroke, a notorious swamp of drug development which has swallowed many large expeditions without a trace).

To be honest, HIV is a smaller-scale version of the same thing. The existing suite of therapies is large and diverse, and keeps the disease in check in huge numbers of patients. All sorts of other mechanisms have been tried as well, and found wanting in the development stage. If you want to find a new drug for HIV, you have a very high entry barrier again, because pretty much all of the reasonable ways to attack the problem have already been tried. The focus now is on trying to "flush out" latent HIV from cells, which might actually lead to a cure. But no one knows yet if that's feasible, how well it will work when it's tried, or what the best way to do it might be. There were headlines on this just the other day.

The barriers to entry in the antibiotic field are similarly high, and that's what this article seems to have missed completely. All the known reasonable routes of antibiotic action have been thoroughly worked over by now. As mentioned here the other day, if you just start screening your million-compound libraries against bacteria to see what kills them, you will find a vast pile of stuff that will kill your own cells, too, which is not what you want, and once you've cleared those out, you will find a still-pretty-vast pile of compounds that work through mechanisms that we already have antibiotics targeting. Needles in haystacks have nothing on this.

In fact, a lot of not-so-reasonable routes have been worked over, too. I keep sending people to this article, which is now seven years old and talks about research efforts even older than that. It's the story of GlaxoSmithKline's exhaustive antibiotics research efforts, and it also tells you how many drugs they got out of it all in the end: zip. Not a thing. From what I can see, the folks who worked on this over the last fifteen or twenty years at AstraZeneca could easily write the same sort of article - they've published all kinds of things against a wide variety of bacterial targets, and I don't think any of it has led to an actual drug.

This brings up another thing mentioned in the Times column. Here's the quote:

This is particularly striking at a time when the pharmaceutical industry is unusually optimistic about the future of medical innovation. Dr. Mikael Dolsten, who oversees worldwide research and development at Pfizer, points out that if progress in the 15 years until 2010 or so looked sluggish, it was just because it takes time to figure out how to turn breakthroughs like the map of the human genome into new drugs.

Ah, but bacterial genomes were sequenced before the human one was (and they're simpler, at that). Keep in mind also that proof-of-concept for new targets can be easier to obtain in bacteria (if you manage to find any chemical matter, that is). I well recall talking with a bunch of people in 1997 who were poring over the sequence data for a human pathogen, fresh off the presses, and their optimism about all the targets that they were going to find in there, and the great new approaches they were going to be able to take. They tried it. None of it worked. Over and over, none of it worked. People had a head start in this area, genomically speaking, with an easier development path than many other therapeutic areas, and still nothing worked.

So while many large drug companies have exited antibiotic research over the years, not all of them did. But the ones that stayed have poured effort and money, over and over, down a large drain. Nothing has come out of the work. There are a number of smaller companies in the space as well, for whom even a small success would mean a lot, but they haven't been having an easy time of it, either.

Now, one thing the Times article gets right is that the financial incentives for new antibiotics are a different thing entirely than the rest of the drug discovery world. Getting one of these new approaches in LDL or HIV to work would at least be highly profitable - the PCSK9 competitors certainly are working on that basis. Alzheimer's is another good example of an area that has yielded no useful drugs whatsoever despite ferocious amounts of effort, but people keep at it because the first company to find a real Alzheimer's drug will be very well rewarded indeed. (The Times article says that this hasn't been researched enough, either, which makes me wonder what areas have been). But any great new antibiotic would be shelved for emergencies, and rightly so.

But that by itself is not enough to explain the shortage of those great new antibiotics. It's everything at once: the traditional approaches are played out and the genomic-revolution stuff has been tried, so the unpromising economics makes the search for yet another approach that much harder.

Note: be sure to see the comments for perspectives from others who've also done antibiotic research, including some who disagree. I don't think we'll find anyone who says it's easy, but you never know.

Comments (58) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Infectious Diseases

July 24, 2014

Phenotypic Assays in Cancer Drug Discovery

Email This Entry

Posted by Derek

The topic of phenotypic screening has come up around here many times, as indeed it comes up very often in drug discovery. Give your compounds to cells or to animals and look for the effect you want: what could be simpler? Well, a lot of things could, as anyone who's actually done this sort of screening will be glad to tell you, but done right, it's a very powerful technique.

It's also true that a huge amount of industrial effort is going into cancer drug discovery, so you'd think that there would be a natural overlap between these: see if your compounds kill or slow cancer cells, or tumors in an animal, and you're on track, right? But there's a huge disconnect here, and that's the subject of a new paper in Nature Reviews Drug Discovery. (Full disclosure: one of the authors is a former colleague, and I had a chance to look over the manuscript while it was being prepared). Here's the hard part:

Among the factors contributing to the growing interest in phenotypic screening in drug discovery in general is the perception that, by avoiding oversimplified reductionist assumptions regarding molecular targets and instead focusing on functional effects, compounds that are discovered in phenotypic assays may be more likely to show clinical efficacy. However, cancer presents a challenge to this perception as the cell-based models that are typically used in cancer drug discovery are poor surrogates of the actual disease. The definitive test of both target hypotheses and phenotypic models can only be carried out in the clinic. The challenge of cancer drug discovery is to maximize the probability that drugs discovered by either biochemical or phenotypic methods will translate into clinical efficacy and improved disease control.

Good models in living systems, which are vital to any phenotypic drug discovery effort, are very much lacking in oncology. It's not that you can't get plenty of cancer cells to grow in a dish - they'll take over your other cell cultures if they get a chance. But those aren't the cells that you're going to be dealing with in vivo, not any more. Cancer cells tend to be genetically unstable, constantly throwing off mutations, and the in vitro lines are adapted to living in cell culture. That's true even if you implant them back into immune-compromised mice (the xenograft models). The number of drugs that have looked great in xenograft models and then failed in the real world is too large to count.

So doing pure phenotypic drug discovery against cancer is very difficult - you go down a lot of blind alleys, which is what phenotypic screening is supposed to prevent. The explosion of knowledge about cellular pathways in tumor cells has led to uncountable numbers of target-driven approaches instead, but (as everyone has had a chance to find out), it's rare to find a real-world cancer patient who can be helped by a single-target drug. Gleevec is the example that everyone thinks of, but the cruel truth is that it's the exceptional exception. All those newspaper articles ten years ago that heralded a wonderful era of targeted wonder drugs for cancer? They were wrong.

So what to do? This paper suggests that the answer is a hybrid approach:

For the purpose of this article, we consider ‘pure’ phenotypic screening to be a discovery process that identifies chemical entities that have desirable biological (phenotypic) effects on cells or organisms without having prior knowledge of their biochemical activity or mode of action against a specific molecular target or targets. However, in practice, many phenotypically driven discovery projects are not target-agnostic; conversely, effective target-based discovery relies heavily on phenotypic assays. Determining the causal relationships between target inhibition and phenotypic effects may well open up new and unexpected avenues of cancer biology.

In light of these considerations, we propose that in practice a considerable proportion of cancer drug discovery falls between pure PDD and TDD, in a category that we term ‘mechanism-informed phenotypic drug discovery’ (MIPDD). This category includes inhibitors of known or hypothesized molecular targets that are identified and/or optimized by assessing their effects on a therapeutically relevant phenotype, as well as drug candidates that are identified by their effect on a mechanistically defined phenotype or phenotypic marker and subsequently optimized for a specific target-engagement MOA.

I've heard these referred to as "directed phenotypic screens", and while challenging, they can be a very fruitful way to go. Balancing the two ways of working is the tricky part: you don't want to slack up on the model just so it'll give you results, if those results aren't going to be meaningful. And you don't want to be so dogmatic about your target ideas that you walk away from something that could be useful, but doesn't fit your scheme. If you can keep all these factors in line, you're a real drug discovery scientist, and no mistake.

How hard this is can be seen from the paper's Table 1, where they look over the oncology approvals since 1999, and classify them by what approaches were used for lead discovery and lead optimization. There's a pile of 21 kinase inhibitors (and eight other compounds) over in the box where both phases were driven by inhibition of a known target. And there are ten compounds whose origins were in straight phenotypic screening, with various paths forward after that. But the "mechanism-informed phenotypic screen" category is the shortest list of the three lead discovery approaches: seven compounds, optimized in various ways. (The authors are upfront about the difficulties of assembling this sort of overview - it can be hard to say just what really happened during discovery and development, and we don't have the data on the failures).

Of those 29 pure-target-based drugs, 18 were follow-ons to mechanisms that had already been developed. At this point, you'd expect to hear that the phenotypic assays, by contrast, delivered a lot more new mechanisms. But this isn't the case: 14 follow-ons versus five first-in-class. This really isn't what phenotypic screening is supposed to deliver (and has delivered in the past), and I agree with the paper that this shows how difficult it has been to do real phenotypic discovery in this field. The few assays that translate to the clinic tend to keep discovering the same sorts of things. (And once again, the analogy to antibacterials comes to mind, because that's exactly what happens if you do a straight phenotypic screen for antibacterials. You find the same old stuff. That field, too, has been moving toward hybrid target/phenotypic approaches).

The situation might be changing a bit. If you look at the drugs in the clinic (Phase II and Phase III), as opposed to the older ones that have made it all the way through, there is still a vast pile of target-driven ones (mostly kinase inhibitors). But you can find more examples of phenotypic candidates, and among them an unusually high proportion of outright no-mechanism-known compounds. Those are tricky to develop in this field:

In cases where the efficacy arises from the engagement of a cryptic target (or mechanism) other than the nominally identified one, there is potential for substantial downside. One of the driving rationales of targeted discovery in cancer is that patients can be selected by predictive biomarkers. Therefore, if the nominal target is not responsible for the actions of the drug, an incorrect diagnostic hypothesis may result in the selection of patients who will — at best — not derive benefit. For example, multiple clinical trials of the nominal RAF inhibitor sorafenib in melanoma showed no benefit, regardless of the BRAF mutation status. This is consistent with the evidence that the primary target and pharmacodynamic driver of efficacy for sorafenib is actually VEGFR2. The more recent clinical success of the bona fide BRAF inhibitor vemurafenib in melanoma demonstrates that the target hypothesis of BRAF for melanoma was valid.

So, if you're going to do this mechanism-informed phenotypic screening, just how do you go about it? High-content screening techniques are one approach: get as much data as possible about the effects of your compounds, both at the molecular and cellular level (the latter by imaging). Using better cell assays is crucial: make them as realistic as you can (three-dimensional culture, co-culture with other cell types, etc.), and go for cells that are as close to primary tissue as possible. None of this is easy, or cheap, but the engineer's triangle is always in effect ("Fast, Cheap, Good: Pick Any Two").

Comments (22) + TrackBacks (0) | Category: Cancer | Drug Assays | Drug Development

July 15, 2014

K. C. Nicolaou on Drug Discovery

Email This Entry

Posted by Derek

K. C. Nicolaou has an article in the latest Angewandte Chemie on the future of drug discovery, which may seem a bit surprising, considering that he's usually thought of as Mister Total Synthesis, rather than Mister Drug Development Project. But I can report that it's relentlessly sensible. Maybe too sensible. It's such a dose of the common wisdom that I don't think it's going to be of much use or interest to people who are actually doing drug discovery - you've already had all these thoughts yourself, and more than once.

But for someone catching up from outside the field, it's not a bad survey at all. It gets across how much we don't know, and how much work there is to be done. And one thing that writing this blog has taught me is that most people outside of drug discovery don't have an appreciation of either of those things. Nicolaou's article isn't aimed at a lay audience, of course, which makes it a little more problematic, since many of the people who can appreciate everything he's saying will already know what he's going to say. But it does round pretty much everything up into one place.

Comments (58) + TrackBacks (0) | Category: Drug Development | Drug Industry History

July 14, 2014

How to Run a Drug Project: Are There Any Rules at All?

Email This Entry

Posted by Derek

Here's an article from David Shayvitz at Forbes whose title says it all: "Should a Drug Discovery Team Ever Throw in the Towel?" The easy answer to that is "Sure". The hard part, naturally, is figuring out when.

You don’t have to be an expensive management consultant to realize that it would be helpful for the industry to kill doomed projects sooner (though all have said it).

There’s just the prickly little problem of figuring out how to do this. While it’s easy to point to expensive failures and criticize organizations for not pulling the plug sooner, it’s also true that just about every successful drug faced some legitimate existential crisis along the way — at some point during its development, there was a plausible reason to kill the program, and someone had to fight like hell to keep it going.

The question at the heart of the industry’s productivity struggles is the extent to which it’s even possible to pick the winners (or the losers), and figuring out better ways of managing this risk.

He goes on to contrast two approaches to this: one where you have a small company, focused on one thing, with the idea being that the experienced people involved will (A) be very motivated to find ways to get things to work, and (B) motivated to do something else if the writing ever does show up on the wall. The people doing the work should make the call. The other approach is to divide that up: you set things up with a project team whose mandate is to keep going, one way or another, dealing with all obstacles as best they can. Above them is a management team whose job it is to stay a bit distant from the trenches, and be ready to make the call of whether the project is still viable or not.

As Shayvitz goes on to say, quite correctly, both of these approaches can work, and both of them can run off the rails. In my view, the context of each drug discovery effort is so variable that it's probably impossible to say if one of these is truly better than the other. The people involved are a big part of that variability, too, and that makes generalizing very risky.

The big risk (in my experience) with having execution and decision-making in the same hands is that projects will run on for too long. You can always come up with more analogs to try, more experiments to run, more last-ditch efforts to take a crack at it. Coming up with those things is, I think, better than not coming up with them, because (as Shayvitz mentions) it's hard to think of a successful drug that hasn't come close to dying at least once during its development. Give up too easily, and nothing will ever work at all.

But it's a painful fact that not every project can work, no matter how gritty and determined the team. We're heading out into the unknown with these drug candidates, and we find out things that we didn't know were there to be found out. Sometimes there really is no way to get the selectivity you need with the compound series you've chosen - heck, sometimes there's no way to get it with any compound series you could possibly choose, although that takes a long time to become obvious. Sometimes the whole idea behind the project is flawed from the start: blocking Kinase X will not, in fact, alter the course of Disease Y. It just won't. The hypothesis was wrong. An execute-at-all-costs team will shrug off these fatal problems, or attempt to shrug them off, for as long as you give them money.

But there's another danger waiting when you split off the executive decision-makers. If those folks get too removed from the project (or projects) then their ability to make good decisions is impaired. Just as you can have a warped perspective when you're right on top of the problems, you can have one when you're far away from them, too. It's tempting to think that Distance = Clarity, but that's not a linear function, by any means. A little distance can certainly give you a lot of perspective, but if you keep moving out, things can start fuzzing back up again without anyone realizing what's going on.

That's true even if the managers are getting reasonably accurate reports, and we all know that that's not always the case in the real world. In many large organizations, there's a Big Monthly Meeting of some sort (or at some other regular time point) where projects are supposed to be reviewed by just those decision makers. These meetings are subject to terrible infections of Dog-And-Pony-itis. People get up to the front of the room and they tell everyone how great things are going. They minimize the flaws and paper over the mistakes. It's human nature. Anyone inclined to give a more accurate picture has a chance to see how that's going to look, when all the other projects are going Just Fine and everyone's Meeting Their Goals like it says on the form. Over time (and it may not take much time at all), the meeting floats away into its own bubble of altered reality. Managers who realize this can try to counteract it by going directly to the person running the project team in the labs, closing the office door, and asking for a verbal update on how things are really going, but sometimes people are so out of it that they mistake how things are going at the Big Monthly Meeting for what's really happening.

So yes indeed, you can (as is so often the case) screw things up in both directions. That's what makes it so hard to lay down the law about how to run a drug discovery project: there are several ways to succeed, and the ways to mess them up are beyond counting. My own bias? I prefer the small-company back-to-the-wall approach, of being ready to swerve hard and try anything to make a project work. But I'd only recommend applying that to projects with a big potential payoff - it seems silly to do that sort of thing for anything less. And I'd recommend having a few people watching the process, but from as close as they can get without being quite of the project team themselves. Just enough to have some objectivity. Simple, eh? Getting this all balanced out is the hard part. Well, actually, the science is the hard part, but this is the hard part that we can actually do something about.

Comments (14) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Life in the Drug Labs

July 10, 2014

A Drug Candidate from NCATS

Email This Entry

Posted by Derek

I've written several times about the NIH's NCATS program, their foray into "translational medicine". Now comes this press release that the first compound from this effort has been picked up for development by a biopharma company.

The company is AesRx (recently acquired by Baxter), and the compound is AES-103. This came from the rare-disease part of the initiative, and the compound is targeting sickle cell anemia - from what I've seen, it appears to have come out of a phenotypic screening effort to identify anti-sickling agents. It appears to work by stabilizing the mutant hemoglobin into a form where it can't polymerize, which is the molecular-level problem underlying the sickle-cell phenotype. For those who don't know the history behind it, Linus Pauling and co-workers were among the first to establish that a mutation in the hemoglobin protein was the key factor. Pauling coined the term "molecular disease" to describe it, and should be considered one of the founding fathers of molecular biology for that accomplishment, among others.

So what's AES-103? Well, you'll probably be surprised: it's hydroxymethyl furfural, which I would not have put high on my list of things to screen. That page says that the NIH screened "over 700 compounds" for this effort, which I hope is a typo, because that's an insanely small number. I would have thought that detecting the inhibition of sickling would be something that could be automated. If you were only screening 700 compounds, would this be one of them?

For those outside the business, I base that opinion on several things. Furans in general do not have a happy history in drug development. They're too electron-rich to play well in vivo, for the most part. This one does have an electron-withdrawing aldehyde on it, but aldehydes have their own problems. They're fairly reactive, and they tend to have poor pharmacokinetics. Aldehydes are, for example, well-known as protease inhibitors in vitro, but most attempts to develop them as drugs have ended in failure. And the only thing that's left on the molecule, that hydroxymethyl, is problematic, too. Having a group like that next to an aromatic ring has also traditionally been an invitation to trouble - they tend to get oxidized pretty quickly. So overall, no, I wouldn't have bet on this compound. There must be a story about why it was tested, and I'd certainly like to know what it is.
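
If you want to see those liabilities for yourself, here's a minimal RDKit sketch. It's my own illustration: the SMILES and the alert SMARTS patterns are rough stand-ins for the features discussed above, not anything from the NIH effort or a validated alert set.

```python
# A quick structural-alert check on 5-hydroxymethylfurfural (AES-103).
# Illustrative only: these SMARTS patterns are my own rough choices for
# the liabilities mentioned in the text.
from rdkit import Chem

mol = Chem.MolFromSmiles("OCc1ccc(C=O)o1")  # 5-hydroxymethylfurfural

alerts = {
    "electron-rich furan":       "c1ccoc1",
    "aldehyde":                  "[CX3H1]=O",
    "hydroxymethyl on aromatic": "[OX2H][CH2]c",
}

for name, smarts in alerts.items():
    hit = mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))
    print(f"{name:28s} {'FLAGGED' if hit else 'clean'}")
```

All three patterns light up on this one molecule, which is the point of the paragraph above: it's a structure that trips several of the usual med-chem alarms at once.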

But for all I know, those very properties are what are making it work. It may well be reacting with some residue on hemoglobin and stabilizing its structure in that way. The compound went into Phase I in 2011, and into Phase II last year, so it does have real clinical data backing it up at this point, and real clinical data can shut me right up. The main worry I'd have at this point is idiosyncratic tox in Phase III, which is always a worry, and more so, I'd think, with a compound that looks like this. We'll see how it goes.

Comments (19) + TrackBacks (0) | Category: Clinical Trials | Drug Development

June 26, 2014

Alzheimer's Bonds

Email This Entry

Posted by Derek

I wrote a couple of years ago about Andrew Lo of MIT, and his idea for securitization of drug discovery. For those of you who aren't financial engineers, that means raising funds by issuing securities (bonds and the like), and that's something that (as far as I know) has never been used to fund any specific drug development project.

Now Pharmalot has an update in an interview with Lo (who's recently published a paper on the idea in Science Translational Medicine). In particular, he's talking about issuing "Alzheimer's bonds", to pick a disease with no real therapies, a huge need for something, and gigantic cost barriers to finding something. Lo's concerned that the risks are too high for any one company to take on (and Eli Lilly might agree with him eventually), and wants to have some sort of public/private partnership floating the bonds.

We would create a fund that issues bonds. But if the private sector isn’t incentivized on its own, maybe the public sector can be incentivized to participate along with some members of the private sector. I will explain. But let’s look at the costs for a moment. The direct cost of treating the disease – never mind home care and lost wages – to Medicare and Medicaid for 2014 is estimated at $150 billion. We did a calculation and asked ourselves what kind of rate of return can we expect? We came up with $38.4 billion over 13 years. . .

. . .Originally, I thought it could come from the private sector. We’d create a fund – a mega fund of private investors, such as hedge funds, pension, various institutional investors. The question we asked ourselves is will they get a decent rate of return over a 13-year period? The answer, which is based on a best guess, given the risks of development and 64 projects, and we believed the answer was ‘no.’ It wouldn’t be like cancer or orphan diseases. It’s just not going to work. I come from that world. I talked to funds, philanthropists, medical experts. We did a reality check to see if we were off base. And it sounded like it would be difficult to create a fund to develop real drugs and still give investors a reasonable rate of return – 15% to 20%.

He's now going around to organizations like the Alzheimer's Association to see if there's some interest in giving this a try. I think that it's going to be a hard sell, but I'd actually like to see it happen. The difficulty is that there's no way to do this just a little bit to see if it works: you have to do it on a large scale to have any hope of success at all, and it's a big leap. In fact, the situation reminds one of. . .the situation with any given Alzheimer's drug idea. The clinical course of the disease, as we understand it now, does not give you any options other than a big, long, expensive path through the clinic (which is why it's the perfect example of an area where all the risk is concentrated on the expensive late stages). Lo is in the position of trying to address the go-big-or-go-home problem of Alzheimer's research with a remedy that requires investors to go big or go home.
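
For what it's worth, the diversification argument behind the megafund is easy to sketch out. Here's a back-of-the-envelope Monte Carlo in Python; every number in it is my own illustrative assumption, not Lo's:

```python
# A back-of-the-envelope Monte Carlo sketch (all numbers are illustrative
# assumptions, not from Lo's paper): why pooling many high-risk programs
# changes the chance of ending up with nothing at all.
import random

random.seed(0)
P_SUCCESS = 0.05      # assumed per-program probability of reaching approval
N_PROJECTS = 64       # portfolio size mentioned in the interview
N_TRIALS = 100_000

at_least_one = sum(
    any(random.random() < P_SUCCESS for _ in range(N_PROJECTS))
    for _ in range(N_TRIALS)
) / N_TRIALS

print(f"Single program succeeds:   {P_SUCCESS:.1%}")
print(f"Portfolio has >=1 success: {at_least_one:.1%}")
# With these assumptions the fund almost never comes up completely empty,
# which is the diversification case for issuing bonds against it - whether
# the returns clear 15-20% is a separate, and much harder, question.
```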

The hope is that you could learn enough along the way to change the risk equation in media res. There's an old science fiction story by A. E. van Vogt, "Far Centaurus", which featured (among other things - van Vogt stories generally had several kitchen sinks included) a multidecade suspended-animation expedition to Alpha Centauri. The crew arrive there to find the planets already covered with human-populated cities, settled by the faster-than-light spaceships that were invented in the interim. We don't need FTL to fix Alzheimer's (fortunately), but there could be advances that would speed up the later parts of Lo's fund. But will this particular expedition ever launch?

Comments (24) + TrackBacks (0) | Category: Alzheimer's Disease | Business and Markets | Clinical Trials | Drug Development

June 25, 2014

Zafgen's Epoxide Pays Its Way

Email This Entry

Posted by Derek

I've written here before about Zafgen, a small startup targeting obesity therapy with an unusual covalent epoxide drug candidate. Last fall they cleared Phase II, and now they're going public.

Bruce Booth, whose firm has been the VC muscle behind the company, has an overview of how things have worked here. It's a good read for anyone interested in where small drug companies come from and what they have to be able to do to survive. Many of the readers here will be familiar with the scientific part of this kind of story (as am I), but the financial and managerial parts have to be handled right, too, and mistakes with any of them can sink the whole effort.

I'll bet that if you'd asked Bruce or his partners back in 2006 for the odds, "Nice big IPO" would have been pretty far down their list of possibilities for the company, even if you'd stipulated success for their drug candidate. MetAP2 (the compound's target, which is something they didn't realize back then) is an interesting enzyme, and obesity has always been an interesting field (although not always in a good way). And on the scientific end, I'm most interested to see how that compound fares as it goes on through Phase III. It's a structure that a lot of us would have crossed off the list about three seconds after seeing it, and anything that extends the bounds of what's feasible in drug discovery is worth keeping an eye on. We very much need for more things to be possible.

Comments (13) + TrackBacks (0) | Category: Business and Markets | Diabetes and Obesity | Drug Development

June 24, 2014

Taking Risks - You Have To, So Do It Right

Email This Entry

Posted by Derek

In case you hadn't seen it, I wanted to highlight this post by Michael Gilman over at LifeSciVC. He's talking about risk in biotech, and tying it to the processes of generating, refining, and testing hypotheses. "The hypothesis", he says, "is one of the greatest intellectual creations of our species", and he's giving it its due.

I agree with him that time spent rethinking your hypothesis is often time well spent, whether for a single bench experiment or (most especially for) a big clinical trial. You need to be sure that you're asking the right question, that you're setting it up to be answered (one way or another), and that you're going to be able to get the maximum amount of useful information when that answer comes in, be it a Yes or a No. Sometimes this setup is obvious, but by the time you get to clinical trial design, it can be very tricky indeed.

For drug discovery, Gilman says, there are generally three kinds of hypothesis:

Biological hypothesis. What buttons do we believe this molecule pushes in target cells and what happens when these buttons are pushed? What biological pathways respond?

Clinical hypothesis. When these pathways are impacted, why do we believe it will move the needle on parameters that matter to patients and physicians? How will this intervention normalize physiology or reverse pathology?

Commercial hypothesis. If the first two hypotheses are correct, why do we believe anyone will care? Why will patients, physicians, and payers want this drug? How do we expect it to stand out from the crowd?

Many are the programs that have come to grief because of some sort of mismatch between these three. Clinical trials have been run uselessly because the original drug candidate was poorly characterized. Ostensibly successful trials have come to nothing because they were set up to answer the wrong questions. And ostensibly successful drug candidates have died in the marketplace because nobody wanted them. These are very expensive mistakes, and some extra time spent staring out the window while thinking about how to avoid them could have come in handy.

Gilman goes on to make a number of other good points about managing risk - for example, any experiment that shoulders a 100% share of the risk needs to be done as cheaply as possible. I would add, as a corollary, ". . .and not one bit cheaper", because that's another way that you can mess things up. At all times, you have to have a realistic idea of where you are in the process and what you're taking on. If you can find a way to do the crucial experiment without risking too much time or money, that's excellent news. On the other end of the scale, if there's no other way to do it than to put a big part of the company down on the table, then you'd better be sure that getting the answer is going to be worth that much effort. If it is, then be sure to spend the money to do it right - you're not going to get a second shot that easily.

The article also shows how you want to manage such risks across a broader portfolio. You'd like, if possible, to have plenty of programs that are front-loaded with their major risks, the sorts of things where you're not necessarily going to be hopping around the room with crossed fingers while you're waiting for the Phase III data. It's impossible to take all the risk out of a Phase III, true - but if you can get some of the big questions out of the way earlier, without having to go that far, so much the better. A portfolio made up of several gigantic multiyear money furnaces - say, Alzheimer's or rheumatoid arthritis - will be something else entirely.

Comments (10) + TrackBacks (0) | Category: Clinical Trials | Drug Development

June 16, 2014

A "Right to Try" Debate

Email This Entry

Posted by Derek

For those interested in the "Right to Try" debate, BioCentury TV has a program that includes both the pro and the con sides of the debate. Worth a look to see how sharply opinions divide on this issue - and I don't see them converging any time soon.

Comments (22) + TrackBacks (0) | Category: Clinical Trials | Drug Development

June 10, 2014

Right To Try: Here We Go

Email This Entry

Posted by Derek

At what point should an experimental drug be made available for anyone to try it? The usual answer is "Unless you're enrolled in a clinical trial, then not until it's no longer an experimental drug". There's always compassionate use, but that's a hard topic to deal with, and one that has a different answer for every drug. Otherwise, the regulatory position is that volunteers take unproven drugs, and paying patients take the ones that have been through testing and review.

Colorado would like to try something different. They've passed a "Right To Try" law, which allows a therapy to be prescribed after it's passed Phase I and is under active investigation in Phase II. (Arizona, Louisiana, and Missouri are heading in the same direction). Insurance companies are not required to pay for these, it should be noted, nor are drug companies required to offer access. But if there are willing patients and a willing company, they can work together. The patient has to be suffering from a terminal illness, and has to have exhausted all approved therapies (if any).

As my fellow science-blogger David Kroll notes, though, this doesn't seem to add much past what was already allowed by the FDA:

I submit that this seemingly well-meaning but meaningless Colorado act does nothing but create a sense of false hope for similar families. The act does nothing more than assuage the concerns of lawmakers that they haven’t done enough to help their constituents. Instead, they’ve done a disservice.

At Science-Based Medicine there are similar thoughts from oncologist David Gorski. He goes into the details of the law that's under consideration in Arizona, and worries that it has such broad definitions that it opens the door to unscrupulous operators. I've worried about that as well. The Arizona law also allows the companies (at their discretion) to charge for providing the drugs (Colorado's allows for at-cost charging). The fear is that some of these operators could run the lightest, breeziest "Phase I" trial they possibly could, and then settle down into a long, lucrative spell of milking desperate patients while their "Phase II" trials creep along bit by bit. I realize that that's not a very nice thing to assume about people, but as a character in The War of the Worlds says about a similar predatory proposal, there are those who would do it cheerful. In fact, we already have evidence of people working in just this fashion.

These arguments have come up around here before, when Andy Grove (ex-Intel) proposed changing the structure of clinical trials (more here), and when Andy Eschenbach (ex-FDA) proposed something similar himself. Balancing these things is very hard indeed, and anyone who says it isn't either hasn't thought about the situation enough or is eager to sell you something. We've come to Chesterton's Gate again:

There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

Gorski (in that Science-Based Medicine link above) has many other good points, but there's one more that I'd like to emphasize. Some of these laws seem to be based on the idea that there are all sorts of wonderful cures out there that for some reason are tied up in sloth and red tape. It isn't so. Clearing out bureaucratic obstacles, while no picnic, is still a lot easier than discovering drugs. And allowing patients access after a Phase I does not offer very good odds, considering that almost all clinical failures take place later than that. Plenty of tox gets discovered later than that, too - only the fast and nasty stuff gets picked up in Phase I.
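
Just to put rough numbers on that last point, here's a quick sketch with ballpark attrition rates. The figures are my own assumed round numbers for illustration, not data from the post or any specific study:

```python
# Rough, round attrition assumptions (my own, for illustration only) to
# show why "passed Phase I" still leaves long odds for a drug candidate.
p_phase2 = 0.35    # assumed chance of surviving Phase II
p_phase3 = 0.60    # assumed chance of surviving Phase III
p_approval = 0.90  # assumed chance of clearing registration

p_after_phase1 = p_phase2 * p_phase3 * p_approval
print(f"Chance a post-Phase I drug reaches approval: {p_after_phase1:.0%}")
# ~19% with these assumptions - roughly four out of five candidates offered
# under such a law would be expected to fail somewhere down the line anyway.
```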

So overall, I think that these laws offer, for the most part, chances for people to feel good about themselves for voting for them, and chances for patients to get their hopes up, likely for no reason. Even with that, I don't see them doing much harm compared to the existing regulatory regime, except for the provisions that offer companies the chance to charge money. Those give the added bonus of opening the door to unscrupulous quacks, some of whom might have very creative ideas of what "at cost" might mean.

According to Biocentury, a Colorado company is already planning to offer access through this program:

Neuralstem Inc. (NYSE-M:CUR) plans to take advantage of Colorado's right-to-try law to offer patients access to an experimental, unapproved human neural stem cell (hNSC) therapy (NSI-566) to treat amyotrophic lateral sclerosis. President and CEO Richard Garr told BioCentury the company will not apply for an IND or other permission from FDA, noting that "the Colorado right-to-try law allows a company to prescribe for a fatal disease a therapy that has passed a Phase I safety trial and is being actively pursued in a Phase II trial." Garr said Neuralstem's hNSC ALS therapy meets these criteria, and the company plans to start a Phase III trial next year.
Neuralstem is in the process of training surgeons and identifying a hospital and neurologists in Colorado to administer the hNSC therapy. The therapy will be administered with the identical procedure, cells and training as a clinical trial, but without FDA oversight and without "the artificial limitations built around a trial," said Garr. "The whole point of right-to-try is it sits parallel to the clinical trial process, it is not instead of clinical trials." Neuralstem has not determined whether it will charge Colorado patients.

I know nothing about Neuralstem or their therapy, so I'll defer comment until I learn more. Looks like we're going to see how this works, whether we're ready for it or not.

Comments (25) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Regulatory Affairs

May 23, 2014

Two Looks At Drug Industry Productivity

Email This Entry

Posted by Derek

Matthew Herper has a really interesting story in Forbes on a new report that attempts to rank biopharma companies by their R&D abilities. Richard Evans of Sector and Sovereign Health (ex-Roche) has ranked companies not on their number of drugs, but on their early-stage innovation. He counts patents, for example, but not the later ones defending existing franchises, and he also looks to see how often these patents are cited by others. As for a company's portfolio, being early into a new therapeutic area counts for a lot more than following someone else, but at the same time, he's also trying to give points for companies that avoid "Not Invented Here" behavior (a tricky balance, I'd think). The full report can be purchased, but the parts that Herper has shared are intriguing.

Ranking the companies, he has (1) Bristol-Myers Squibb, (2) Celgene, (3) Vertex, (4) Gilead, and (5) Allergan. (Note that Allergan is currently being pursued by Valeant, who will, if they buy them, pursue their sworn vow to immediately gut the company's R&D). At the bottom of his table are (18) Novartis, (19) Regeneron, (20) Bayer, (21) Lilly, and (22) Alexion. (Note that Evans himself says that his analysis may be off for companies that have only launched one product in the ten years he's covering). I'm not sure what to make of this, to be honest, and I think what would give a better picture would be if the whole analysis were done again but only with the figures from about fifteen years ago to see if what's being measured really had an effect on the futures of the various companies. That would not be easy, but (as some of Herper's sources also say), without some kind of back-testing, it's hard to say if there's something valuable here.

You can tell that Evans himself almost certainly appreciates this issue from what he has to say about the current state of the industry and the methods used to evaluate it, so the lack of a retrospective analysis is interesting. Here's the sort of thing I mean:

Too often, Evans says, pharmaceutical executives instead use the industry’s low success rates as an argument that success is right around the corner. “A gambler that has lost everything he owned, just because he now has a strong hand doesn’t make him a good gambler,” Evans says. . .

True enough. Time and chance do indeed happeneth to them all, and many are the research organizations who've convinced themselves that they're good when they might just have been lucky. (Bad luck, on the other hand, while not everyone's favorite explanation, is still trotted out a lot more often. I suspect that AstraZeneca, during that bad period they've publicly analyzed, was sure that they were just having a bad run of the dice. After all, I'm sure that some of the higher-ups there thought that they were doing everything right, so what else could it be?)
[Chart from the Evans report: ten-year annualized net income returns versus R&D spending]

But there's a particular chart from this report that I want to highlight. This one (in case that caption is too small) plots ten-year annualized net income returns against R&D spending, minus the cost of R&D capital. Everything has been adjusted for taxes and inflation. And that doesn't look too good, does it? These numbers would seem to line up with Bernard Munos' figures showing that industry productivity has been relatively constant, but only by constantly increased spending per successful drug. They also fit with this 2010 analysis from Morgan Stanley, where they warned that the returns on invested capital in pharma were way too high, considering the risks of failure.

So in case you thought, for some reason - food poisoning, concussion - that things had turned around, no such luck, apparently. That brings up this recent paper in Nature Reviews Drug Discovery, though, where several authors from Boston Consulting Group try to make the case that productivity is indeed improving. They've used peak sales as their measure of success, and they also believe that 2008 was the year when R&D spending started to come under control.
[Chart 1 from the Nature Reviews Drug Discovery paper: productivity ratio over time]

Before 2008, the combined effects of declining value outputs and ever-increasing R&D spending drove a rapid decline in R&D productivity, with many analysts questioning whether the industry as a whole would be able to return its cost of capital on R&D spending. . .we have analysed the productivity ratio of aggregate peak sales relative to R&D spending in the preceding 4 years. From a low of 0.12 in 2008, this has more than doubled to 0.29 in 2013. Through multiple engagements with major companies, we have observed that at a relatively steady state of R&D spending across the value chain, a productivity ratio of between 0.25 and 0.35 is required for a drug developer to meet its cost of capital of ~9%. Put simply, a company spending $1 billion annually on R&D needs to generate — on average — new drug approvals with $250–350 million in peak sales every year. . . So, although not approaching the productivity ratios of the late 1990s and early 2000s, the industry moved back towards an acceptable productivity ratio overall in 2013.

I would like to hope that this is correct, but I'm really not sure. This recent improvement doesn't look like much, graphically, compared to the way that things used to be. There's also a real disagreement between these two analyses, which is apparent even though the BCG chart only goes back to 1994. Its take on the mid-1990s looks a lot better than the Evans one, and this is surely due (at least partly) to the peak-sales method of evaluation. Is that a better metric, or not? You got me. One problem with it (as the authors of this paper also admit) is that you have to use peak-sale estimates to arrive at the recent figures. So with that level of fuzz in the numbers, I don't know if their chart shows recent improvement at all (as they claim), or how much.
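
To put numbers on the BCG ratio itself, here's a minimal sketch. The paper's exact averaging convention isn't spelled out in the excerpt above, so I've taken the preceding-four-year spend as an annual average, which reproduces their "$1 billion of R&D needs $250-350 million in peak sales" example; the figures below are hypothetical:

```python
# A minimal sketch of the BCG-style productivity ratio described above.
# The spending and peak-sales figures are made-up illustrations, and the
# averaging convention is my own assumption, not lifted from the paper.

def productivity_ratio(peak_sales_of_approvals: float,
                       rnd_spend_prior_4_years: list[float]) -> float:
    """Aggregate peak sales of a year's approvals divided by the average
    annual R&D spend over the preceding four years (all in $ billions)."""
    avg_spend = sum(rnd_spend_prior_4_years) / len(rnd_spend_prior_4_years)
    return peak_sales_of_approvals / avg_spend

# Hypothetical company: $1.0 billion/year R&D, approvals worth $290 million
# in estimated peak sales this year.
ratio = productivity_ratio(0.29, [1.0, 1.0, 1.0, 1.0])
print(f"Productivity ratio: {ratio:.2f}")  # 0.29 - inside the 0.25-0.35 band
# said to be needed to cover a ~9% cost of capital.
```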

But even the BCG method would say that the industry has not been meeting its cost-of-capital needs for the last ten years or so, which is clearly not how you want to run things. If they're right, and the crawl out of the swamp has begun, then good. But I don't know why we should have managed to do that since 2008; I don't think all that much has changed. My fear is that their numbers show an improvement because of R&D cuts, in which case, we're likely going to pay for those in a few years with a smaller number of approved drugs - because, again, I don't think anyone's found any new formula to spend the money more wisely. We shall see.

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

May 20, 2014

Where the Talent Comes From

Email This Entry

Posted by Derek

I occasionally talk about the ecosystem of the drug industry being harmed by all the disruptions of recent years, and this post by Bruce Booth is exactly the sort of thing that fits that category. He's talking about how much time it takes to get experience in this field, and what's been happening to the flow of people:

Two recent events sparked my interest in this topic of where young talent develops and emerges in our industry. A good friend and “greybeard” med chemist forwarded me a note from a chemistry professor who was trying to find a spot for his “best student”, a new PhD chemist. I said we tended to not hire new graduates into our portfolio, but was saddened to hear of this star pupil’s job challenge. Shortly after that, I had dinner with a senior chemist from Big Pharma. He said the shortest-tenured chemist on his 30+ person team was a 15-year veteran. His group had shrunk in the past and had never rehired. Since hiring a “trainee” post-doc chemist “counted” as an FTE on their books, they haven’t even implemented the traditional fellowship programs that exist elsewhere. Stories like these abound.

There is indeed a steady stream of big-company veterans who depart for smaller biopharma, bringing with them their experience (and usually a desire not to spend all their time holding pre-meeting meetings and the like, fortunately). But Booth is worried about a general talent shortage that could well be coming:

The short version of the dilemma is this: biotech startups have no margin for error around very tight timelines so can’t really “train” folks in drug discovery, and because of that they rely on bigger companies as the principle source for talent; but, at the same time, bigger firms are cutting back on research hiring and training, in part while offshoring certain science roles to other geographies, and yet are looking “outside” their walls for innovation from biotechs.

While I’d argue this talent flux is fine and maybe a positive right now, it’s a classic “chicken and egg” problem for the future. Without training in bigger pharma, there’s less talent for biotech; without that talent, biotech won’t make good drugs; without good biotech drugs, there’s no innovation for pharma, and then the end is nigh.

So if Big Pharma is looking for people from the small companies while the smaller companies are looking for people from Big Pharma, it does make you wonder where the supply will eventually come from. I share some of these worries, but at the same time, I think that it's possible to learn on the job at a smaller company, in the lower-level positions, anyway. And not everyone who's working at a larger company is learning what they should be. I remember once at a previous job when we were bringing in a med-chem candidate from a big company, a guy with 8 or 9 years experience. We asked him how he got along with the people who did the assays for his projects, and he replied that well, he didn't see them much, because they were over in another building, and they weren't supposed to be hanging around there, anyway. OK, then, what about the tox or formulations people? Well, he didn't go to those meetings much, because that was something that his boss was supposed to be in charge of. And so on, and so on. What was happening was that the structure of his company was gradually crippling this guy's career. He should have known more than he did; he should have been more experienced than he really was, and the problem looked to be getting worse every year. There's plenty of blame to go around, though - not only was the structure of his research organization messing this guy up, but he himself didn't even seem to be noticing it, which was also not a good sign. This is what Booth is talking about here:

. . .the “unit of work” in drug R&D is the team, not the individual, and success is less about single expertise and more about how it gets integrated with others. In some ways, your value to the organization begins to correlate with more generalist, integrative skills rather than specialist, academic ones; with a strong R&D grounding, this “utility player” profile across drug discovery becomes increasingly valuable.

And it’s very hard to learn these hard and soft things, i.e., grow these noses, inside of a startup environment with always-urgent milestones to hit in order to get the next dollop of funding, and little margin of error in the plan to get there. This is true in both bricks-and-mortar startups and virtual ones.

With the former, these lab-based biotechs can spin their wheels inefficiently if they hire too heavily from academia – the “book smart” rather than “research-street smart” folks. It’s easy to keep churning out experiments to “explore” the science – but breaking the prevailing mindset of “writing the Nature paper” versus “making a drug” takes time, and this changes what experiments you do. . .

Bruce took a poll of the R&D folks associated with his own firm's roster of startups, and found that almost all of them were trained at larger companies, which certainly says something. I wonder, though, if this current form of the ecosystem is a bit of an artifact. Times have been so tough the last ten to fifteen years that there may well be a larger proportion of big-company veterans who have made the move to smaller firms, either by choice or out of necessity. (In a similar but even more dramatic example, the vast herds of buffalo and flocks of passenger pigeons described in the 19th century were partly (or maybe largely) due to the disruption of the hunting patterns of the American Indians, who had been displaced and quite literally decimated by disease - see the book 1491 for more on this).

The other side of all this, as mentioned above, is the lack of entry-level drug discovery positions in the bigger companies. Many readers here have mentioned this over the last few years, that the passing on of knowledge and experience from the older researchers to the younger ones has been getting thoroughly disrupted (as the older ones get laid off and the younger ones don't get hired). We don't want to find ourselves in the position of Casey Stengel, looking at his expansion-team Mets and asking "Don't anybody here know how to play this game?"

Booth's post has a few rays of hope near the end - read the whole thing to find them. I continue to think that drug discovery is a valuable enough activity that the incentives will keep it alive in one form or another, but I also realize that that's no guarantee, either. We (and everyone else with a stake in the matter) have to realize that we could indeed screw it up, and that we might be well along the way to doing it.

Comments (15) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development | Drug Industry History | How To Get a Pharma Job

May 19, 2014

AstraZeneca Looks At Its Own History, And Cringes

Email This Entry

Posted by Derek

While we're talking about AstraZeneca, here's a look at their recent drug development history from the inside. The company had undertaken a complete review of its portfolio and success rates (as well they might, given how things have been going overall).

In this article, we discuss the results of a comprehensive longitudinal review of AstraZeneca's small-molecule drug projects from 2005 to 2010. The analysis allowed us to establish a framework based on the five most important technical determinants of project success and pipeline quality, which we describe as the five 'R's: the right target, the right patient, the right tissue, the right safety and the right commercial potential. A sixth factor — the right culture — is also crucial in encouraging effective decision-making based on these technical determinants. AstraZeneca is currently applying this framework to guide its R&D teams, and although it is too early to demonstrate whether this has improved the company's R&D productivity, we present our data and analysis here in the hope that it may assist the industry overall in addressing this key challenge.

That already gets things off to a bad start, in my opinion, because I really hate those alliterative "Five Whatevers" and "Three Thingies" that companies like to proclaim. And that's not just because Chairman Mao liked that stuff, although that is reason enough to wonder a bit. I think that I suffer from Catchy Slogan Intolerance, a general disinclination to believe that reality can be usefully broken down into discrete actions and principles that just all happen to start with the same letter. I think these catchphrases quantify the unquantifiable and simplify what shouldn't be simplified. The shorter, snappier, and more poster-friendly the list of recommendations, the less chance I think they have of being of any actual use. Other than setting people's teeth on edge, which probably isn't the goal.

That said, this article itself does a perfectly good job of laying out many of the things that have been going wrong in the big pharma organizations. See if any of this rings a bell for you:

. . .However, with the development of high-throughput and ultra-high-throughput screening and combinatorial chemistry approaches during the 1980s and 1990s, as well as the perception that a wealth of new targets would emerge from genomics, part of this productivity issue can also be attributed to a shift of R&D organizations towards the 'industrialization' of R&D. The aim was to drive efficiency while retaining quality, but in some organizations this led to the use of quantity-based metrics to drive productivity. The hypothesis was simple: if one drug was launched for every ten candidates entering clinical development, then doubling or tripling the number of candidates entering development should double or triple the number of drugs approved. However, this did not happen; consequently, R&D costs increased while output — as measured by launched drugs — remained static.

This volume-based approach damaged not only the quality and sustainability of R&D pipelines but, more importantly, also the health of the R&D organizations and their underlying scientific curiosity. This is because the focus of scientists and clinicians moved away from the more demanding goal of thoroughly understanding disease pathophysiology and the therapeutic opportunities, and instead moved towards meeting volume-based goals and identifying an unprecedented level of back-up and 'me too' drug candidates. In such an environment, 'truth-seeking' behaviours to understand disease biology may have been over-ridden by 'progression-driven' behaviours that rewarded scientists for meeting numerical volume-based goals.

Thought so. Pause to shiver a bit (that's what I did - it seemed to help). The AZ team looked at everything that had been active during the 2005-2010 period, from early preclinical up to the end of Phase II. What they found, compared to the best figures on industry averages, was that the company looked pretty normal in the preclinical area (as measured by number of projects and their rates of progression, anyway), and that they actually had a higher-than-usual pass rate through Phase I. Phase II, though, was nasty - they had a noticeably higher failure rate, suggesting that too many projects were being allowed to get that far. And although they weren't explicitly looking beyond Phase II, the authors do note that AZ's success rate at getting drugs all the way to market was significantly lower than the rest of the industry's as well.

The biggest problem seemed to be safety and tox. This led to many outright failures, and to other cases where the human doses ended up limited to non-efficacious levels.

During preclinical testing, 75% of safety closures were compound-related (that is, they were due to 'off-target' or other properties of the compound other than its action at the primary pharmacological target) as opposed to being due to the primary pharmacology of the target. By contrast, the proportion of target-related safety closures rose substantially in the clinical phase and was responsible for almost half of the safety-related project closures. Such failures were often due to a collapse in the predicted margins between efficacious doses and safety outcomes, meaning it was not possible to achieve target engagement or patient benefit without incurring an unacceptable safety risk.

On top of this problem, an unacceptable number of compounds that made it through safety were failing in Phase II through lack of efficacy. There's a good analysis of how this seems to have happened, but a big underlying factor seems to have been the desire to keep progressing compounds to meet various targets. People kept pushing things ahead, because things had to be pushed ahead, and the projects kept scooting along the ground until they rolled off into one ravine or another.

And I think that everyone with some experience in this business will know exactly what that feels like - this is not some mysterious ailment that infected AstraZeneca, although they seem to have had a more thorough case of it than usual. Taking the time to work out what a safety flag might be telling you, understand tricky details of target engagement, or figure out the right patient population or the right clinical endpoint - these things are not always popular. And to be fair, there are a near-infinite number of reasons to slow a project down (or stop it altogether), and you can't stop for every one of them. But AZ's experience shows, most painfully, that you can indeed stop too few projects. Here's a particularly alarming example of that:

In our analysis, another example of the impact of volume-based goals could be seen in the strategy used to select back-up drug candidates. Back-up molecules are often developed for important projects where biological confidence is high. They should be structurally diverse to mitigate the risk for the programme against compound-related issues in preclinical or early development, and/or they should confer some substantial advantage over the lead molecule. When used well, this strategy can save time and maintain the momentum of a project. However, with scientists being rewarded for the numbers of candidates coming out of the research organization, we observed multiple projects for which back-up molecules were not structurally diverse or a substantial improvement over the lead molecule. Although all back-up candidates met the chemical criteria for progression into clinical testing, and research teams were considered to have met their volume-based goals, these molecules did not contribute to the de-risking of a programme or increase project success rates. As a consequence, all back-up candidates from a 'compound family' could end up failing for the same reason as the lead compound and indeed had no higher probability of a successful outcome than the original lead molecule (Fig. 6). In one extreme case, we identified a project with seven back-up molecules in the family, all of which were regarded as a successful candidate delivery yet they all failed owing to the same preclinical toxicology finding. This overuse of back-up compounds resulted in a highly disproportionate number of back-up candidates in the portfolio. At the time of writing, approximately 50% of the AstraZeneca portfolio was composed of back-up molecules.

I'm glad this paper exists, since it can serve as a glowing, pulsing bad example to other organizations (which I'm sure was the intention of its author, actually). This is clearly not the way to do things, but it's also easy for a big R&D effort to slip into this sort of behavior, while all the time thinking that it's doing the right things for the right reasons. Stay alert! The lessons are the ones you'd expect:

An underlying theme that ran through the interviews with our project teams was how the need to maintain portfolio volume led to individual and team rewards being tied to project progression rather than 'truth-seeking' behaviour. The scientists and clinicians within the project teams need to believe that their personal success and careers are not intrinsically linked to project progression but to scientific quality, smart risk-taking and good decision-making.

But this is not the low energy state of a big organization. This sort of behavior has to be specifically encouraged and rewarded, or it will disappear, to be replaced by. . .well, you all know what it's replaced by. The sort of stuff detailed in the paper, and possibly even worse. What's frustrating is that none of these are new problems that AZ had to discover. I can bring up my own evidence from twelve years ago, and believe me, I was late to the party complaining about this sort of thing. Don't ever think that it can't happen some more.

Comments (46) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History

May 15, 2014

The Daily Show on Finding New Antibiotics

Email This Entry

Posted by Derek

A reader sent along news of this interview on "The Daily Show" with Martin Blaser of NYU. He has a book out, Missing Microbes, on the overuse of antibiotics and the effects on various microbiomes. And I think he's got a lot of good points - we should only be exerting selection pressure where we have to, not (for example) slapping triclosan on every surface because it somehow makes consumers feel "germ-free". And there are (and always have been) too many antibiotics dispensed for what turn out to be viral infections, for which they will, naturally, do no good at all and probably some harm.

But Dr. Blaser, though an expert on bacteria, does not seem to be an expert on discovering drugs to kill bacteria. I've generated a transcript of part of the interview, starting around the five-minute mark, which went like this:

Stewart: Isn't there some way, that, the antibiotics can be used to kill the strep, but there can be some way of rejuvenating the microbiome that was doing all those other jobs?

Blaser: Well, that's what we need to do. We need to make narrow-spectrum antibiotics. We have broad-spectrum, that attack everything, but we have the science that we could develop narrow-spectrum antibiotics that will just target the one organism - maybe it's strep, maybe it's a different organism - but then we need the diagnostics, so that somebody going to the doctor, they say "You have a virus" "You have a bacteria", if you have a bacteria, which one is it?

Stewart: Now isn't this where the genome-type projects are going? Because finding the genetic makeup of these bacteria, won't that allow us to target these things more specifically?

Blaser Yeah. We have so much genomic information - we can harness that to make better medicine. . .

Stewart: Who would do the thing you're talking about, come up with the targeted - is it drug companies, could it, like, only be done through the CDC, who would do that. . .

Blaser: That's what we need taxes for. That's our tax dollars. Just like when we need taxes to build the road that everybody uses, we need to develop the drugs that our kids and our grandkids are going to use so that these epidemics could be stopped.

Stewart: Let's say, could there be a Manhattan Project, since that's the catch-all for these types of "We're going to put us on the moon" - let's say ten years, is that a realistic goal?

Blaser: I think it is. I think it is. We need both diagnostics, we need narrow-spectrum agents, and we have to change the economic base of how we assess illness in kids and how we treat kids and how we pay doctors. . .

First off, from a drug discovery perspective, a narrow-spectrum antibiotic, one that kills only (say) a particular genus of bacterium, has several big problems: it's even harder to discover than a broader-spectrum agent, its market is much smaller, it's much harder to prescribe usefully, and its lifetime as a drug is shorter. (Other than that, it's fine). The reasons for these are as follows:

Most antibiotic targets are enzyme systems peculiar to bacteria (as compared to eukaryotes like us), but such targets are shared across a lot of bacteria. They tend to be aimed at things like membrane synthesis and integrity (bacterial membranes are rather different than those of animals and plants), or target features of DNA handling that are found in different forms due to bacteria having no nuclei, and so on. Killing bacteria with mechanisms that are also found in human cells is possible, but it's a rough way to go: a drug of that kind would be similar to a classic chemotherapy agent, killing the fast-dividing bacteria (in theory) just before killing the patient.

So finding a Streptococcus-only drug is a very tall order. You'd have to find some target-based difference between those bacteria and all their close relatives, and I can tell you that we don't know enough about bacterial biochemistry to sort things out quite that well. Stewart brings up genomic efforts, and points to him for it, because that's a completely reasonable suggestion. Unfortunately, it's a reasonable suggestion from about 1996. The first complete bacterial genomes became available in the late 1990s, and have singularly failed to produce any new targeted antibiotics whatsoever. The best reference I can send people to is the GSK "Drugs For Bad Bugs" paper, which shows just what happened (and not just at GSK) to the new frontier of new bacterial targets. Update: see also this excellent overview. A lot of companies tried this, and got nowhere. It did indeed seem possible that sequencing bacteria would give us all sorts of new ways to target them, but that's not how it's worked out in practice. Blaser's interview gives the impression that none of this has happened yet, but believe me, it has.

The market for a narrow-spectrum agent would necessarily be smaller, by design, but the cost of finding it would (as mentioned above) be greater, so the final drug would have to cost a great deal per dose - more than health insurance would want to pay, given the availability of broad-spectrum agents at far lower prices. It could not be prescribed without positively identifying the infectious agent - which adds to the cost of treatment, too. Without faster and more accurate ways to do this (which Blaser rightly notes as something we don't have), the barriers to developing such a drug are even higher.

And the development of resistance would surely take such a drug out of usefulness even faster, since the resistance plasmids would only have to spread between very closely related bacteria, who are swapping genes at great speed. I understand why Blaser (and others) would like to have more targeted agents, so as not to plow up the beneficial microbiome every time a patient is treated, but we'd need a lot of them, and we'd need new ones all the time. This in a world where we can't even seem to discover the standard type of antibiotic.

And not for lack of trying, either. There's a persistent explanation for the state of antibiotic therapy that blames drug companies for supposedly walking away from the field. This has the cause and effect turned around. It's true that some of them have given up working in the area (along with quite a few other areas), but they left because nothing was working. The companies that stayed the course have explored, in great detail and at great expense, the problem that nothing much is working. If there ever was a field of drug discovery where the low-hanging fruit has been picked clean, it is antibiotic research. You have to use binoculars to convince yourself that there's any more fruit up there at all. I wish that weren't so, very much. But it is. Bacteria are hard to kill.

So the talk later on in the interview of spending some tax dollars and getting a bunch of great new antibiotics in ten years is, unfortunately, a happy fantasy. For one thing, getting a single new drug onto the market in only ten years from the starting pistol is very close to impossible, in any therapeutic area. The drug industry would be in much better shape if that weren't so, but here we are. In that section, Jon Stewart actually brings to life one of the reasons I have this blog: he doesn't know where drugs come from, and that's no disgrace, because hardly anyone else knows, either.

Comments (58) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Infectious Diseases

May 7, 2014

The Lessons of Intercept and NASH

Email This Entry

Posted by Derek

Over at LifeSciVC, Tom Hughes has a post about Intercept Pharmaceuticals and their wild ride with an FXR ligand for non-alcoholic steatohepatitis (NASH). Anyone who owned ICPT will recall that period vividly, since the positive news from the clinical trial sent the company's stock from an already-not-cheap $72/share to a where's-the-oxygen-tank peak of $445 in two days of bug-eyed trading. NASH is potentially a big market, and there's really nothing out there to treat it, so Intercept's investors (many of them apparently fresh converts to the cause) decided that the company was set to do very well for itself indeed.

Hughes points out that NASH is not an easy thing to get into, though. It's underserved for a reason - there are a lot of biological and mechanistic question marks, and the clinical trials are no stroll through the tulip beds, either. There's no way to be sure that you're having the desired effects without doing a series of liver biopsies on the patients, and even at that the FDA still has to issue some key regulatory guidance about whether these tissue markers will be sufficient for approval. His thesis is that it took a small company like Intercept to accept all these risks.

He's probably right, especially when you couple them with an uncertain market size - potentially large, but who knows for sure? It's a hard sell at a big company, where the feeling can be that if you're going to go to all that trouble, you might as well aim for a larger market while you're at it, and one with more clarity. Adding a few hundred million (maybe) to the sales figures of Leviathan Pharma might not get upper management all that excited, especially since so much stuff would have to get pioneered along the way. But adding a few hundred million to the revenues of a small company that has nothing else - now that's something worth suiting up for. All the talk about how small companies are so much more nimble and innovative (which is not always true, but certainly not all talk, either) might just come down to the old line about necessity being the mother of invention. Or Samuel Johnson's old line about when a man is to be hanged in a fortnight, that it concentrates his mind wonderfully - that one, too.

Intercept, for its part, has seen its stock fall back to the lowly $250-300 range, which probably isn't thrilling for the folks who bought in at $445. Actually, it fell back to that range almost immediately, then worked its way back over $400 again, and now back down to this, so there have been plenty of opportunities for hair loss among its investors. When the FDA issues its guidance in the NASH area, it will presumably go through another blast of frenzied trading, and good luck to whoever's in it (long or short) when that happens. And I haven't checked, but I'll bet that the options on this one would cost you plenty, too. From a safe distance, it should be quite a show.

Comments (5) + TrackBacks (0) | Category: Business and Markets | Drug Development

May 6, 2014

Telling the Truth

Email This Entry

Posted by Derek

Here's an editorial by well-known drug discovery guy Mark Murcko, appearing in an unexpected place: a newspaper in Abu Dhabi. It's an edited version of a talk he gave recently at NYU, and he's telling the audience exactly what it's really like:

But the successes are rare, so we have to talk more about our lack of knowledge – our ignorance.

Ignorance afflicts all aspects of medicine. We are starting from a very primitive state. Medicine today is reactive and myopic. We go to the doctor when we are sick (reactive), and the doctor has very few tools with which to study our illness (myopic).

And we simply don’t know enough about how the human body works – it is not clear exactly how most medicines work. Two molecules that differ only by a few atoms may have very different effects in the body. One may be a life-saving drug while the other is a poison.

So what exactly do we mean by “ignorance”? Prof Stuart Firestein of Columbia University points out that “ignorance” is not a criticism; it simply reflects the fact that we scientists, as a community, lack a great deal of fundamental knowledge.

If I had to pick one thing that the general public doesn't realize about drug discovery, or medicine in general, this would probably be the one I'd pick. It's the root cause of our scientific difficulties, and the root of a lot of suspicion and misunderstanding from others. The mass of knowledge we've accumulated about biochemistry and disease looks so huge and imposing, even if you understand a lot of it. Now imagine how it looks if you never cared much for the chemistry and biology that you had in school. So it has to be enough to fix diseases, right? And if there aren't any cures available, it sort of has to be because of laziness, greed, or evil conspiracies, then - right?

So I'm always glad to see talks like this one being given (and Murcko is eminently qualified to do it). It's another reason that I have this blog, and when I hear from people outside of drug discovery (and outside of science in general), telling me that they never knew X or Y or Z, it makes my day.

Comments (11) + TrackBacks (0) | Category: Drug Development

April 7, 2014

Outsourcing Everything

Email This Entry

Posted by Derek

Here's an article in Drug Discovery Today on "virtual pharmaceutical companies", and people who've been around the industry for some years must be stifling yawns already. That idea has been around a long time. The authors here defined a "VPC" as one that has a small managerial core, and outsources almost everything else:

The goal of a VPC is to reach fast proof of concept (PoC) at modest cost, which is enabled by the lack of expensive corporate infrastructure to be used for the project and by foregoing activities, such as synthesis optimization, which are unnecessary for the demonstration of PoC. . .The term ‘virtual’ refers to the business model of such a company based on the managerial core, which coordinates all activities with external providers, and on the lack of internal production or development facilities, rather than to the usage of the internet or electronic communication. Any service provider available on the market can be chosen for a project, because almost no internal investments in fixed assets are made.

And by necessity, such a company lives only to make deals with a bigger (non-virtual) company, one that can actually do the clinical trials, manufacturing, regulatory, sales and so on. There's another necessity - such a company has to get pretty nice chemical matter pretty quickly, it seems to me, in order to have something to develop. The longer you go digging through different chemical series and funny-looking SAR, all while doing it with outsourced chemistry and biology, the worse off you're going to be. If things are straightforward, it could work - but when things are straightforward, a lot of stuff can work. The point of having your own scientists (well, one big point) is for them to be able to react in real time to data and make their own decisions on where to go next. The better outsourcing people can do some of that, too, but their costs are not that big a savings, for that very reason. And it's never going to be as nimble as having your own researchers in-house. (If your own people aren't any more nimble than lower-priced contract workers, you have a different problem).

The people actually doing the managing have to be rather competent, too:

All these points suggest that the know-how and abilities of the members of the core management team are central to the success of a VPC, because they are the only ones with the full in-depth knowledge concerning the project. The managers must have strong industrial and academic networks, be decisive and unafraid to pull the plug on unpromising projects. They further need extensive expertise in drug development and clinical trial conduction, proven leadership and project management skills, entrepreneurial spirit and proficiency in handling suppliers. Of course, the crucial dependency on the skills of every single team member leaves little room for mistakes or incompetency, and the survival of a VPC might be endangered if one of its core members resigns unexpectedly

I think that the authors wanted to say "incompetence" rather than "incompetency" up there, but I believe that they're all native German speakers, so no problem. If that had come from some US-based consultants, I would have put it down to the same mental habit that makes people say "utilized" instead of "used". But the point is a good one: the smaller the organization, the less room there is to hide. A really large company can hold (and indeed, tends to accumulate) plenty of people who need the cover.

The paper goes on to detail several different ways that a VPC can work with a larger company. One of the ones I'm most curious about is the example furnished by Chorus and Eli Lilly. Chorus was founded from within Lilly as a do-everything-by-outsourcing team, and over the years, Lilly's made a number of glowing statements about how well they've worked out. I have, of course, no inside knowledge on the subject, but at the same time, many other large companies seem to have passed on the opportunity to do the same thing.

I continue to see the "VPC" model as a real option, but only in special situations. When there's a leg up on the chemistry and/or biology (a program abandoned by a larger company for business reasons, an older compound repurposed), then I think it can work. Trying it completely from the ground up, though, seems problematic to me - although that could be because I've always worked in companies with in-house research. And it's true that even the stuff that's going on right down the hall doesn't work out all that often. One response to that is to say "Well, then, why not do the same thing more cheaply?" But another response is "If the odds are bad with your own people under your own roof, what are they when you contract everything out?"

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Development

March 25, 2014

A New Way to Study Hepatotoxicity

Email This Entry

Posted by Derek

Every medicinal chemist fears and respects the liver. That's where our drugs go to die, or at least to be severely tested by that organ's array of powerful metabolizing enzymes. Getting a read on a drug candidate's hepatic stability is a crucial part of drug development, but there's an even bigger prize out there: predicting outright liver toxicity. That, when it happens, is very bad news indeed, and can torpedo a clinical compound that seemed to be doing just fine - up until then.

Unfortunately, getting a handle on liver tox has been difficult, even with such strong motivation. It's a tough problem. And given that most drugs are not hepatotoxic, most of the time, any new assay that overpredicts liver tox might be even worse than no assay at all. There's a paper in the latest Nature Biotechnology, though, that looks promising.

What the authors (from Stanford and Toronto) are doing is trying to step back to the early mechanism of liver damage. One hypothesis has been that the production of reactive oxygen species (ROS) inside hepatic cells is the initial signal of trouble. ROS are known to damage biomolecules, of course. But more subtly, they're also known to be involved in a number of pathways used to sense that cellular damage (and in that capacity, seem to be key players in inducing the beneficial effects of exercise, among other things). Aerobic cells have had to deal with the downsides of oxygen for so long that they've learned to make the most of it.
[Image: isoniazid structure]
This work (building on some previous studies from the same group) uses polymeric nanoparticles. They're semiconductors, and hooked up to be part of a fluorescence or chemiluminescence readout. (They use FRET for peroxynitrite and hypochlorite detection, more indicative of mitochondrial toxicity, and CRET for hydrogen peroxide, more indicative of Phase I metabolic toxicity). The particles are galactosylated to send them towards the liver cells in vivo, confirmed by necropsy and by confocal imaging. The assay system seemed to work well by itself, and in mouse serum, so they dosed it into mice and looked for what happened when the animals were given toxic doses of either acetaminophen or isoniazid (both well-known hepatotox compounds at high levels). And it seems to work pretty well - they could image both the fluorescence and the chemiluminescence across a time course, and the dose-responses make sense. It looks like they're picking up nanomolar to micromolar levels of reactive species. They could also show the expected rescue of the acetaminophen toxicity with some known agents (like GSH), but could also see differences between them, both in the magnitude of the effects and in their time courses.

The chemiluminescent detection has been done before, as has the FRET one, but this one seems to be more convenient to dose, and having both ROS detection systems going at once is nice, too. One hopes that this sort of thing really can provide a way to get a solid in vivo read on hepatotoxicity, because we sure need one. Toxicologists tend to be a conservative bunch, with good reason, so don't look for this to revolutionize the field by the end of the year or anything. But there's a lot of promise here.

There are some things to look out for, though. For one, since these are necessarily being done in rodents, there will be differences in metabolism that will have to be taken into account, and some of those can be rather large. Not everything that injures a mouse liver will do so in humans, and vice versa. It's also worth remembering that hepatotoxicity is also a major problem with marketed drugs. That's going to be a much tougher problem to deal with, because some of these cases are due to overdose, some to drug-drug interactions, some to drug-alcohol interactions, and some to factors that no one's been able to pin down. One hopes, though, that if more drugs come through showing a clean liver profile, these problems might ease a bit.

Comments (13) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics | Toxicology

March 20, 2014

Small Molecule Chemistry's "Limited Utility"?

Email This Entry

Posted by Derek

Over at LifeSciVC, guest blogger Jonathan Montagu talks about small molecules in drug discovery, and how we might move beyond them. Many of the themes he hits have come up around here, understandably - figuring out why (and how) some huge molecules manage to have good PK properties, exploiting "natural-product-like" chemical space (again, if we can figure out a good way to do that), working with unusual mechanisms (allosteric sites, covalent inhibitors and probes), and so on. Well worth a read, even if he's more sanguine about structure-based drug discovery than I am. Most people are, come to think of it.

His take is very similar to what I've been telling people in my "state of drug discovery" presentations (at Illinois, most recently) - that we medicinal chemists need to stretch our definitions and move into biomolecule/small molecule hybrids and the like. These things need the techniques of organic chemistry, and we should be the people supplying them. Montagu goes even further than I do, saying that ". . .I believe that small molecule chemistry, as traditionally defined and practiced, has limited utility in today’s world." That may or may not be correct at the moment, but I'm willing to bet that it's going to become more and more correct in the future. We should plan accordingly.

Comments (31) + TrackBacks (0) | Category: Chemical Biology | Chemical News | Drug Development | Drug Industry History

March 17, 2014

Predicting What Group to Put On Next

Email This Entry

Posted by Derek

Here's a new paper in J. Med. Chem. on software that tries to implement matched-molecular-pair type analysis. The goal is a recommendation - what R group should I put on next?

Now, any such approach is going to have to deal with this paper from Abbott in 2008. In that one, an analysis of 84,000 compounds across 30 targets strongly suggested that most R-group replacements had, on average, very little effect on potency. That's not to say that they don't or can't affect binding, far from it - just that over a large series, those effects are pretty much a normal distribution centered on zero. There are also analyses that claim the same thing for adding methyl groups - to be sure, there are many dramatic "magic methyl" enhancement examples, but are they balanced out, on the whole, by a similar number of dramatic drop-offs, along with a larger cohort of examples where not much happened at all?
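
To make that kind of analysis concrete, here's a minimal sketch (with invented pIC50 values - this is not the Abbott data set) of the matched-pair bookkeeping involved: collect the potency change for one specific R-group swap across many cores, then look at the shape of the resulting distribution.

```python
# Minimal sketch (hypothetical data) of matched-pair potency analysis:
# gather pIC50 differences for the same R-group swap across many cores,
# then summarize the distribution of those differences.
from statistics import mean, stdev

# (core_id, r_group, pIC50) -- invented numbers, purely illustrative
measurements = [
    ("core1", "H", 6.1), ("core1", "Me", 6.3),
    ("core2", "H", 7.0), ("core2", "Me", 6.8),
    ("core3", "H", 5.5), ("core3", "Me", 5.6),
    ("core4", "H", 6.9), ("core4", "Me", 7.4),
]

by_core = {}
for core, r, pic50 in measurements:
    by_core.setdefault(core, {})[r] = pic50

# pIC50 change for the H -> Me transformation on each core that has both analogs
deltas = [groups["Me"] - groups["H"]
          for groups in by_core.values()
          if "H" in groups and "Me" in groups]

print(f"n = {len(deltas)}, mean delta = {mean(deltas):+.2f}, sd = {stdev(deltas):.2f}")
# The Abbott-style finding is that, across many cores and targets, such
# distributions come out roughly normal and centered near zero.
```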

To their credit, the authors of this new paper reference these others right up front. The answer to these earlier papers, most likely, is that when you average across all sorts of binding sites, you're going to see all sorts of effects. For this to work, you've got a far better chance of getting something useful if you're working inside the same target or assay. Here we get to the nuts and bolts:

The predictive method proposed, Matsy, relies on the hypothesis that a particular matched series tends to have a preferred activity order, for example, that not all six possible orders of [Br, Cl, F] are equally frequent. . .Although a rather straightforward idea, we have been unable to find any quantitative analysis of this question in the literature.

So they go on to provide one, with halogen substituents. There's not much to be found comparing pairs of halogen compounds head to head, but when you go to the longer series, you find that the order Br > Cl > F > H is by far the most common (and that appears to be just a good old grease effect). The next most common order just swaps the bromine and chlorine, but the third most common is the original order, in reverse. The other end of the distribution is interesting, too - for example, the least common order is Br > H > F > Cl, which is believable, since it doesn't make much sense along any property axis.
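
For the curious, here's a rough sketch of the order-counting that the Matsy hypothesis rests on, again with invented pIC50 values rather than anything from the paper.

```python
# A rough sketch (hypothetical data) of counting preferred activity orders in
# matched series, in the spirit of the analysis described above.
from collections import Counter

# Each entry: pIC50 values for the Br/Cl/F/H analogs sharing one core -- invented.
series = [
    {"Br": 7.2, "Cl": 7.0, "F": 6.5, "H": 6.1},
    {"Br": 6.8, "Cl": 6.9, "F": 6.2, "H": 6.0},
    {"Br": 8.1, "Cl": 7.7, "F": 7.0, "H": 6.4},
]

order_counts = Counter(
    tuple(sorted(s, key=s.get, reverse=True))  # substituents from most to least potent
    for s in series
)

for order, n in order_counts.most_common():
    print(" > ".join(order), ":", n)
# Over a real database, the claim is that Br > Cl > F > H comes out on top.
```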

They go on to do the same sorts of analyses for other matched series, and the question then becomes, if you have such a matched series in your own SAR, what does that order tell you about what to make next? The idea of "SAR transfer" has been explored, and older readers will remember the Topliss tree for picking aromatic substituents (do younger ones?)

The Matsy algorithm may be considered a formalism of aspects of how a medicinal chemist works in practice. Observing a particular trend, a chemist considers what to make next on the basis of chemical intuition, experience with related compounds or targets, and ease of synthesis. The structures suggested by Matsy preserve the core features of molecules while recommending small modifications, a process very much in line with the type of functional group replacement that is common in lead optimization projects. This is in contrast to recommendations from fingerprint-based similarity comparisons where the structural similarity is not always straightforward to rationalize and near-neighbors may look unnatural to a medicinal chemist.

And there's a key point: prediction and recommendation programs walk a fine line, between "There's no way I'm going out of my way to make that" and "I didn't need this program to tell me this". Sometimes there's hardly any space between those two territories at all. Where do this program's recommendations fall? As companies try this out in-house, some people will be finding out. . .

Comments (13) + TrackBacks (0) | Category: Drug Development | In Silico

March 11, 2014

Compassionate Use: An Especially Tough Case

Email This Entry

Posted by Derek

Update: Chimerix says this evening that they will make their drug available to the boy in question as part of a new 20-patient open-label trial, after discussions with the FDA. This might have been the best way out of this, if it gives the company a better regulatory path forward at the same time. My guess, though, is that the company's position was becoming impossible to maintain no matter what.

Many of you will have seen the stories of a dying 7-year-old whose parents are seeking compassionate use access to a drug being developed by Chimerix. It's hard reading for a parent, or for anyone.

But I can do no better than echo John Carroll's editorial here. What it comes down to, as far as I can see, is that a company this size will go bankrupt if it tries to deal with all these requests. So under the current system, we have a choice: let small companies try to discover drugs like this, without granting access, or wipe them out by making them grant it. Even for large companies, it's rough, as I wrote about here. I don't have a good solution.

Comments (58) + TrackBacks (0) | Category: Drug Development

February 19, 2014

Ligand Efficiency: A Response to Shultz

Email This Entry

Posted by Derek

I'd like to throw a few more logs on the ligand efficiency fire. Chuck Reynolds of J&J (author of several papers on the subject, as aficionados know) left a comment to an earlier post that I think needs some wider exposure. I've added links to the references:

An article by Shultz was highlighted earlier in this blog and is mentioned again in this post on a recent review of Ligand Efficiency. Shultz’s criticism of LE, and indeed drug discovery “metrics” in general hinges on: (1) a discussion about the psychology of various metrics on scientists' thinking, (2) an assertion that the original definition of ligand efficiency, DeltaG/HA, is somehow flawed mathematically, and (3) counter examples where large ligands have been successfully brought to the clinic.

I will abstain from addressing the first point. With regard to the second, the argument that there is some mathematical rule that precludes dividing a logarithmic quantity by an integer is wrong. LE is simply a ratio of potency per atom. The fact that a log is involved in computing DeltaG, pKi, etc. is immaterial. He makes a more credible point that LE itself is on average non-linear with respect to large differences in HA count. But this is hardly a new observation, since exactly this trend has been discussed in detail by previous published studies (here, here, here, and here). It is, of course, true that if one goes to very low numbers of heavy atoms the classical definition of LE gets large, but as a practical matter medicinal chemists have little interest in extremely small fragments, and the mathematical catastrophe he warns us against only occurs when the number of heavy atoms goes to zero (with a zero in the denominator it makes no difference if there is a log in the numerator). Why would HA=0 ever be relevant to a med. chem. program? In any case a figure essentially equivalent to the prominently featured Figure 1a in the Shultz manuscript appears in all of the four papers listed above. You just need to know they exist.

With regard to the third argument, yes of course there are examples of drugs that defy one or more of the common guidelines (e.g MW). This seems to be a general problem of the community taking metrics and somehow turning them into “rules.” They are just helpful, hopefully, guideposts to be used as the situation and an organization’s appetite for risk dictate. One can only throw the concept of ligand efficiency out the window completely if you disagree with the general principle that it is better to design ligands where the atoms all, as much as possible, contribute to that molecule being a drug (e.g. potency, solubility, transport, tox, etc.). The fact that there are multiple LE schemes in the literature is just a natural consequence of ongoing efforts to refine, improve, and better apply a concept that most would agree is fundamental to successful drug discovery.

Well, as far as the math goes, dividing a log by an integer is not any sort of invalid operation. I believe that [log(x)]/y is the same as saying log(x to the one over y). That is, log(16) divided by 2 is the same as the log of 16 to the one-half power, or log(4). They both come out to about 0.602. Taking a BEI calculation as a real chemistry example, a one-micromolar compound that weighs 250 would, by the usual definition, -log(Ki)/(MW/1000), have a BEI of 6/0.25, or 24. By the above rule, if you want to keep everything inside the log function, then for -log(0.000001) divided by 0.25, that one-micromolar figure should be raised to the fourth power, then you take the log of the result (and flip the sign). One-millionth to the fourth power is one times ten to the minus twenty-fourth, so that gives you. . .24. No problem.
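
If you'd rather let a machine do that arithmetic, here's a quick numerical check of the same equivalence (a sketch of the calculation above, nothing from the papers under discussion):

```python
# Numerical check: dividing a log by a number is the same as taking the log of
# the quantity raised to one over that number.
import math

ki_molar = 1e-6      # one-micromolar compound
mw_kda = 250 / 1000  # molecular weight in kDa, as used in BEI

bei_outside = -math.log10(ki_molar) / mw_kda        # 6 / 0.25
bei_inside = -math.log10(ki_molar ** (1 / mw_kda))  # log of (1e-6)^4, sign flipped

print(bei_outside, bei_inside)  # both come out to 24, up to floating-point rounding
```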

Shultz's objection that LE is not linear per heavy atom, though, is certainly valid, as Reynolds notes above as well. You have to realize this and bear it in mind while you're thinking about the topic. I think that one of the biggest problems with these metrics - and here's a point that both Reynolds and Shultz can agree on, I'll bet - is that they're tossed around too freely by people who would like to use them as a substitute for thought in the first place.

Comments (19) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico

February 11, 2014

Drug Discovery in India

Email This Entry

Posted by Derek

Molecular biologist Swapnika Ramu, a reader from India, sends along a worthwhile (and tough) question. She says that after her PhD (done in the US), her return to India has made her "less than optimistic" about the current state of drug discovery there. (Links in the quote below have been added by me, not her):

Firstly, there isn't much by way of new drug development in India. Secondly, as you have discussed many times on your blog. . .drug pricing in India remains highly contentious, especially with the recent patent disputes. Much of the public discourse descends into anti-big pharma rhetoric, and there is little to no reasoned debate about how such issues should be resolved. . .

I would like to hear your opinion on what model of drug discovery you think a developing nation like India should adopt, given the constraints of finance and a limited talent pool. Target-based drug discovery was the approach that my previous company adopted, and not surprisingly this turned out to be a very expensive strategy that ultimately offered very limited success. Clearly, India cannot keep depending upon Western pharma companies to do all the heavy lifting when it comes to developing new drugs, simply to produce generic versions for the Indian public. The fact that several patents are being challenged in Indian courts would make pharma skittish about the Indian market, which is even more of a concern if we do not have a strong drug discovery ecosystem of our own. Since there isn't a robust VC-based funding mechanism, what do you think would be a good approach to spurring innovative drug discovery in the Indian context?

Well, that is a hard one. My own opinion is that India's talent pool is limited only in comparison to Western Europe or the US - the country still has a lot more trained chemists and biologists than most other places. It's true, though, that the numbers don't tell the story very well. The best people from India are very, very good, but there are (from what I can see) a lot of poorly trained ones with degrees that seem (at least to me) worth very little. Still, you've got a really substantial number of real scientists, and I've no doubt that India could have several discovery-driven drug companies if the financing were easier to come by (and the IP situation a bit less murky - those two factors are surely related). Whether it would have those, or even should, is another question.

As has been clear for a while, the Big Pharma model has its problems. Several players are in danger of falling out of the ranks (Lilly, AstraZeneca), and I don't really see anyone rising up to replace them. The companies that have grown to that size in the last thirty years mostly seem to be biotech-driven (Amgen, Biogen, Genentech as was, etc.)

So is that the answer? Should Indian companies try to work more in that direction than in small molecule drugs? Problem is, the barriers to entry in biotech-derived drugs are higher, and that strategy perhaps plays less to the country's traditional strengths in chemistry. But in the same way that even less-developed countries are trying to skip over the landline era of telephones and go straight to wireless, maybe India should try skipping over small molecules. I do hate to write that, but it's not a completely crazy suggestion.

But biomolecule or small organic, to get a lot of small companies going in India (and you would need a lot, given the odds) you would need a VC culture, which isn't there yet. The alternative (and it's doubtless a real temptation for some officials) would be for the government to get involved to try to start something, but I would have very low hopes for that, especially given the well-known inefficiencies of the Indian bureaucracy.

Overall, I'm not sure if there's a way for most countries not to rely on foreign companies for most (or all) of the new drugs that come along. Honestly, the US is the only country in the world that might be able to get along with only its own home-discovered pharmacopeia, and it would still be a terrible strain to lose the European (and Japanese) discoveries. Even the likes of Japan, Switzerland, and Germany use, for the most part, drugs that were discovered outside their own countries.

And in the bigger picture, we might be looking at a good old Adam Smith-style case of comparative advantage. It sure isn't cheap to discover a new drug in Boston, San Francisco, Basel, etc., but compared to the expense of getting pharma research in Hyderabad up to speed, maybe it's not quite as bad as it looks. In the longer term, I think that India, China, and a few other countries will end up with more totally R&D-driven biomedical research companies of their own, because the opportunities are still coming along, discoveries are still being made, and there are entrepreneurial types who may well feel like taking their chances on them. But it could take a lot longer than some people would like, particularly researchers (like Swapnika Ramu) who are there right now. The best hope I can offer is that Indian entrepreneurs should keep their eyes out for technologies and markets that are new enough (and unexplored enough) so that they're competing on a more level playing field. Trying to build your own Pfizer is a bad idea - heck, the people who built Pfizer seem to be experiencing buyer's remorse themselves.

Comments (30) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

February 5, 2014

More Ligand Efficiency

Email This Entry

Posted by Derek

Here's a new review on ligand efficiency metrics in drug discovery. It references the papers that Michael Shultz has written on this topic, but (as far as I can tell) doesn't directly address his criticisms.
[Image: chart of mean LE and LLE values by target class]
There's a lot of data in this paper, and it's worth reading for its discussion of ligand binding thermodynamics, even if you're not sure what you think about ligand efficiency. But if you are thinking about it (and I'd especially recommend thinking about LipE/LLE), then here's a chart to give you an idea of where you stand. It shows the mean LE and LLE values for a large range of compounds against 329 targets, and may give you something to shoot for. The LLE of carbonic anhydrase inhibitors is hard to beat (that sulfonamide binding to zinc does it), but, then, you're probably not targeting carbonic anhydrase, anyway.
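
For reference, here's a minimal sketch of the two metrics as they're usually defined - LE as roughly 1.37 times pKi per heavy atom (kcal/mol per atom at around room temperature), and LipE/LLE as pKi minus cLogP. The example numbers are invented.

```python
# Minimal sketch of the two ligand-efficiency metrics discussed here, using the
# commonly quoted definitions; the example compound below is made up.
def ligand_efficiency(pki: float, heavy_atoms: int) -> float:
    # ~1.37 converts pKi into binding free energy in kcal/mol near 300 K
    return 1.37 * pki / heavy_atoms

def lle(pki: float, clogp: float) -> float:
    # lipophilic ligand efficiency (LipE): potency minus lipophilicity
    return pki - clogp

# e.g. a 10 nM compound (pKi = 8) with 30 heavy atoms and a cLogP of 3
print(ligand_efficiency(8.0, 30))  # ~0.37 kcal/mol per heavy atom
print(lle(8.0, 3.0))               # 5.0
```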

Comments (9) + TrackBacks (0) | Category: Drug Development

January 22, 2014

A New Book on Scaffold Hopping

Email This Entry

Posted by Derek

I've been sent a copy of Scaffold Hopping in Medicinal Chemistry, a new volume from Wiley, edited by Nathan Brown of the Institute of Cancer Research in London. There are eighteen chapters - five on identifying and characterizing scaffolds to start with, ten on various computational approaches to scaffold-hopping, and three case histories.

One of the things you realize quickly when you start thinking about (or reading about) that topic is that scaffolds are in the eye of the beholder, and that's what those first chapters are trying to come to grips with. Figuring out the "maximum common substructure" of a large group of analogs, for example, is not an easy problem at all, certainly not by eyeballing, and not through computational means, either (it's not solvable in polynomial time, if we want to get formal about it). One chemist will look at a pile of compounds and say "Oh yeah, the isoxazoles from Project XYZ", while someone who hasn't seen them before might say "Hmm, a bunch of amide heterocycles" or "A bunch of heterobiaryls" or what have you.
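
For those who want to see what that looks like in practice, here's a hedged sketch using RDKit's MCS implementation on a few toy analogs - the SMILES are invented, and real analog sets can make this search run for a very long time, hence the timeout.

```python
# Sketch of a maximum-common-substructure search, assuming RDKit is available.
from rdkit import Chem
from rdkit.Chem import rdFMCS

# Three toy analogs sharing an arylacetamide-like core (invented structures)
smiles = ["c1ccccc1CC(=O)NC", "c1ccccc1CC(=O)NCC", "c1ccncc1CC(=O)NC"]
mols = [Chem.MolFromSmiles(s) for s in smiles]

# The timeout matters: MCS is expensive, and large analog sets can blow up
result = rdFMCS.FindMCS(mols, timeout=10)
print(result.smartsString, result.numAtoms, result.numBonds)
```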

Another big question is how far you have to move in order to qualify as having hopped to another scaffold. My own preference is strictly empirical: if you've made a change that would be big enough to make most people draw a new Markush structure compared to your current series, you've scaffold-hopped. Ideally, you've kept the activity at your primary target, but changed it in the counterscreens or changed the ADMET properties. That's not to say that all these changes are going to be beneficial - people try this sort of thing all the time and wipe out the primary activity, or pick up even more clearance or hERG than the original series had. But those are the breaks.

And those are the main reasons that people do this sort of thing: to work out of a patent corner, to fix selectivity, or to get better properties. The appeal is that you might be able to address these without jettisoning everything you learned about the SAR of the previous compounds. If this is a topic of interest, especially from the computational angles, this book is certainly worth a look.

Comments (1) + TrackBacks (0) | Category: Drug Development | Patents and IP | Pharmacokinetics

January 13, 2014

Alnylam Makes It (As Does RNAi?)

Email This Entry

Posted by Derek

I've written about Alnylam, one of the flagship RNA interference companies, a few times around here. A couple of years ago, I was wondering if they'd win the race to come up with results that would keep the doors open.

Well, if you haven't been keeping up with the news in this space, they made it. Sanofi has just bought a large stake in the company, on the strength of the recent clinical results with patisiran, an RNAi therapy for the rare disease transthyretin-mediated amyloidosis (ATTR). Alnylam has a lot on their schedule these days, and the Sanofi deal will provide a big boost towards getting clinical data on all these ideas. Congratulations to them, and to RNAi in general, which has had a lengthy (and often overhyped) growth phase, and now might be starting to realize its promise.

Update: more on the story here.

Comments (7) + TrackBacks (0) | Category: Business and Markets | Drug Development

January 10, 2014

A New Look At Clinical Attrition

Email This Entry

Posted by Derek

Thanks to this new article in Nature Biotechnology, we have recent data on the failure rates in drug discovery. Unfortunately, this means that we have recent data on the failure rates in drug discovery, and the news is not good.

The study is the largest and most recent of its kind, examining success rates of 835 drug developers, including biotech companies as well as specialty and large pharmaceutical firms from 2003 to 2011. Success rates for over 7,300 independent drug development paths are analyzed by clinical phase, molecule type, disease area and lead versus nonlead indication status. . .Unlike many previous studies that reported clinical development success rates for large pharmaceutical companies, this study provides a benchmark for the broader drug development industry by including small public and private biotech companies and specialty pharmaceutical firms. The aim is to incorporate data from a wider range of clinical development organizations, as well as drug modalities and targets. . .

To illustrate the importance of using all indications to determine success rates, consider this scenario. An antibody is developed in four cancer indications, and all four indications transition successfully from phase 1 to phase 3, but three fail in phase 3 and only one succeeds in gaining FDA approval. Many prior studies reported this as 100% success, whereas our study differentiates the results as 25% success for all indications, and 100% success for the lead indication. Considering the cost and time spent on the three failed phase 3 indications, we believe including all 'development paths' more accurately reflects success and R&D productivity in drug development.

So what do they find? 10% of all indications in Phase I eventually make it through the FDA, which is in line with what most people think. Failure rates are in the thirty-percent range in Phase I, the sixty-percent range in Phase II, thirty to forty percent in Phase III, and in the teens at the NDA-to-approval stage. Broken out by drug class (antibody, peptide, small molecule, vaccine, etc.), the class with the most brutal attrition is (you guessed it) small molecules: slightly over 92% of those entering Phase I did not make it to approval.
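Those phase-by-phase figures multiply out to roughly the overall number, which is worth checking. Here's the arithmetic, using my own midpoint readings of the quoted ranges rather than the paper's exact transition rates:

```python
# Rough sanity check on the numbers above. The per-phase failure rates are
# approximate midpoints of the quoted ranges, not the paper's exact values.
phase_failure = {
    "Phase I":         0.35,   # "thirty-percent range"
    "Phase II":        0.65,   # "sixty-percent range"
    "Phase III":       0.35,   # "thirty to forty percent"
    "NDA -> approval": 0.15,   # "in the teens"
}

overall = 1.0
for phase, fail in phase_failure.items():
    overall *= (1 - fail)

# ~13% with these rough inputs - the same ballpark as the ~10% overall figure.
print(f"Phase I entry -> approval: {overall:.0%}")
```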

If you look at things by therapeutic area, oncology has the roughest row to hoe with over 93% failure. Its failure rate is still over 50% in Phase III, which is particularly hair-raising. Infectious disease, at the other end of the scale, is merely a bit over 83%. Phase II is where the different diseases really separate out by chance of success, which makes sense.

Overall, this is a somewhat gloomier picture than we had before, and the authors have reasonable explanations for it:

Factors contributing to lower success rates found in this study include the large number of small biotech companies represented in the data, more recent time frame (2003–2011) and higher regulatory hurdles for new drugs. Small biotech companies tend to develop riskier, less validated drug classes and targets, and are more likely to have less experienced development teams and fewer resources than large pharmaceutical corporations. The past nine-year period has been a time of increased clinical trial cost and complexity for all drug development sponsors, and this likely contributes to the lower success rates than previous periods. In addition, an increasing number of diseases have higher scientific and regulatory hurdles as the standard of care has improved over the past decade.

So there we have it - if anyone wants numbers, these are the numbers. The questions are still out there for all of us, though: how sustainable is a business with these kinds of failure rates? How feasible are the pricing strategies that can accommodate them? And what will break out of this system, anyway?

Comments (12) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History

January 9, 2014

Three Options With Five Billion to Spend

Email This Entry

Posted by Derek

Here's a discussion on a tricky question:

If you had $5 billion to invest, which is the current average R&D spend required to develop and launch just one new drug (without any guarantee of reimbursement and market success), how would you invest it:

1. Directly into your internal R&D pipeline, based on established approaches to drug discovery and development which are currently giving an ROI of only about 5% (and rapidly declining)?

2. Acquiring new product candidates externally, at full market price in an increasingly competitive environment?

3. Building a portfolio of say 50 independent projects to explore completely new and different approaches to drug discovery & development, each with a 2% probability of doubling your ROI indefinitely into the future?

Those alternatives are not exactly phrased in a neutral manner, but they're hard to be neutral about. My take on them is that option (1) is the one that's least likely to get you removed by your board of directors, because it spreads the blame around. Option (2) would be insane, if it were the only thing you did, but perfectly reasonable as a complement to either (1) or (3). And option (3), well. . .the problem with that one is finding 50 "completely new and different" approaches to drug R&D. I don't think that there are that many, honestly. And I also doubt (very strongly) that they all have as much as a 2% chance of succeeding.
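For what it's worth, the arithmetic behind option (3) is easy to run, and it shows how much is riding on that 2% figure being real. A quick sketch, treating the 50 projects as independent bets (my assumption, not part of the original question):

```python
# Option (3) by the numbers: 50 shots on goal, each with a claimed 2% chance.
# Treating the projects as independent (a generous assumption):
n, p = 50, 0.02
print(f"Expected successes: {n * p}")                    # 1.0
print(f"P(at least one):    {1 - (1 - p) ** n:.0%}")     # ~64%

# Halve the per-project odds and the picture dims quickly:
print(f"At p = 1%:          {1 - (1 - 0.01) ** n:.0%}")  # ~39%
```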

So if I were a CEO (God forbid), I would do enough (1) to buy me some cover, enough (2) to try to keep the Street happy, and spend whatever I had left on (3). I would not, of course, phrase my decisions in those terms. Sound good?

Comments (41) + TrackBacks (0) | Category: Drug Development

January 6, 2014

Positive Rules and Negative Ones

Email This Entry

Posted by Derek

I enjoyed this take on med-chem, and I think he's right:

There are a large set of "don't do this". When they predict failure, you usually shouldn't go there as these rules are moderately reliable.

There is an equally large set of "when you encounter this situation, try this" rules. Their positive predictive power is very very low.

Even the negative rules, the what-to-avoid category, aren't as hard and fast as one would like. There are some pretty unlikely-looking drugs out there (fosfomycin, nitroglycerine, suramin, and see that link above for more). These structures aren't telling you to go out and immediately start imitating them, but what they are telling you is that things you'd throw away can work.

But those rules are still right more often than the "Here's what to do when . . ." ones, as John Alan Tucker is saying. Every experienced medicinal chemist has a head full of these things - reduce basicity to get out of hERG problems, change the logP for blood-brain-barrier penetration, substitute next to a phenol to slow glucuronidation, switch tetrazole/COOH, make a prodrug, change the salt, and on and on. These work, sometimes, but you have to try them every time before moving on to anything more exotic.

And it's the not-always-right nature of the negative rules, coupled with the not-completely-useless nature of the positive ones, that gives everyone room to argue. Someone has always tried XYZ that worked, while someone else has always tried XYZ when it didn't do a thing. Pretty much any time you try to lay down the law about structures that should or shouldn't be made, you can find arguments on the other side. The rule-of-five type guidelines look rather weak when you think about all the exceptions to them, but they look pretty strong when you compare them to all the other rules that people have tried, and so on.

In the end, all we can do is narrow our options down from an impossible number to a highly improbable number. When (or if) we can do better, medicinal chemistry will change a great deal, but until then. . .

Comments (8) + TrackBacks (0) | Category: Drug Development | Life in the Drug Labs

December 3, 2013

Merck's Drug Development in The New Yorker

Email This Entry

Posted by Derek

The New Yorker has an article about Merck's discovery and development of suvorexant, their orexin inhibitor for insomnia. It also goes into the (not completely reassuring) history of zolpidem (known under the brand name of Ambien), which is the main (and generic) competitor for any new sleep drug.

The piece is pretty accurate about drug research, I have to say:

John Renger, the Merck neuroscientist, has a homemade, mocked-up advertisement for suvorexant pinned to the wall outside his ground-floor office, on a Merck campus in West Point, Pennsylvania. A woman in a darkened room looks unhappily at an alarm clock. It’s 4 a.m. The ad reads, “Restoring Balance.”

The shelves of Renger’s office are filled with small glass trophies. At Merck, these are handed out when chemicals in drug development hit various points on the path to market: they’re celebrations in the face of likely failure. Renger showed me one. Engraved “MK-4305 PCC 2006,” it commemorated the day, seven years ago, when a promising compound was honored with an MK code; it had been cleared for testing on humans. Two years later, MK-4305 became suvorexant. If suvorexant reaches pharmacies, it will have been renamed again—perhaps with three soothing syllables (Valium, Halcion, Ambien).

“We fail so often, even the milestones count for us,” Renger said, laughing. “Think of the number of people who work in the industry. How many get to develop a drug that goes all the way? Probably fewer than ten per cent.”

I well recall when my last company closed up shop - people in one wing were taking those things and lining them up out on a window shelf in the hallway, trying to see how far they could make them reach. Admittedly, they bulked out the lineup with Employee Recognition Awards and Extra Teamwork awards, but there were plenty of oddly shaped clear resin thingies out there, too.

The article also has a good short history of orexin drug development, and it happens just the way I remember it - first, a potential obesity therapy, then sleep disorders (after it was discovered that a strain of narcoleptic dogs lacked functional orexin receptors).

Mignot recently recalled a videoconference that he had with Merck scientists in 1999, a day or two before he published a paper on narcoleptic dogs. (He has never worked for Merck, but at that point he was contemplating a commercial partnership.) When he shared his results, it created an instant commotion, as if he’d “put a foot into an ants’ nest.” Not long afterward, Mignot and his team reported that narcoleptic humans lacked not orexin receptors, like dogs, but orexin itself. In narcoleptic humans, the cells that produce orexin have been destroyed, probably because of an autoimmune response.

Orexin seemed to be essential for fending off sleep, and this changed how one might think of sleep. We know why we eat, drink, and breathe—to keep the internal state of the body adjusted. But sleep is a scientific puzzle. It may enable next-day activity, but that doesn’t explain why rats deprived of sleep don’t just tire; they die, within a couple of weeks. Orexin seemed to turn notions of sleep and arousal upside down. If orexin turns on a light in the brain, then perhaps one could think of dark as the brain’s natural state. “What is sleep?” might be a less profitable question than “What is awake?”

There's also a lot of good coverage of the drug's passage through the FDA, particularly the hearing where the agency and Merck argued about the dose. (The FDA was inclined towards a lower 10-mg tablet, but Merck feared that this wouldn't be enough to be effective in enough patients, and had no desire to launch a drug that would get the reputation of not doing very much).

A few weeks later, the F.D.A. wrote to Merck. The letter encouraged the company to revise its application, making ten milligrams the drug’s starting dose. Merck could also include doses of fifteen and twenty milligrams, for people who tried the starting dose and found it unhelpful. This summer, Rick Derrickson designed a ten-milligram tablet: small, round, and green. Several hundred of these tablets now sit on shelves, in rooms set at various temperatures and humidity levels; the tablets are regularly inspected for signs of disintegration.

The F.D.A.’s decision left Merck facing an unusual challenge. In the Phase II trial, this dose of suvorexant had helped to turn off the orexin system in the brains of insomniacs, and it had extended sleep, but its impact didn’t register with users. It worked, but who would notice? Still, suvorexant had a good story—the brain was being targeted in a genuinely innovative way—and pharmaceutical companies are very skilled at selling stories.

Merck has told investors that it intends to seek approval for the new doses next year. I recently asked John Renger how everyday insomniacs would respond to ten milligrams of suvorexant. He responded, “This is a great question.”

There are, naturally, a few shots at the drug industry throughout the article. But it's not like our industry doesn't deserve a few now and then. Overall, it's a good writeup, I'd say, and gets across the later stages of drug development pretty well. The earlier stages are glossed over a bit, by comparison. If the New Yorker would like for me to tell them about those parts sometime, I'm game.

Comments (28) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History | The Central Nervous System

November 25, 2013

Lipinski's Anchor

Email This Entry

Posted by Derek

Michael Shultz of Novartis is back with more thoughts on how we assign numbers to drug candidates. Previously, he's written about the mathematical wrongness of many of the favorite metrics (such as ligand efficiency), in a paper that stirred up plenty of comment.

His new piece in ACS Medicinal Chemistry Letters is well worth a look, although I confess that (for me) it seemed to end just when it was getting started. But that's the limitation of a Viewpoint article for a subject with this much detail in it.

Shultz makes some very good points by referring to Daniel Kahneman's Thinking, Fast and Slow, a book that's come up several times around here as well (in both posts and comments). The key concept here is called "attribute substitution", which is the mental process by which we take a complex situation that we find mentally unworkable and substitute some other scheme that we can deal with. We then convince ourselves, often quickly, silently, and without realizing that we're doing it, that we now have a handle on the situation, just because we now have something in our heads that is more understandable. That "Ah, now I get it" feeling is often a sign that you're making headway on some tough subject, but you can also get it from understanding something that doesn't actually help you at all.

And I'd say that this is the take-home for this whole Viewpoint article: we medicinal chemists are fooling ourselves when we use ligand efficiency and similar metrics to try to understand what's going on with our drug candidates. Shultz goes on to discuss what he calls "Lipinski's Anchor". Anchoring is another concept out of Thinking, Fast and Slow, and here's the application:

The authors of the ‘rules of 5’ were keenly aware of their target audience (medicinal chemists) and “deliberately excluded equations and regression coefficients...at the expense of a loss of detail.” One of the greatest misinterpretations of this paper was that these alerts were for drug-likeness. The authors examined the World Drug Index (WDI) and applied several filters to identify 2245 drugs that had at least entered phase II clinical development. Applying a roughly 90% cutoff for property distribution, the authors identified four parameters (MW, logP, hydrogen bond donors, and hydrogen bond acceptors) that were hypothesized to influence solubility and permeability based on their difference from the remainder of the WDI. When judging probability, people rely on representativeness heuristics (a description that sounds highly plausible), while base-rate frequency is often ignored. When proposing oral drug-like properties, the Gaussian distribution of properties was believed, de facto, to represent the ability to achieve oral bioavailability. An anchoring effect is when a number is considered before estimating an unknown value and the original number significantly influences future estimates. When a simple, specific, and plausible MW of 500 was given as cutoff for oral drugs, this became the mother of all medicinal chemistry anchors.

But how valid are molecular weight cutoffs, anyway? That's a topic that's come up around here a few times, too, as well it should. Comparisons of the properties of orally available drugs across their various stages of development seem to suggest that such measurements converge on what we feel are the "right" values, but as Shultz points out, there could be other reasons for the data to look that way. And he makes this recommendation: "Since the average MW of approved oral drugs has been increasing while the failure rate due to PK/bioavailability has been decreasing, the hypothesis linking size and bioavailability should be reconsidered."
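As an aside, the four parameters themselves take only a few lines to compute for any structure, which is surely part of why the anchor is so sticky. A minimal RDKit sketch, run on an arbitrary example compound (atorvastatin, picked only because it's a familiar oral drug that sails past the MW cutoff):

```python
# Minimal rule-of-five check with RDKit: the standard cutoffs, applied to an
# arbitrary example structure purely for illustration.
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen, Lipinski

smiles = "CC(C)c1c(C(=O)Nc2ccccc2)c(-c2ccccc2)c(-c2ccc(F)cc2)n1CC[C@@H](O)C[C@@H](O)CC(=O)O"
mol = Chem.MolFromSmiles(smiles)

violations = sum([
    Descriptors.MolWt(mol) > 500,
    Crippen.MolLogP(mol) > 5,
    Lipinski.NumHDonors(mol) > 5,
    Lipinski.NumHAcceptors(mol) > 10,
])
# MW comes out around 559, so there's at least one violation right away.
print(f"MW: {Descriptors.MolWt(mol):.1f}, cLogP: {Crippen.MolLogP(mol):.1f}, "
      f"violations: {violations}")
```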

I particularly like another line, which could probably serve as the take-home message for the whole piece: "A clear understanding of probabilities in drug discovery is impossible due to the large number of known and unknown variables." I agree. And I think that's the root of the problem, because a lot of people are very, very uncomfortable with that kind of talk. The more business-school training they have, the less they like the sound of it. The feeling is that if we'd just use modern management techniques, it wouldn't have to be this way. Closer to the science end of things, the feeling is that if we'd just apply the right metrics to our work, it wouldn't have to be that way, either. Are both of these mindsets just examples of attribute substitution at work?

In the past, I've said many times that if I had to work from a million compounds that were within rule-of-five cutoffs versus a million that weren't, I'd go for the former every time. And I'm still not ready to ditch that bias, but I'm certainly ready to start running up the Jolly Roger about things like molecular weight. I still think that the clinical failure rate is higher for significantly greasier compounds (both because of PK issues and because of unexpected tox). But molecular weight might not be much of a proxy for the things we care about.

This post is long enough already, so I'll address Shultz's latest thoughts on ligand efficiency in another entry. For those who want more 50,000-foot viewpoints on these issues, though, these older posts will have plenty.

Comments (44) + TrackBacks (0) | Category: Drug Development | Drug Industry History

November 12, 2013

Leaving Antibiotics: An Interview

Email This Entry

Posted by Derek

Here's the (edited) transcript of an interview that Pfizer's VP of clinical research, Charles Knirsch, gave to PBS's Frontline program. The subject was the rise of resistant bacteria - which is a therapeutic area that Pfizer is no longer active in.

And that's the subject of the interview, or one of its main subjects. I get the impression that the interviewer would very much like to tell a story about how big companies walked away to let people die because they couldn't make enough money off of them:

. . .If you look at the course of a therapeutic to treat pneumonia, OK, … we make something, a macrolide, that does that. It’s now generic, and probably the whole course of therapy could cost $30 or $35. Even when it was a branded antibiotic, it may have been a little bit more than that.

So to cure pneumonia, which in some patient populations, particularly the elderly, has a high mortality, that’s what people are willing to pay for a therapeutic. I think that there are differences across different therapeutic areas, but for some reason, with antibacterials in particular, I think that society doesn’t realize the true value.

And did it become incumbent upon you at some point to make choices about which things would be in your portfolio based on this?

Based on our scientific capabilities and the prudent allocation of capital, we do make these choices across the whole portfolio, not just with antibacterials.

But talk to me about the decision that went into antibacterials. Pfizer made a decision in 2011 and announced the decision. Obviously you were making choices among priorities. You had to answer to your shareholders, as you’ve explained, and you shifted. What went into that decision?

I think that clearly our vaccine platforms are state of the art. Our leadership of the vaccine group are some of the best people in the industry or even across the industry or anywhere really. We believe that we have a higher degree of success in those candidates and programs that we are currently prosecuting.

So it’s a portfolio management decision, and if our vaccine for Clostridium difficile —

A bacteria.

Yeah, a bacteria which is a major cause of both morbidity and mortality of patients in hospitals, the type of thing that I would have been consulted on as an infectious disease physician, that in fact we will prevent that, and we’ll have a huge impact on human health in the hospitals.

But did that mean that you had to close down the antibiotic thing to focus on vaccines? Why couldn’t you do both?

Oh, good question. And it’s not a matter of closing down antibiotics. We were having limited success. We had had antibiotics that we would get pretty far along, and a toxicity would emerge either before we even went into human testing or actually in human testing that would lead to discontinuation of those programs. . .

It's that last part that I think is insufficiently appreciated. Several large companies have left the antibiotic field over the years, but several stayed (GlaxoSmithKline and AstraZeneca come to mind). But the ones who stayed were not exactly rewarded for their efforts. Antibacterial drug discovery, even if you pour a lot of money and effort into it, is very painful. And if you're hoping to introduce a new mechanism of action into the field, good luck. It's not impossible, but if it were easy to do, more small companies would have rushed in to do it.

Knirsch doesn't have an enviable task here, because the interviewer pushes him pretty hard. Falling back on the phrase "portfolio management decisions" doesn't help much, though:

In our discussion today, I get the sense that you have to make some very ruthless decisions about where to put the company’s capital, about where to invest, about where to put your emphasis. And there are whole areas where you don’t invest, and I guess the question we’re asking is, do you learn lessons about that? When you pulled out of Gram-negative research like that and shifted to vaccines, do you look back on that and say, “We learned something about this”?

These are not ruthless decisions. These are portfolio decisions about how we can serve medical need in the best way. …We want to stay in the business of providing new therapeutics for the future. Our investors require that of us, I think society wants a Pfizer to be doing what we do in 20 years. We make portfolio management decisions.

But you didn’t stay in this field, right? In Gram negatives you didn’t really stay in that field. You told me you shifted to a new approach.

We were not having scientific success, there was no clear regulatory pathway forward, and the return on any innovation did not appear to be something that would support that program going forward.

Introducing the word "ruthless" was a foul, and I'm glad the whistle was blown. I might have been tempted to ask the interviewer what it meant, ruthless, and see where that discussion went. But someone who gives in to temptations like that probably won't make VP at Pfizer.

Comments (51) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Infectious Diseases

November 11, 2013

The Past Twenty Years of Drug Development, Via the Literature

Email This Entry

Posted by Derek

Here's a new paper in PLOS ONE on drug development over the past 20 years. The authors are using a large database of patents and open literature publications, and trying to draw connections between those two, and between individual drug targets and the number of compounds that have been disclosed against them. Their explanation of patents and publications is a good one:

. . .We have been unable to find any formal description of the information flow between these two document types but it can be briefly described as follows. Drug discovery project teams typically apply for patents to claim and protect the chemical space around their lead series from which clinical development candidates may be chosen. This sets the minimum time between the generation of data and its disclosure to 18 months. In practice, this is usually extended, not only by the time necessary for collating the data and drafting the application but also where strategic choices may be made to file later in the development cycle to maximise the patent term. It is also common to file separate applications for each distinct chemical series the team is progressing.

While some drug discovery operations may eschew non-patent disclosure entirely, it is nevertheless common practice (and has business advantages) for project teams to submit papers to journals that include some of the same structures and data from their patents. While the criteria for inventorship are different than for authorship, there are typically team members in-common between the two types of attribution. Journal publications may or may not identify the lead compound by linking the structure to a code name, depending on how far this may have progressed as a clinical candidate.

The time lag can vary between submitting manuscripts immediately after filing, waiting until the application has published, deferring publication until a project has been discontinued, or the code name may never be publically resolvable to a structure. A recent comparison showed that 6% of compound structures exemplified in patents were also published in journal articles. While the patterns described above will be typical for pharmaceutical and biotechnology companies, the situation in the academic sector differs in a number of respects. Universities and research institutions are publishing increasing numbers of patents for bioactive compounds but their embargo times for publication and/or upload of screening results to open repositories, such as PubChem BioAssay, are generally shorter.

There are also a couple of important factors to keep in mind during the rest of the analysis. The authors point out that their database includes a substantial number of "compounds" which are not small, drug-like molecules (these are antibodies, proteins, large natural products, and so on). (In total, from 1991 to 2010 they have about one million compounds from journal articles and nearly three million from patents). And on the "target" side of the database, there are a significant number of counterscreens included which are not drug targets as such, so it might be better to call the whole thing a compound-to-protein mapping exercise. That said, what did they find?
[Figure: compounds per target, by year]
Here's the chart of compounds/target, by year. The peak and decline around 2005 are quite noticeable, and are corroborated by a search through the PCT patent database, which shows a plateau in pharmaceutical patents around this time (which has continued until now, by the way).

Looking at the target side of things, with those warnings above kept in mind, shows a different picture. The journal-publication side of things really has shown an increase over the last ten years, with an apparent inflection point in the early 2000s. What happened? I'd be very surprised if the answer didn't turn out to be genomics. If you want to see the most proximal effect of the human genomics frenzy from around that time, there you have it in the way that curve bends around 2001. Year-on-year, though (see the full paper for that chart), the targets mentioned in journal publications seem to have peaked in 2008 or so, and have either plateaued or actually started to come back down since then. (Update: fixed the second chart, which had been a duplicate of the first.)
[Figure: targets by publication source, by year]
The authors go on to track a number of individual targets by their mentions in patents and journals, and you can certainly see a lot of rise-and-fall stories over the last 20 years. Those actual years should not be over-interpreted, though, because of the delays (mentioned above) in patenting, and the even longer delays, in some cases, for journal publication from inside pharma organizations.

So what's going on with the apparent decline in output? The authors have some ideas, as do (I'm sure) readers of this site. Some of those ideas probably overlap pretty well:

While consideration of all possible causative factors is outside the scope of this work it could be speculated that the dominant causal effect on global output is mergers and acquisition activity (M&A) among pharmaceutical companies. The consequences of this include target portfolio consolidations and the combining of screening collections. This also reduces the number of large units competing in the production of medicinal chemistry IP. A second related factor is less scientists engaged in generating output. Support for the former is provided by the deduction that NME output is directly related to the number of companies and for the latter, a report that US pharmaceutical companies are estimated to have lost 300,000 jobs since 2000. There are other plausible contributory factors where finding corroborative data is difficult but nonetheless deserve comment. Firstly, patent filing and maintenance costs will have risen at approximately the same rate as compound numbers. Therefore part of the decrease could simply be due to companies, quasi-synchronously, reducing their applications to control costs. While this happened for novel sequence filings over the period of 1995–2000, we are neither aware of any data source against which this hypothesis could be explicitly tested for chemical patenting nor of any reports that might support it. Similarly, it is difficult to test the hypothesis of resource switching from “R” to “D” as a response to declining NCE approvals. Our data certainly infer the shrinking of “R” but there are no obvious metrics delineating a concomitant expansion of “D”. A third possible factor, a shift in the small-molecule:biologicals ratio in favour of the latter is supported by declared development portfolio changes in recent years but, here again, proving a causative coupling is difficult.

Causality is a real problem in big retrospectives like this. The authors, as you see, are appropriately cautious. (They also mention, as a good example, that a decline in compounds aimed at a particular target can be a signal of both success and of failure). But I'm glad that they've made the effort here. It looks like they're now analyzing the characteristics of the reported compounds with time and by target, and I look forward to seeing the results of that work.

Update: here's a lead author of the paper with more in a blog post.

Comments (22) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Patents and IP | The Scientific Literature

November 8, 2013

Exiting Two Therapeutic Areas

Email This Entry

Posted by Derek

So Bristol-Myers Squibb did indeed re-org itself yesterday, with the loss of about 75 jobs (and the shifting around of 300 more, which will probably result in some job losses as well, since not everyone is going to be able to do that). And they announced that they're getting out of two therapeutic areas, diabetes and neuroscience.

Those would be for very different reasons. Neuro is famously difficult and specialized. There are huge opportunities there, but they're opportunities because no one's been able to do much with them, for a lot of good reasons. Some of the biggest tar pits of drug discovery are to be found there (Alzheimer's, chronic pain), and even the diseases for which we have some treatments are near-total black boxes, mechanistically (schizophrenia, epilepsy and seizures). The animal models are mysterious and often misleading, and the clinical trials for the biggest diseases in this area are well-known to be expensive and tricky to run. You've got your work cut out for you over here.

Meanwhile, the field of diabetes and metabolic disorders is better served. For type I diabetes, the main thing you can do, short of finding ever more precise ways of dosing insulin, is to figure out how to restore islet function and cure it, and that's where all the effort seems to be going. For type II diabetes, which is unfortunately a large market and getting larger all the time, there are a number of therapeutic options. And while there's probably room for still more, the field is getting undeniably a bit crowded. Add that to the very stringent cardiovascular safety requirements, and you're looking at a therapeutic that's not as attractive for new drug development as it was ten or fifteen years ago.

So I can see why a company would get out of these two areas, although it's also easy to think that it's a shame for this to happen. Neuroscience is in a particularly tough spot. The combination of uncertainty and big opportunities would tend to draw a lot of risk-taking startups to the area, but the massive clinical trials needed make it nearly impossible for a small company to get serious traction. So what we've been seeing are startups that, even more than other areas, are focused on getting to the point that a larger company will step in to pay the bills. That's not an abnormal business model, but it has its hazards, chief among them the temptation to run what trials you can with a primary goal of getting shiny numbers (and shiny funding) rather than finding out whether the drug has a more solid chance of working. Semi-delusional Phase II trials are a problem throughout the industry, but more so here.

Comments (58) + TrackBacks (0) | Category: Business and Markets | Diabetes and Obesity | Drug Development | The Central Nervous System

October 31, 2013

Merck's Aftermath

Email This Entry

Posted by Derek

So the picture that's emerging of Merck's drug discovery business after this round of cuts is confused, but some general trends seem to be present. West Point appears to have been very severely affected, with a large number of chemists shown the door, and reports tend to agree that bench chemists were disproportionately hit. The remaining department would seem to be top-heavy with managers.

Top-heavy, that is, unless the idea is that they're all going to be telling cheaper folks overseas what to make. So is Merck going over to the Pfizer-style model? I regard this as unproven on this scale. In fact, I have an even lower opinion of it than that, but I'm sure that my distaste for the idea is affecting my perceptions, so I have to adjust accordingly. (Not everything you dislike is incorrect, just as not every person that's annoying is wrong).

But it's worth realizing that this is a very old idea. It's Taylorism, after Frederick Taylor, whose thinking was very influential in business circles about 100 years ago. (That Wikipedia article is written in a rather opinionated style, which the site has flagged, but it's a very interesting read and I recommend it). One of Taylor's themes was division of labor between the people thinking about the job and the people doing it, and a clearer statement of what Pfizer (and now Merck) are trying to do is hard to come by.

The problem is, we are not engaged in the kind of work that Taylorism and its descendants have been most successfully applied to. That, of course, is assembly line work, or any work flow that consists of defined, optimizable processes. R&D has proven. . .resistant to such thinking, to put it mildly. It's easy to convince yourself that drug discovery consists of and should be broken up into discrete assembly-line units, but somehow the cranks don't turn very smoothly when such systems are built. Bits and pieces of the process can be smoothed out and improved, but the whole thing still seems tangled, somehow.

In fact, if I can use an analogy from the post I put up earlier this morning, it reminds me of the onset of turbulence from a regime of laminar flow. If you model the kinds of work being done in some sort of hand-waving complexity space, up to a point, things run smoothly and go where they're supposed to. But as you start to add in key steps where the driving forces, the real engines of progress, are things that have to be invented afresh each time and are not well understood to start with, then you enter turbulence. The workflow becomes messy and unpredictable. If your Reynolds numbers are too high, no amount of polish and smoothing will stop you from seeing turbulent flow. If your industrial output depends too much on serendipity, on empiricism, and on mechanisms that are poorly understood, then no amount of managerial smoothing will make things predictable.

This, I think, is my biggest problem with the "Outsource the grunt work and leave the planning to the higher-ups" idea. It assumes that things work more smoothly than they really do in this business. I'm also reminded a bit of the Chilean "Project Cybersyn", which was to be a sort of control room where wise planners could direct the entire country's economy. One of the smaller reasons to regret the 1973 coup against Allende is that the chance was missed to watch this system bang up against reality. And I wonder what will happen as this latest drug discovery scheme runs into it, too.

Update: a Merck employee says in the comments that there hasn't been talk of more outsourcing. If that proves to be the case, then just apply the above comments to Pfizer.

Comments (98) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Life in the Drug Labs

October 30, 2013

More Magic Methyls, Please

Email This Entry

Posted by Derek

Medicinal chemists have long been familiar with the "magic methyl" effect. That's the dramatic change in affinity that can be seen (sometimes) with the addition of a single methyl group in just the right place. (Alliteration makes that the phrase of choice, but there are magic fluoros, magic nitrogens, and others as well). The methyl group is also particularly startling to a chemist, because it's seen as electronically neutral and devoid of polarity - it's just a bump on the side of the molecule, right?
[Figure: magic methyl examples]
Some bump. There's a very useful new paper in Angewandte Chemie that looks at this effect, and I have to salute the authors. They have a number of examples from the recent literature, and it couldn't have been easy to round them up. The methyl groups involved tend to change rotational barriers around particular bonds, alter the conformation of saturated rings, and/or add what is apparently just the right note of nonpolar interaction in some part of a binding site. It's important to remember just how small the energy changes need to be for things like this to happen.

The latter part of the paper summarizes the techniques for directly introducing methyl groups (as opposed to going back to the beginning of the sequence with a methylated starting material). And the authors call for more research into such reactions: wouldn't it be useful to be able to just staple a methyl group in next to the nitrogen of a piperidine, for example, rather than having to redo the whole synthesis? There are ways to methylate aryl rings, via metal-catalyzed couplings or lithium chemistry, but alkyl methylations are thin on the ground. (The ones that exist tend to rely on those same sorts of mechanisms).
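In the meantime, working out where the methyls could go is the easy, on-paper part. Here's a small RDKit sketch (my own illustration, with an arbitrary starting structure) that enumerates the single-methyl analogs of a molecule - actually making them is the part that needs the new chemistry:

```python
# A paper-chemistry illustration (my own, not from the Angewandte paper):
# enumerate every single-methyl analog of an arbitrary starting structure.
from rdkit import Chem

def methyl_scan(smiles):
    """Return SMILES for each analog made by adding one methyl at a C-H position."""
    mol = Chem.MolFromSmiles(smiles)
    analogs = set()
    for atom in mol.GetAtoms():
        if atom.GetSymbol() == "C" and atom.GetTotalNumHs() > 0:
            rw = Chem.RWMol(mol)
            new_c = rw.AddAtom(Chem.Atom(6))
            rw.AddBond(atom.GetIdx(), new_c, Chem.BondType.SINGLE)
            try:
                Chem.SanitizeMol(rw)
                analogs.add(Chem.MolToSmiles(rw))
            except Exception:
                pass  # skip anything that ends up with a valence problem
    return sorted(analogs)

# An arbitrary piperidine amide, purely for illustration:
for smi in methyl_scan("O=C(NC1CCNCC1)c1ccc(F)cc1"):
    print(smi)
```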

Methyl-group reagents of the same sort that have been found for trifluoromethyl groups in recent years would be welcome - the sorts of things you could expose a compound to and have it just methylate the most electrophilic or nucleophilic site(s) to see what you'd get. This is part of a general need for alkyl C-H activation chemistries, which people have been working on for quite a while now. It's one of the great undersolved problems in synthetic chemistry, and I hope that progress gets made. Otherwise I might have to break into verse again, and no one wants that.

Comments (17) + TrackBacks (0) | Category: Chemical News | Drug Development

October 22, 2013

Size Doesn't Matter. Does Anything?

Email This Entry

Posted by Derek

There's a new paper in Nature Reviews Drug Discovery that tries to find out what factors about a company influence its research productivity. This is a worthy goal, but one that's absolutely mined with problems in gathering and interpreting the data. The biggest one is the high failure rate that afflicts everyone in the clinic: you could have a company that generates a lot of solid ideas, turns out good molecules, gets them into humans with alacrity, and still ends up looking like a failure because of mechanistic problems or unexpected toxicity. You can shorten those odds, for sure (or lengthen them!), but you can never really get away from that problem, or not yet.

The authors have a good data set to work from, though:

It is commonly thought that small companies have higher research and development (R&D) productivity compared with larger companies because they are less bureaucratic and more entrepreneurial. Indeed, some analysts have even proposed that large companies exit research altogether. The problem with this argument is that it has little empirical foundation. Several high-quality analyses comparing the track record of smaller biotechnology companies with established pharmaceutical companies have concluded that company size is not an indicator of success in terms of R&D productivity [1,2].

In the analysis presented here, we at The Boston Consulting Group examined 842 molecules over the past decade from 419 companies, and again found no correlation between company size and the likelihood of R&D success. But if size does not matter, what does?

Those 842 molecules cover the period 2002-2011, and of them, 205 made it to regulatory approval. (Side note: does this mean that the historical 90% failure rate no longer applies? Update: turns out that's the number of compounds that made it through Phase I, which sounds more like it). There were plenty of factors that seemed to have no discernible influence on success - company size, as mentioned, public versus private financing, most therapeutic area choices, market size for the proposed drug or indication, location in the US, Europe, or Asia, and so on. In all these cases, the size of the error bars leaves one unable to reject the null hypothesis (variation due to chance alone).

What factors do look like more than chance? The far ends of the therapeutic area choice, for one (CNS versus infectious disease, and these two only). But all the other indicators are a bit fuzzier. Publications (and patents) per R&D dollar spent are a positive sign, as is the experience (time-in-office) of the R&D heads. A higher termination rate in preclinical and Phase I correlated with eventual success, although I wonder if that's also a partial proxy for desperation, companies with no other option but to push on and hope for the best (see below for more on this point). A bit weirdly, frequent mention of ROI and the phrase "decision making" actually correlated positively, too.

The authors interpret most or all of these as proxy measurements of "scientific acumen and good judgement", which is a bit problematic. It's very easy to fall into circular reasoning that way - you can tell that the companies that succeeded had good judgement, because their drugs succeeded, because of their good judgement. But I can see the point, which is what most of us already knew: that experience and intelligence are necessary in this business, but not quite sufficient. And they have some good points to make about something that would probably help:

A major obstacle that we see to achieving greater R&D productivity is the likelihood that many low-viability compounds are knowingly being progressed to advanced phases of development. We estimate that 90% of industry R&D expenditures now go into molecules that never reach the market. In this context, making the right decision on what to progress to late-stage clinical trials is paramount in driving productivity. Indeed, researchers from Pfizer recently published a powerful analysis showing that two-thirds of the company's Phase I assets that were progressed could have been predicted to be likely failures on the basis of available data [3]. We have seen similar data privately as part of our work with many other companies.

Why are so many such molecules being advanced across the industry? Here, a behavioural perspective could provide insight. There is a strong bias in most R&D organizations to engage in what we call 'progression-seeking' behaviour. Although it is common knowledge that most R&D projects will fail, when we talk to R&D teams in industry, most state that their asset is going to be one of the successes. Positive data tends to go unquestioned, whereas negative data is parsed, re-analysed, and, in many cases, explained away. Anecdotes of successful molecules saved from oblivion often feed this dynamic. Moreover, because it is uncertain which assets will fail, the temptation is to continue working on them. This reaction is not surprising when one considers that personal success for team members is often tied closely to project progression: it can affect job security, influence within the organization and the ability to pursue one's passion. In this organizational context, progression-seeking behaviour is entirely rational.

Indeed it is. The sunk-cost fallacy should also be added in there, the "We've come so far, we can't quit now" thinking that has (in retrospect) led so many people into the tar pit. But they're right, many places end up being built to check the boxes and make the targets, not necessarily to get drugs out the door. If your organization's incentives are misaligned, the result is similar to trying to drive a nail by hitting it from an angle instead of straight on: all that force, being used to mess things up.

Comments (30) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

October 8, 2013

Forecasting Drug Sales: Har, Har.

Email This Entry

Posted by Derek

You're running a drug company, and you have a new product coming out. How much of it do you expect to sell? That sounds like a simple question to answer, but it's anything but, as a new paper in Nature Reviews Drug Discovery (from people at McKinsey, no less) makes painfully clear.

Given the importance of forecasting, we set out to investigate three questions. First, how good have drug forecasts been historically? And, more specifically, how good have estimates from sell-side analysts been at predicting the future? Second, what type of error has typically been implicated in the misses? Third, is there any type of drug that has been historically more easy or difficult to forecast?

The answer to the first question is "Not very good at all". They looked at drug launches from 2002-2011, a period which furnished hundreds of sales forecasts to work from. Over 60% of the consensus forecasts were wrong by 40% or more. Stop and think about that for a minute - and if you're in the industry, stop and think about the times you've seen these predictions made inside your own company. Remember how polished the PowerPoint slides were? How high-up the person was presenting them? How confident their voice was as they showed the numbers? All for nothing. If these figures had honest error bars on them, they'd stretch up and down the height of any useful chart. I'm reminded of what Fred Schwed had to say in Where Are the Customers' Yachts about stock market forecasts: "Concerning these predictions, we are about to ask: 1. Are they pretty good? 2. Are they slightly good? 3. Are they any damn good at all? 4. How do they compare with tomorrow's weather prediction you read in the paper? 5. How do they compare with the tipster horse race services?".
[Figure: distribution of forecast errors relative to actual sales]
As you can see from the figure, the distribution of errors is quite funny-looking. If you start from the left-hand lowball side, you think you're going to be looking at a rough Gaussian curve, but then wham - it drops off, until you get to the wildly overoptimistic bin, which shows you that there's a terribly long tail stretching into the we're-gonna-be-rich category. This chart says a lot about human psychology and our approach to risk, and nothing it says is very complimentary. In case you're wondering, CNS and cardiovascular drugs tended to be overestimated compared to the average, and oncology drugs tended to be underestimated. That latter group is likely due to an underestimation of the possibility of new indications being approved.

Now, those numbers are all derived from forecasts in the year before the drugs launched. But surely things get better once the products got out into the market? Well, there was a trend for lower errors, certainly, but the forecasts were still (for example) off by 40% five years after the launch. The authors also say that forecasts for later drugs in a particular class were no more accurate than the ones for the first-in-class compounds. All of this really, really makes a person want to ask if all that time and effort that goes into this process is doing anyone any good at all.

Writing at Forbes, David Shaywitz (who also draws some lessons here from Taleb's Antifragile) doesn't seem to think that it is, but he doesn't think that anyone is going to want to hear about it:

Unfortunately, the new McKinsey report is unlikely to matter very much. Company forecasters will say their own data are better, and will point to examples of forecasts that happen to get it right. They will emphasize the elaborate methodologies they use, and the powerful algorithms they employ (all real examples from my time in the industry). Consultants, too, will continue to insist they can do it better.

And indeed, one of the first comments that showed up to his piece was from someone who appears to be doing just that. In fact, rather than show any shame about these numbers, plenty of people will see them as a marketing opportunity. But why should anyone believe the pitch? I think that this conclusion from the NRDD paper is a lot closer to reality:

Beware the wisdom of the crowd. The 'consensus' consists of well-compensated, focused professionals who have many years of experience, and we have shown that the consensus is often wrong. There should be no comfort in having one's own forecast being close to the consensus, particularly when millions or billions of dollars are on the line in an investment decision or acquisition situation.

The folks at Popular Science should take note of this. McKinsey Consulting has apparently joined the "War on Expertise"!

Comments (32) + TrackBacks (0) | Category: Business and Markets | Drug Development

September 20, 2013

Prosensa: One Duchenne Therapy Down

Email This Entry

Posted by Derek

In the post here the other day about Duchenne Muscular Dystrophy (DMD) I mentioned two other companies that are looking at transcriptional approaches: Prosensa (with GSK) and Sarepta. They've got antisense-driven exon-skipping mechanisms, rather than PTC's direct read-through one.

Well, Sarepta still does, anyway. Prosensa and GSK just announced clinical data on their agent, drisapersen, and it appears to have missed completely. The primary endpoint was a pretty direct one, total distance walked over six minutes, and they didn't make statistical significance versus placebo. This was over 48 weeks of treatment, and none of the secondary measures showed any signs either, from what I can see. I can't think of any way to spin this in any positive direction at all.

So drisapersen is presumably done. What does this say about Sarepta's candidate, eteplirsen? On the one hand, their major competitor has just been removed from the board. But on the other, the complete failure of such a closely related therapy can't help but raise doubts. I don't know enough about the differences between the two (PK?) to speculate, but it'll be interesting to see if Sarepta's stock zips up today, sells off, or (perhaps) fights to a draw between two groups of investors who are taking this news in very different ways.

That's the delirious fun of biotech investing. And that's just for the shareholders - you can imagine what it feels like to bet your whole company on this sort of thing. . .

Comments (4) + TrackBacks (0) | Category: Business and Markets | Clinical Trials | Drug Development

September 18, 2013

The Arguing Over PTC124 and Duchenne Muscular Dystrophy

Email This Entry

Posted by Derek

Does it matter how a drug works, if it works? PTC Therapeutics seems bent on giving everyone an answer to that question, because there sure seem to be a lot of questions about how ataluren (PTC124), their Duchenne Muscular Dystrophy (DMD) therapy, acts. This article at Nature Biotechnology does an excellent job explaining the details.

Premature "stop" codons in the DNA of DMD patients, particularly in the dystrophin gene, are widely thought to be one of the underlying problems in the disease. (The same mechanism is believed to operate in many other genetic-mutation-driven conditions as well. Ataluren is supposed to promote "read-through" of these to allow the needed protein to be produced anyway. That's not a crazy idea at all - there's been a lot of thought about ways to do that, and several aminoglycoside antibiotics have been shown to work through that mechanism. Of that class, gentamicin has been given several tries in the clinic, to ambiguous effect so far.

So screening for a better enhancer of stop codon read-through seems like it's worth a shot for a disease with so few therapeutic options. PTC did this using a firefly luciferase (Fluc) reporter assay. As with any assay, there are plenty of opportunities to get false positives and false negatives. Firefly luciferase, as a readout, suffers from instability under some conditions. And if its signal is going to wink out on its own, then a compound that stabilizes it will look like a hit in your assay system. Unfortunately, there's no particular market in humans for a compound that just stabilizes firefly luciferase.

That's where the argument is with ataluren. Papers have appeared from a team at the NIH detailing trouble with the FLuc readout. That second paper (open access) goes into great detail about the mechanism, and it's an interesting one. FLuc apparently catalyzes a reaction between PTC124 and ATP, to give a new mixed anhydride adduct that is a powerful inhibitor of the enzyme. The enzyme's normal mechanism involves a reaction between luciferin and ATP, and since luciferin actually looks like something you'd get in a discount small-molecule screening collection, you have to be alert to something like this happening. The inhibitor-FLuc complex keeps the enzyme from degrading, but the new PTC124-derived inhibitor itself is degraded by Coenzyme A - which is present in the assay mixture, too. The end result is more luciferase signal than you expect versus the controls, which looks like a hit from your reporter gene system - but isn't. PTC's scientists have replied to some of these criticisms here.

Just to add more logs to the fire, other groups have reported that PTC124 seems to be effective in restoring read-through for similar nonsense mutations in other genes entirely. But now there's another new paper, this one from a different group at Dundee, claiming that ataluren fails to work through its putative mechanism under a variety of conditions, which would seem to call these results into question as well. Gentamicin works for them, but not PTC124. Here's the new paper's take-away:

In 2007 a drug was developed called PTC124 (latterly known as Ataluren), which was reported to help the ribosome skip over the premature stop, restore production of functional protein, and thereby potentially treat these genetic diseases. In 2009, however, questions were raised about the initial discovery of this drug; PTC124 was shown to interfere with the assay used in its discovery in a way that might be mistaken for genuine activity. As doubts regarding PTC124's efficacy remain unresolved, here we conducted a thorough and systematic investigation of the proposed mechanism of action of PTC124 in a wide array of cell-based assays. We found no evidence of such translational read-through activity for PTC124, suggesting that its development may indeed have been a consequence of the choice of assay used in the drug discovery process.

Now this is a mess, and it's complicated still more by the not-so-impressive performance of PTC124 in the clinic. Here's the Nature Biotechnology article's summary:

In 2008, PTC secured an upfront payment of $100 million from Genzyme (now part of Paris-based Sanofi) in return for rights to the product outside the US and Canada. But the deal was terminated following lackluster data from a phase 2b trial in DMD. Subsequently, a phase 3 trial in cystic fibrosis also failed to reach statistical significance. Because the drug showed signs of efficacy in each indication, however, PTC pressed ahead. A phase 3 trial in DMD is now underway, and a second phase 3 trial in cystic fibrosis will commence shortly.

It should be noted that the read-through drug space has other players in it as well. Prosensa/GSK and Sarepta are in the clinic with competing antisense oligonucleotides targeting a particular exon/mutation combination, although this would probably take them into other subpopulations of DMD patients than PTC is looking to treat.

If they were to see real efficacy, PTC could have the last laugh here. To get back to the first paragraph of this post, if a compound works, well, the big argument has just been won. The company has in vivo data to show that some gene function is being restored, as well they should (you don't advance a compound to the clinic just on the basis of in vitro assay numbers, no matter how they look). It could be that the compound is a false positive in the original assay but manages to work through some other mechanism, although no one knows what that might be.

But as you can see, opinion is very much divided about whether PTC124 works at all in the real clinical world. If it doesn't, then the various groups detailing trouble with the early assays will have a good case that this compound never should have gotten as far as it did.

Comments (26) + TrackBacks (0) | Category: Biological News | Business and Markets | Drug Assays | Drug Development

September 11, 2013

Merck Does Something. Or Not. Maybe Something Else Instead.

Email This Entry

Posted by Derek

There's some Merck news today, via FiercePharma. First off, their R&D head Roger Perlmutter sat down with some of the most prominent analysts for a chat about the company's direction - and they came out with two completely different stories. Big changes? Minor ones? I wonder if people were taking away what they wanted to hear to confirm what they'd already decided Merck should be doing. Seamus Fernandez, for example, apparently came away saying that he thought a major R&D restructuring was inevitable, but that's what he thought before he sat down. This sort of thing is worth keeping in mind when you hear some Wall St. types (particularly on the "sell side") going on authoritatively about what's happening inside a given company.

The other news is that Merck is handing off one of their oncology programs (the WEE1 kinase inhibitor MK-1775) to AstraZeneca. If I were a mean person given to saying unkind things, I'd say that this drug is at least now going to get a lot more money spent on it, because that's what AZ has been famous for. But I'll stick with what John Carroll had to say on Twitter: "So if $MRK thought 1775 was any damn good, would they outlicense it to $AZN?"

Comments (19) + TrackBacks (0) | Category: Business and Markets | Cancer | Drug Development

September 3, 2013

A Drug Delivery Method You Haven't Thought Of

Email This Entry

Posted by Derek

Word came last week that Google Ventures is funding a small outfit called Rani Therapeutics. They're trying to solve a small problem that's caught the attention of a few people now and then: making large protein drugs orally available.

Well, Google has a reputation for bankrolling some long-shot ideas, and any attempt to make proteins available this way is, by definition, a long shot. On Twitter, Andy Biotech sent around a link to this patent, which seems to have some of Rani's approach in it. If so, it's a surprising mixture of high and low tech. The drugs would be administered in a capsule, carefully formulated both chemically and physically. And when the capsule gets down into the small intestine, according to the patent, a spring-loaded mechanism is signaled to release and tiny needles pop out of its sides, delivering the protein cargo through the intestinal wall. To me, this looks less like an oral dosage than an i.v. that you swallow.

But getting that to work is probably easier than figuring out a way to make proteins survive in the gut. One can think of numerous ways that this could go wrong, but in drug development, there are always numerous ways that things could go wrong. The proof will be in the clinic, and I'm glad that someone is willing to pay to find out if this works.

Comments (55) + TrackBacks (0) | Category: Drug Development

August 27, 2013

Promise That Didn't Pan Out

Email This Entry

Posted by Derek

Luke Timmerman has a good piece on a drug (Bexxar) that looked useful, had a lot of time, effort, and money spent on it, but still never made any real headway. GSK has announced that they're ceasing production, and if there are headlines about that, I've missed them. Apparently there were only a few dozen people in the entire US who got the drug at all last year.

When you look at the whole story, there’s no single reason for failure. There were regulatory delays, manufacturing snafus, strong competition, reimbursement challenges, and issues around physician referral patterns.

If this story sounds familiar, it should—there are some striking similarities to what happened more recently with Dendreon’s sipuleucel-T (Provenge). If there’s a lesson here, it’s that cool science and hard medical evidence aren’t enough. When companies fail to understand the markets they are entering, the results can be quite ugly, especially as insurers tighten the screws on reimbursement. If more companies fail to pay proper attention to these issues, you can count on more promising drugs like Bexxar ending up on the industry scrap heap.

Comments (33) + TrackBacks (0) | Category: Business and Markets | Drug Development

August 22, 2013

Too Many Metrics

Email This Entry

Posted by Derek

Here's a new paper from Michael Shultz of Novartis, who is trying to cut through the mass of metrics for new compounds. I cannot resist quoting his opening paragraph, but I do not have a spare two hours to add all the links:

Approximately 15 years ago Lipinski et al. published their seminal work linking molecular properties with oral absorption.1 Since this ‘Big Bang’ of physical property analysis, the universe of parameters, rules and optimization metrics has been expanding at an ever increasing rate (Figure 1).2 Relationships with molecular weight (MW), lipophilicity,3 and 4 ionization state,5 pKa, molecular volume and total polar surface area have been examined.6 Aromatic rings,7 and 8 oxygen atoms, nitrogen atoms, sp3 carbon atoms,9 chiral atoms,9 non-hydrogen atoms, aromatic versus non-hydrogen atoms,10 aromatic atoms minus sp3 carbon atoms,6 and 11 hydrogen bond donors, hydrogen bond acceptors and rotatable bonds12 have been counted and correlated.13 In addition to the rules of five came the rules of 4/40014 and 3/75.15 Medicinal chemists can choose from composite parameters (or efficiency indices) such as ligand efficiency (LE),16 group efficiency (GE), lipophilic efficiency/lipophilic ligand efficiency (LipE17/LLE),18 ligand lipophilicity index (LLEAT),19 ligand efficiency dependent lipophilicity (LELP), fit quality scaled ligand efficiency (LE_scale),20 percentage efficiency index (PEI),21 size independent ligand efficiency (SILE), binding efficiency index (BEI) or surface binding efficiency index (SEI)22 and composite parameters are even now being used in combination.23 Efficiency of binding kinetics has recently been introduced.24 A new trend of anthropomorphizing molecular optimization has occurred as molecular ‘addictions’ and ‘obesity’ have been identified.25 To help medicinal chemists there are guideposts,21 rules of thumb,14 and 26 a property forecast index,27 graphical representations of properties28 such as efficiency maps, atlases,29 ChemGPS,30 traffic lights,31 radar plots,32 Craig plots,33 flower plots,34 egg plots,35 time series plots,36 oral bioavailability graphs,37 face diagrams,28 spider diagrams,38 the golden triangle39 and the golden ratio.40

He must have enjoyed writing that one, if not tracking down all the references. This paper is valuable right from the start just for having gathered all this into one place! But as you read on, you find that he's not too happy with many of these metrics - and since there's no way that they can all be equally correct, or equally useful, he sets himself the task of figuring out which ones we can discard. The last reference in the quoted section below is to the famous "Can a biologist fix a radio?" paper:

While individual composite parameters have been developed to address specific relationships between properties and structural features (e.g. solubility and aromatic ring count) the benefit may be outweighed by the contradictions that arise from utilizing several indices at once or the complexity of adopting and abandoning various metrics depending on the stage of molecular optimization. The average medicinal chemist can be overwhelmed by the ‘analysis fatigue’ that this plethora of new and contradictory tools, rules and visualizations now provide, especially when combined with the increasing number of safety, off-target, physicochemical property and ADME data acquired during optimization efforts. Decision making is impeded when evaluating information that is wrong or excessive and thus should be limited to the absolute minimum and most relevant available.

As Lazebnik described, sometimes the more facts we learn, the less we understand.

And he discards quite a few. All the equations that involve taking the log of potency and dividing by the heavy atom count (HAC), etc., are playing rather loose with the math:

To be valid, LE must remain constant for each heavy atom that changes potency 10-fold. This is not the case as a 15 HAC compound with a pIC50 of 3 does not have the same LE as a 16 HAC compound with a pIC50 of 4 (ΔpIC50 = 1, ΔHAC = 1, ΔLE = 0.07). A 10-fold change in potency per heavy atom does not result in constant LE as defined by Hopkins, nor will it result in constant SILE, FQ or LLEAT values. These metrics do not mathematically normalize size or potency because they violate the quotient rule of logarithms. To obey this rule and be a valid mathematical function, HAC would be subtracted from pIC50 and rendered independent of size and reference potency.

Note that he's not recommending that last operation as a guideline, either. Another conceptual problem with plain heavy atom counting is that it treats all atoms the same, but that's clearly an oversimplification. But dividing by some form of molecular weight is an oversimplification, too: a nitrogen differs from an oxygen by a lot more than that 1 mass unit. (This topic came up here a little while back). But oversimplified or not - heck, mathematically valid or not - the question is whether these things help out enough when used as metrics in the real world. And Shultz would argue that they don't. Keeping LE the same (or even raising it) is supposed to be the sign of a successful optimization, but in practice, LE usually degrades. His take on this is that "Since lower ligand efficiency is indicative of both higher and lower probabilities of success (two mutually exclusive states) LE can be invalidated by not correlating with successful optimization."
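To make that arithmetic concrete, here's a minimal sketch (mine, not from the Shultz paper) of the example quoted above, using the conventional Hopkins definition LE = 1.37 × pIC50 / HAC; the 1.37 factor and the two hypothetical compounds are just the standard illustration values:

```python
def ligand_efficiency(pic50: float, hac: int) -> float:
    """Hopkins-style ligand efficiency: ~1.37 * pIC50 / heavy atom count (kcal/mol per atom)."""
    return 1.37 * pic50 / hac

# Shultz's point: one extra heavy atom that buys a full log of potency
# does not leave LE constant.
le_small = ligand_efficiency(pic50=3.0, hac=15)   # ~0.27
le_big = ligand_efficiency(pic50=4.0, hac=16)     # ~0.34
print(f"delta LE = {le_big - le_small:.2f}")      # prints 0.07, matching the quote
```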

I think that's too much of a leap - because successful drug programs have had their LE go down during the process, that doesn't mean that this was a necessary condition, or that they should have been aiming for that. Perhaps things would have been even better if they hadn't gone down (although I realize that arguing from things that didn't happen doesn't have much logical force). Try looking at it this way: a large number of successful drug programs have had someone high up in management trying to kill them along the way, as have (obviously) most of the unsuccessful ones. That would mean that upper management decisions to kill a program are also indicative of both higher and lower probabilities of success, and can thus be invalidated, too. Actually, he might be on to something there.

Shultz, though, finds that he's not able to invalidate LipE (or LLE), variously known as ligand-lipophilicity efficiency or lipophilic ligand efficiency. That's pIC50 - logP, which at least follows the way that logarithms of quotients are supposed to work. And it has also been shown to improve during known drug optimization campaigns. The paper has a thought experiment, on some hypothetical compounds, as well as some data from a tankyrase inhibitor series, that seem to show that LipE behaves more rationally than other metrics (which sometimes start pointing in opposite directions).

I found the chart below to be quite interesting. It uses the cLogP data from Paul Leeson and Brian Springthorpe's original LLE paper (linked in the above paragraph) to show what change in potency you would expect when you change a hydrogen in your molecule to one of the groups shown if you're going to maintain a constant LipE value. So while hydrophobic groups tend to make things more potent, this puts a number on it. A t-butyl, for example, should make things about 50-fold more potent if it's going to pull its weight as a ball of grease. (Note that we're not talking about effects on PK and tox here, just sheer potency - if you play this game, though, you'd better be prepared to keep an eye on things downstream).
[Chart: expected change in potency for common substituents if LipE is held constant]
On the other end of the scale, a methoxy should, in theory, cut your potency roughly in half. If it doesn't, that's a good sign. A morpholine should be three or four times worse, and if it isn't, then it's found something at least marginally useful to do in your compound's binding site. What we're measuring here is the partitioning between your compound wanting to be in solution, and wanting to be in the binding site. More specifically, since logP is in the equation, we're looking at the difference in the partitioning of your compound between octanol and water, versus its partitioning between the target protein and water. I think we can all agree that we'd rather have compounds that bind because they like something about the active site, rather than just fleeing the solution phase.
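For what it's worth, the bookkeeping behind that chart is straightforward: since LipE = pIC50 - cLogP, holding LipE constant means a substituent's expected potency change is just ten raised to its cLogP increment. Here's a quick sketch; the increments below are rough textbook pi values I've plugged in for illustration, not the values from the paper's own table:

```python
def fold_change_at_constant_lipe(delta_clogp: float) -> float:
    """If LipE = pIC50 - cLogP is held constant, a group that adds delta_clogp
    to cLogP must add the same amount to pIC50: a 10**delta_clogp fold change."""
    return 10 ** delta_clogp

# Rough cLogP increments (approximate pi values, assumed for illustration)
for group, d_clogp in [("t-Bu", 1.7), ("Cl", 0.7), ("OMe", -0.3), ("morpholine", -0.6)]:
    print(f"{group:>10}: {fold_change_at_constant_lipe(d_clogp):.2f}-fold potency expected")
# t-Bu comes out around 50-fold more potent, OMe about half, morpholine
# roughly a quarter - the same ballpark as the chart above.
```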

So in light of this paper, I'm rethinking my ligand-efficiency metrics. I'm still grappling with how LipE performs down at the fragment end of the molecular weight scale, and would be glad to hear thoughts on that. But Shultz's paper, if it can get us to toss out a lot of the proposed metrics already in the literature, will have done us all a service.

Comments (38) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico | Pharmacokinetics

August 19, 2013

Is The FDA the Problem?

Email This Entry

Posted by Derek

A reader sends along this account of some speakers at last year's investment symposium from Agora Financial. One of the speakers was Juan Enriquez, and I thought that readers here might be interested in his perspective.

First, the facts. According to Enriquez:

Today, it costs 100,000 times less than it once did to create a three-dimensional map of a disease-causing protein

There are about 300 times more of these disease proteins in databases now than in times past

The number of drug-like chemicals per researcher has increased 800 times

The cost to test a drug versus a protein has decreased ten-fold

The technology to conduct these tests has gotten much quicker

Now here’s Enriquez’s simple question:

"Given all these advances, why haven’t we cured cancer yet? Why haven’t we cured Alzheimer’s? Why haven’t we cured Parkinson’s?"

The answer likely lies in the bloated process and downright hostile-to-innovation climate for FDA drug approvals in this day and age...

According to Enriquez, this climate has gotten so bad that major pharmaceuticals companies have begun shifting their primary focus from R&D of new drugs to increased marketing of existing drugs — and mergers and acquisitions.

I have a problem with this point of view, assuming that it's been reported correctly. I'll interpret this as makes-a-good-speech exaggeration, but Enriquez himself has most certainly been around enough to realize that the advances he speaks of are not, by themselves, enough to lead to a shower of new therapies. That's a theme that has come up on this site several times, as well it might. I continue to think that if you could climb into a time machine and go back to, say, 1980 with these kinds of numbers (genomes sequenced, genes annotated, proteins with solved structures, biochemical pathways identified, etc.), everyone would assume that we'd be further along, medically, than we really are by now. Surely that sort of detailed knowledge would have solved some of the major problems? More specifically, I become more sure every year that drug discovery groups of that era might be especially taken aback at how the new era of target-based molecular-biology-driven drug research has ended up working out: as a much harder proposition than many might have thought.

So it's a little disturbing to see the line taken above. In effect, it's saying that yes, all these advances should have been enough to release a flood of new therapies, which means that something must be holding them back (in this case, apparently, the FDA). The thing is, the FDA probably has slowed things down - in fact, I'd say it almost certainly has. That's part of their job, insofar as the slowdowns are in the cause of safety.

And now we enter the arguing zone. On the one side, you have the reductio ad absurdum argument that yes, we'd have a lot more things figured out if we could just go directly into humans with our drug candidates instead of into mice, so why don't we just? That's certainly true, as far as it goes: we would surely kill off a fair number of people doing things that way, as the price of progress, but (more) progress there would almost certainly be. But no one - no one outside of North Korea, anyway - is seriously proposing this style of drug discovery. Someone who agrees with Enriquez's position would regard it as a ridiculous misperception of what they're calling for, designed to make them look stupid and heartless.

But I think that Enriquez's speech, as reported, is the ad absurdum in the other direction. The idea that the FDA is the whole problem is also an oversimplification. In most of these areas, the explosion of knowledge laid out above has not yet led to an explosion of understanding. You'd get the idea that there was this big region of unexplored stuff, and now we've pretty much explored it, so we should really be ready to get things done. But the reality, as I see it, is that there was this big region of unexplored stuff, and we set out to explore it, and found out that it was far bigger than we'd even dreamed. It's easy to get your scale of measurement wrong. It's quite similar to the way that humanity didn't realize just how large the Earth was, then how small it was compared to the solar system (and how off-center), and how non-special our sun was in the immensity of the galaxy, not to mention how many other galaxies there are and how far away they lie. Biology and biochemistry aren't quite on that scale of immensity, but they're plenty big enough.

Now, when I mentioned that we'd surely have killed off more people by doing drug research by the more direct routes, the reply is that we've been killing people off by moving too slowly as well. That's a valid argument. But under the current system, we choose to have people die passively, through mechanisms of disease that are already operating, while under the full-speed-ahead approaches, we might lower that number by instead killing off some others in a more active manner. It's typically human of us to choose the former strategy. The big questions are how many people would die in each category as we moved up and down the range between the two extremes, and what level of each casualty count we'd find "acceptable".

So while it's not crazy to say that we should be less risk-averse, I think it is silly to say that the FDA is the only (or even the main) thing holding us back. That position has a tendency both to bring on unnecessary anger directed at the agency and to raise unfulfillable hopes about what the industry can do in the near term. Neither of those seems useful to me.

Full disclosure - I've met Enriquez, three years ago at SciFoo. I'd be glad to give him a spot to amplify and extend his remarks if he'd like one.

Comments (40) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Regulatory Affairs

August 14, 2013

A Regeneron Profile

Email This Entry

Posted by Derek

In the spirit of this article about Regeneron, here's a profile in Forbes of the company's George Yancopoulos and Leonard Schleifer. There are several interesting things in there, such as these lessons from Roy Vagelos (when he became Regeneron's chairman after retiring from Merck):

Lesson one: Stop betting on drugs when you won’t have any clues they work until you finish clinical trials. (That ruled out expanding into neuroscience–and is one of the main reasons other companies are abandoning ailments like Alzheimer’s.) Lesson two: Stop focusing only on the early stages of drug discovery and ignoring the later stages of human testing. It’s not enough to get it perfect in a petri dish. Regeneron became focused on mitigating the two reasons that drugs fail: Either the biology of the targeted disease is not understood or the drug does something that isn’t expected and causes side effects.

They're not the only ones thinking this way, of course, but if you're not, you're likely to run into big (and expensive) trouble.

Comments (14) + TrackBacks (0) | Category: Drug Development | Drug Industry History

August 13, 2013

Druggability: A Philosophical Investigation

Email This Entry

Posted by Derek

I had a very interesting email the other day, and my reply to it started getting so long that I thought I'd just turn it into a blog post. Here's the question:

How long can we expect to keep finding new drugs?

By way of analogy, consider software development. In general, it's pretty hard to think of a computer-based task that you couldn't write a program to do, at least in principle. It may be expensive, or may be unreasonably slow, but physical possibility implies that a program exists to accomplish it.

Engineering is similar. If it's physically possible to do something, I can, in principle, build a machine to do it.

But it doesn't seem obvious that the same holds true for drug development. Something being physically possible (removing plaque from arteries, killing all cancerous cells, etc.) doesn't seem like it would guarantee that a drug will exist to accomplish it. No matter how much we'd like a drug for Alzheimer's, it's possible that there simply isn't one.

Is this accurate? Or is the language of chemistry expressive enough that if you can imagine a chemical solution to something, it (in principle) exists. (I don't really have a hard and fast definition of 'drug' here. Obviously all bets are off if your 'drug' is complicated enough to act like a living thing.)

And if it is accurate, what does that say about the long-term prospects for the drug industry? Is there any risk of "running out" of new drugs? Is drug discovery destined to be a stepping-stone until more advanced medical techniques are available?

That's an interesting philosophical point, and one that had never occurred to me in quite that way. I think that's because programming is much more of a branch of mathematics. If you've got a Universal Turing Machine and enough tape to run through it, then you can, in theory, run any program that ever could be run. And any process that can be broken down into handling ones and zeros can be the subject of a program, so the Church-Turing thesis would say that yes, you can calculate it.

But biochemistry is most definitely a different thing, and this is where a lot of people who come into it from the math/CS/engineering side run into trouble. There's a famous (infamous) essay called "Can A Biologist Fix A Radio" that illustrates the point well. The author actually has some good arguments, and some legitimate complaints about the way biochemistry/molecular biology has been approached. But I think that his thesis breaks down eventually, and I've been thinking on and off for years about just where that happens and how to explain what makes things go haywire. My best guess is algorithmic complexity. It's very hard to reduce the behavior of biochemical systems to mathematical formalism. The whole point of formal notation is to express things in the most compact and information-rich way possible, but trying to compress biochemistry in this manner doesn't give you much of an advantage, at least not in the ways we've tried to do it so far.

To get back to the question at hand, let's get philosophical. I'd say that at the most macro level, there are solutions to all the medical problems. After all, we have the example of people who don't have multiple sclerosis, who don't have malaria, who don't have diabetes or pancreatic cancer or what have you. We know that there are biochemical states where these things do not exist; the problem is then to get an individual patient's state back to that situation. Note that this argument does not apply to things like life extension, limb regeneration, and so on: we don't know if humans are capable of these things or not yet, even if there may be some good arguments to be made in their favor. But we know that there are human brains without Alzheimer's.

To move down a level from this, though, the next question is whether there are ways to put a patient's cells and organs back into a disease-free state. In some cases, I think that the answer has to be, for all practical purposes, "No". I tend to think that the later stages of Alzheimer's (for example) are in fact incurable. Neurons are dead and damaged, what was contained in them and in their arrangement is gone, and any repair system can only go so far. Too much information has been lost and too much entropy has been let in. I would like to be wrong about this, but I don't think I am.

But for less severe states and diseases, you can imagine various interventions - chemical, surgical, genetic - that could restore things. So the question here becomes whether there are drug-like solutions. The answer is tricky. If you look at a biochemical mechanism and can see that there's a particular pathway involving small molecules, then certainly, you can say that there could be a molecule to be found as a treatment, even if we haven't found it yet. But the first part of that last sentence has to be unpacked.

Take diabetes. Type I diabetes is proximately caused by lack of insulin, so the solution is to take insulin. And that works, although it's certainly not a cure, since you have to take insulin for the rest of your life, and it's impossible to take it in a way that perfectly mimics the way your body would administer it, etc. A cure would be to have working beta-cells again that respond just the way they're supposed to, and that's less likely to be achieved through a drug therapy. (Although you could imagine some small molecule that affects a certain class of stem cell, causing it to start the program to differentiate into a fully-formed beta cell, and so on). You'd also want to know why the original population of cells died in the first place, and how to keep that from happening again, which might also take you to some immunological and cell-cycle pathways that could be modulated by drug molecules. But all of these avenues might just as easily take you into genetically modified cloned cell lines and surgical implantation, too, rather than anything involving small-molecule chemistry.

Here's another level of complexity, then: insulin is certainly a drug, but it's not a small molecule of the kind I'd be making. Is there a small molecule that can replace it? You'd do very well with that indeed, but the answer (I think) is "probably not". If you look at the receptor proteins that insulin binds to, the recognition surfaces that are used are probably larger than small molecules can mimic. No one's ever found a small molecule insulin mimetic, and I don't think anyone is likely to. (On the other hand, if you're trying to disrupt a protein-protein interaction, you have more hope, although that's still an extremely difficult target. We can disrupt things a lot more easily than we can make them work). Even if you found a small-molecule insulin, you'd be faced with the problem of dosing it appropriately, which is no small challenge for a tightly and continuously regulated system like that one. (It's no small challenge for administering insulin itself, either).

And even for mechanisms that do involve small-molecule signaling, like the G-protein coupled receptors, there are still things to worry about. Take schizophrenia. You can definitely see problems with neural systems in the brain when you study that disease, and these neurons respond to, among other things, small-molecule neurotransmitters that the body makes and uses itself - dopamine, serotonin, acetylcholine and others. There are a certain number of receptors for each of those, and although we don't have all the combinations yet, I could imagine, on a philosophical level, that we could eventually have selective drugs that are agonists, antagonists, partial agonists, inverse agonists, what have you at all the subtypes. We have quite a few of them now, for some of the families. And I can even imagine that we could eventually have most or all of the combinations: a molecule that's a dopamine D2 agonist and a muscarinic M4 antagonist, all in one, and so on and so on. That's a lot more of a stretch, to be honest, but I'll stipulate that it's possible.

So you have them all. Now, which ones do you give to help a schizophrenic? We don't know. We have guesses and theories, but most of them are surely wrong. Every biochemical theory about schizophrenia is either wrong or incomplete. We don't know what goes wrong, or why, or how, or what might be done to bend things back in the right direction. It might be that we're in the same area as Alzheimer's: perhaps once a person's brain has developed in such a way that it slips into schizophrenia, that there is no way at all to rewire things, in the same way that we can't ungrow a tree in order to change the shape of its canopy. I've no idea, and we're going to know a lot more about the brain by the time we can answer that one.

So one problem with answering this question is that it's bounded not so much by chemistry as by biology. Lots and lots of biology, most of it unknown. But thinking in terms of sheer chemistry is interesting, too. Consider "The Library of Babel", the famous story by Jorge Luis Borges. It takes place in some sort of universe that is no more (and no less) than a vast library containing every possible book that can be produced with a 25-character set of letters and punctuation marks. This is, as a bit of reflection will show, a very, very large number, one large enough to contain everything that can possibly be written down. And all the slight variations. And all the misprints. And all the scrambled coded versions of everything, and so on and so on. (W. v. O. Quine extended this idea to binary coding, which brings you back to computability).

Now think about the universe of drug-like molecules. It is also very large, although it is absolutely insignificant compared to the terrifying Library of Babel. (It's worth noting that the Library contains all of the molecules that can ever exist, coded in SMILES strings - that thought just occurred to me at this very moment, and gives me the shivers). The universe of proteins works that way, too - an alphabet of twenty-odd letters for amino acids gives you the exact same situation as the Library, and if you imagine some hideous notation for coding in all the folding variants and post-translational modifications, all the proteins are written down as well.
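Just to give a feel for how fast this kind of alphabet-counting blows up, here's a toy calculation - the lengths are arbitrary choices of mine, not anything from Borges or from real chemical-space estimates:

```python
# Counting all strings of a given length over a fixed alphabet, Library-of-Babel style.
def count_strings(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length

# Print the number of digits in each count, since the counts themselves won't fit on a page.
print(len(str(count_strings(20, 300))))    # ~391 digits: possible 300-residue protein sequences
print(len(str(count_strings(25, 1000))))   # ~1398 digits: possible 1000-character Babel "pages"
```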

These, then, encompass every chemical compound up to some arbitrary size, and the original question is, is this enough? Are there questions for which none of these words are the answer? That takes you into even colder and deeper philosophical waters. Wittgenstein (among many others) wondered the same thing about our own human languages, and seems to have decided that there are indeed things that cannot be expressed, and that this marks the boundary of philosophy itself. Famously, his Tractatus ends with the line "Wovon man nicht sprechen kann, darüber muss man schweigen": whereof we cannot speak, we must pass over in silence.

We're not at that point in the language of chemistry and pharmacology yet, and it's going to be a long, long time before we ever might be. Just the fact, though, that computability seems like such a more reasonable proposition in computer science than druggability does in biochemistry tells you a great deal about how different the two fields are.

Update: On the subject of computability, I'm not sure how I missed the chance to bring Gödel's Incompleteness Theorem into this, just to make it a complete stewpot of math and philosophy. But the comments to this post point out that even if you can write a program, you cannot be sure whether it will ever finish the calculation. This Halting Problem is one of the first things ever to be proved formally undecidable, and the issues it raises are very close to those explored by Gödel. But as I understand it, this is decidable for a machine with a finite amount of memory, running a deterministic program. The problem is, though, that it still might take longer than the expected lifetime of the universe to "halt", which leaves you, for, uh, practical purposes, in pretty much the same place as before. This is getting pretty far afield from questions of druggability, though. I think.

Comments (40) + TrackBacks (0) | Category: Drug Development | Drug Industry History | In Silico

August 12, 2013

How Much to Develop a Drug? An Update.

Email This Entry

Posted by Derek

I've referenced this Matthew Herper piece on the cost of drug development several times over the last few years. It's the one where he totaled up pharma company R&D expenditures (from their own financial statements) and then just divided that by the number of drugs produced. Crude, but effective - and what it said was that some companies were spending ridiculous, unsustainable amounts of money for what they were getting back.

Now he's updated his analysis, looking at a much longer list of companies (98 of them!) over the past ten years. Here's the list, in a separate post. Abbott is at the top, but that's misleading, since they spent R&D money on medical devices and the like, whose approvals don't show up in the denominator.

But that's not the case for #2, Sanofi: 6 drugs approved during that time span, at a cost, on their books, of ten billion dollars per drug. Then you have (as some of you will have guessed) AstraZeneca - four drugs at 9.5 billion per. Roche, Pfizer, Wyeth, Lilly, Bayer, Novartis and Takeda round out the top ten, and even by that point we're still looking at six billion a whack. One large company that stands out, though, is Bristol-Myers Squibb, coming in at #22, at 3.3 billion per drug. The bottom part of the list is mostly smaller companies, often with one approval in the past ten years, and that one done reasonably cheaply. But three others that stand out as having spent significant amounts of money, while getting something back for it, are Genzyme, Shire, and Regeneron. Genzyme, of course, has now been subsumed in that blazing bonfire of R&D cash known as Sanofi, so that takes care of that.

Sixty-six of the 98 companies studied launched only one drug this decade. The costs borne by these companies can be taken as a rough estimate of what it takes to develop a single drug. The median cost per drug for these singletons was $350 million. But for companies that approve more drugs, the cost per drug goes up – way up – until it hits $5.5 billion for companies that have brought to market between eight and 13 medicines over a decade.

And he's right on target with the reason why: the one-approval companies on the list were, for the most part, lucky the first time out. They don't have failures on their books yet. But the larger organizations have had plenty of those to go along with the occasional successes. You can look at this situation more than one way - if the single-drug companies are an indicator of what it costs to get one drug discovered and approved, then the median figure is about $350 million. But keep in mind that these smaller companies tend to go after a different subset of potential drugs. They're a bit more likely to pick things with a shorter, more defined clinical path, even if there isn't as big a market at the end, in order to have a better story for their investors.

Looking at what a single successful drug costs, though, isn't a very good way to prepare for running a drug company. Remember, the only small companies on this list are the ones that have succeeded, and many, many more of them spent all their money on their one shot and didn't make it. That's what's reflected in the dollars-per-drug figures for the larger organizations, that and the various penalties for being a huge organization. As Herper says:

Size has a cost. The data support the idea that large companies may spend more per drug than small ones. Companies that spent more than $20 billion in R&D over the decade spent $6.3 billion per new drug, compared to $2.8 billion for those that had budgets of between $5 billion and $10 billion. Some CEOs, notably Christopher Viehbacher at Sanofi, have faced low R&D productivity in part by cutting the budget. This may make sense in light of this data. But it is worth noting that the bigger firms brought twice as many drugs to market. It still could be that the difference between these two groups is due to smaller companies not bearing the full financial weight of the risk of failure.

There are other factors that kick these numbers around a bit. As Herper points out, there's a tax advantage for R&D expenditures, so there's no incentive to under-report them (but there's also an IRS to keep you from going wild over-reporting them, too). And some of the small companies on the list picked up their successes by taking on failed programs from larger outfits, which had already spent a chunk of R&D cash on those drugs beforehand. But overall, the picture is just about as grim as you'd have figured, if not a good deal more so. Our best hope is that this is a snapshot of the past, and not a look into the future. Because we can't go on like this.
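The arithmetic itself is nothing exotic, which is part of the appeal of Herper's approach - it's just spend divided by approvals, plus a median for the one-drug companies. Here's a sketch of that calculation with made-up numbers (the company names and figures are hypothetical; the real analysis uses each firm's reported R&D over the decade and its count of approvals):

```python
from statistics import median

# Hypothetical (R&D spend over a decade in $B, new drugs approved) - illustrative only
companies = {
    "BigPharmaA": (60.0, 6),
    "BigPharmaB": (38.0, 4),
    "SmallBiotechC": (0.35, 1),
    "SmallBiotechD": (0.5, 1),
}

# Crude Herper-style metric: total R&D spend divided by number of approvals
cost_per_drug = {name: spend / n for name, (spend, n) in companies.items()}
for name, cost in sorted(cost_per_drug.items(), key=lambda kv: -kv[1]):
    print(f"{name:>14}: ${cost:,.2f}B per approved drug")

# Median for the single-approval companies, as a rough "cost of one success"
singles = [cost_per_drug[name] for name, (_, n) in companies.items() if n == 1]
print(f"median for one-drug companies: ${median(singles):.2f}B")
```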

Comments (33) + TrackBacks (0) | Category: Drug Development | Drug Industry History

August 9, 2013

An Interview With A GSK Shanghai Scientist

Email This Entry

Posted by Derek

Here's an interview with Liu Xuebin, formerly of GlaxoSmithKline in China. That prospect should perk up the ears of anyone who's been following the company's various problems and scandals in that country.

Liu Xuebin recalls working 12-hour shifts and most weekends for months, under pressure to announce research results that would distinguish his GlaxoSmithKline Plc (GSK) lab in China as a force in multiple sclerosis research.
It paid off -- for a while. Nature Medicine published findings about a potential new MS treatment approach in January 2010 and months later Liu was promoted to associate director of Glaxo’s global center for neuro-inflammation research in Shanghai. Two months ago, his career unraveled. An internal review found data in the paper was misrepresented. Liu, 45, who stands by the study, was suspended from duty on June 8 and quit two days later.

Liu was the first author on the disputed paper, but he says that he stands by it, and opposed a retraction (only he and one other author, out of 18, did so). He had been at the NIH for several years before being hired back to Shanghai by Glaxo, which turned out to be something of a change:

“This was my first job in industry and there was a very different culture,” Liu said behind thick, rimless glasses and dressed in a short-sleeve checked shirt tucked neatly into his belted trousers. “I was also not experienced with compliance back then, and we didn’t pay enough attention to things such as recording of reports from our collaborators.”

There was also a culture in which Glaxo scientists were grouped into competitive teams, known as discovery performance units, which vied internally for funds every three years, he said. Those who failed to meet certain targets risked being disbanded.

What I find odd is Liu's emphasis on publishing, and publishing first. That seems like a very academic mindset - I have to tell you, over my time in industry, rarely have I ever felt a sense of urgency to publish my results in a journal. And even those exceptions have been for other reasons, usually the "If we're going to write this stuff up, now's the time" sort. Never have I felt that we were racing to get something into, say, Nature Medicine before someone else did. Getting something patented before someone else, into the clinic before someone else? Oh, yes indeed. But not into some journal.

But neither have I been part of a far-flung research site, on which a lot of money had been spent, trying to show that it was all worthwhile. Maybe that's the difference. Even so, if the results that the Shanghai group got were really important for an approach to multiple sclerosis therapy, that's all the more reason why the findings should have spoken for themselves inside the company (and been the subject of immediate further development, too). We don't have to get Nature Medicine (or whoever) to validate things for us: "Oh, wow, that stuff must be real, the journal accepted our paper". A company doesn't demonstrate that it finds something valuable by sending it out to a big-name journal, at least not at first: it does that by spending more time and money on the idea.

But Liu doesn't talk the way that I would expect in this article, and I feel sure that the Bloomberg reporter on this piece didn't pick up on it. There's no "We delivered a new MS program, we validated a whole new group of drug targets, we identified a high-profile clinical candidate that went immediately into development". That's how someone in drug R&D would put it. Not "We were racing to publish our results". It's all quite odd.

Comments (18) + TrackBacks (0) | Category: Drug Development | The Dark Side

July 22, 2013

The NIH's Drug Repurposing Program Gets Going

Email This Entry

Posted by Derek

Here's an update on the NIH's NCATS program to repurpose failed clinical candidates from the drug industry. I wrote about this effort here last year, and expressed some skepticism. It's not that I think that trying drugs (or near-drugs) for other purposes is a bad idea prima facie, because it isn't. I just wonder about the way the NIH is talking about this, versus its chances for success.

As was pointed out last time this topic came up, the number of failed clinical candidates involved in this effort is dwarfed by the number of approved compounds that could also be repurposed - and have, in fact, been looked at for years for just that purpose. The success rate is not zero, but it has not been a four-lane shortcut to the promised land, either. And the money involved here ($12.7 million split between nine grants) is, as that Nature piece correctly says, "not much". Especially when you're going after something like Alzheimer's:

Strittmatter’s team is one of nine that won funding last month from the NIH’s National Center for Advancing Translational Sciences (NCATS) in Bethesda, Maryland, to see whether abandoned drugs can be aimed at new targets. Strittmatter, a neuro­biologist at Yale University in New Haven, Connecticut, hopes that a failed cancer drug called saracatinib can block an enzyme implicated in Alzheimer’s. . .

. . .Saracatinib inhibits the Src family kinases (SFKs), enzymes that are commonly activated in cancer cells, and was first developed by London-based pharmaceutical company Astra­Zeneca. But the drug proved only marginally effective against cancer, and the company abandoned it — after spending millions of dollars to develop it through early human trials that proved that it was safe. With that work already done, Strittmatter’s group will be able to move the drug quickly into testing in people with early-stage Alzheimer’s disease.

The team plans to begin a 24-person safety and dosing trial in August. If the results are good, NCATS will fund the effort for two more years, during which the scientists will launch a double-blind, randomized, placebo-controlled trial with 159 participants. Over a year, the team will measure declines in glucose metabolism — a marker for progression of Alzheimer’s disease — in key brain regions, hoping to find that they have slowed.

If you want some saracatinib, you can buy some, by the way (that's just one of the suppliers). And since AZ has already taken this through Phase I, the chances of it passing another Phase I are very good indeed. I will not be impressed by any press releases at that point. The next step, the Phase IIa with 159 people, is as far as this program is mandated to go. But how far is that? One year is not very long in a population of Alzheimer's patients, and 159 patients is not all that many in a disease that heterogeneous. And the whole trial is looking at a surrogate marker (glucose metabolism) which (to the best of my knowledge) has not demonstrated any clinical utility as a measure of efficacy for the disease. From what I know about the field, getting someone at that point to put up the big money for larger trials will not be an easy sell.

I understand the impulse to go after Alzheimer's - who dares, wins, eh? But given the amount of money available here, I think the chances for success would be better against almost any other disease. It is very possible to take a promising-looking Alzheimer's candidate all the way through a multi-thousand-patient multiyear Phase III and still wipe out - ask Eli Lilly, among many others. You'd hope that at least a few of them are in areas where there's a shorter, more definitive clinical readout.

Here's the list, and here's the list of all the compounds that have been made available to the whole effort so far. Update: structures here. The press conference announcing the first nine awards is here. The NIH has not announced what the exact compounds are for all the grants, but I'm willing to piece it together myself. Here's what I have:

One of them is saracatinib again, this time for lymphangioleiomyomatosis. There's also an ER-beta agonist being looked at for schizophrenia, a J&J/Janssen nicotinic allosteric modulator for smoking cessation, and a Pfizer ghrelin antagonist for alcoholism (maybe from this series?). There's a Sanofi compound for Duchenne muscular dystrophy, which the NIH has studiously avoided naming, although it's tempting to speculate that it's riferminogene pecaplasmide, a gene-therapy vector for FGF1. But Genetic Engineering News says that there are only seven compounds, with a Sanofi one doubling up as well as the AZ kinase inhibitor, so maybe this one is the ACAT inhibitor below. Makes more sense than a small amount of money trying to advance a gene therapy approach, for sure.

There's an endothelin antagonist for peripheral artery disease. Another unnamed Sanofi compound is being studied for calcific aortic valve stenosis, and my guess is that it's canosimibe, an ACAT inhibitor, since that enzyme has recently been linked to stenosis and heart disease. Finally, there's a Pfizer glycine transport inhibitor being looked at for schizophrenia, which seems a bit odd, because I was under the impression that this compound had already failed in the clinic for that indication. They appear to have some other angle.

So there you have it. I look forward to seeing what comes of this effort, and also to hearing what the NIH will have to say at that point. We'll check in when the time comes!

Update: here's more from Collaborative Chemistry. And here's a paper they published on the problems of identifying compounds for initiatives like this:

In particular, it is notable that NCATS provides on its website [31] only the code number, selected international non-proprietary names (INN) and links to more information including mechanism of action, original development indication, route of administration and formulation availability. However, the molecular structures corresponding to the company code numbers were not included. Although we are highly supportive of the efforts of NCATS to promote drug repurposing in the context of facilitating and funding proposals, we find this omission difficult to understand for a number of reasons. . .

They're calling for the NIH (and the UK initiative in this area as well) to provide real structures and IDs for the compounds they're working with. It's hard to argue against it!

Comments (8) + TrackBacks (0) | Category: Academia (vs. Industry) | Clinical Trials | Drug Development

July 19, 2013

Salary Freeze at Lilly

Email This Entry

Posted by Derek

We now return to our regularly scheduled program around here - or at least, Eli Lilly is now returning to theirs. The company announced that they're freezing salaries for most of the work force, in an attempt to save hundreds of millions of dollars in advance of their big patent expirations. Some bonuses will be reduced as well, they say, but that leaves a lot of room. Higher-ups don't look for increases in base pay as much as they look for bonuses, options, and restricted shares (although, to be fair, these are often awarded as a percent of salary).

‘‘This action is necessary to withstand the impact of upcoming patent expirations and to support the launch of our large phase III pipeline,’’ Chief Executive Officer John Lechleiter, 59, said in a letter to employees today, a copy of which was obtained by Bloomberg. ‘‘The current situation requires us to take the appropriate action now to secure our company’s future. We can’t allow ourselves to let up and fail to make the tough choices.”

Lechleiter himself has not had a raise since 2010, it appears, although I'm not sure if his non-salary compensation follows the same trend. If anyone has the time to dig through the company's last few proxy statements, feel free, but actually figuring out what a chief executive is really paid is surprisingly difficult. (I remember an article a few years ago where several accountants and analysts were handed the same batch of SEC filings and all of them came out with different compensation numbers).

But there's no doubt that Lilly is in for it, something that has been clear for some time now. The company's attempts to shore up its clinical pipeline haven't gone well, and it looks like (more and more) they're putting a lot of their hopes on a success in Alzheimer's. If they see anything, that will definitely turn the whole situation around - between their diagnostic branch and a new therapeutic, they'll own the field, and a huge field it is. But the odds of this happening are quite low. The most likely outcome, it seems to me, is equivocal data that will be used to put pressure on the FDA, etc., to approve something, anything, for Alzheimer's.

It's worth remembering that it wasn't very long ago at all that the higher-ups at Lilly were telling everyone that all would be well, that they'd be cranking out two big new drugs a year by now. Hasn't happened. Since that 2010 article, they've had pretty much squat - well, Jentadueto, which is Boehringer Ingelheim's linagliptin, which Lilly is co-marketing, with metformin added. Earlier this year, they were talking up plans for five regulatory submissions in the near future, but that figure is off now that enzastaurin has already bombed in Phase III. Empagliflozin and ramucirumab are still very much alive, but will be entering crowded markets if they make it through. Dulaglutide is holding up well, though.

But will these be enough to keep Lilly from getting into trouble? That salary freeze is your answer: no, they will not. All the stops must be pulled out, and the ones after this will be even less enjoyable.

Comments (19) + TrackBacks (0) | Category: Business and Markets | Drug Development

Good Advice: Get Lost!

Email This Entry

Posted by Derek

I thought everyone could use something inspirational after the sorts of stories that have been in the news the last few days. Here's a piece at FierceBiotech on Regeneron, a company that's actually doing very well and expanding. And how have they done it?

Regeneron CEO Dr. Leonard "Len" Schleifer, who founded the company in 1988, says he takes pride in the fact that his team is known for doing "zero" acquisitions. All 11 drugs in the company's clinical-stage pipeline stem from in-house discoveries. He prefers a science-first approach to running a biotech company, hiring Yancopoulos to run R&D in 1989, and he endorsed a 2012 pay package for the chief scientist that was more than twice the size of his own compensation last year.

Scientists run Regeneron. Like Yancopoulos, Schleifer is an Ivy League academic scientist turned biotech executive. Regeneron gained early scientific credibility with a 1990 paper in the journal Science on cloning neurotrophin factor, a research area that was part of a partnership with industry giant Amgen. Schleifer has recruited three Nobel Prize-winning scientists to the board of directors, which is led by long-time company Chairman Dr. P. Roy Vagelos, who had a hand in discovering the first statin and delivering a breakthrough treatment for a parasitic cause of blindness to patients in Africa.

"I remember these people from Pfizer used to go around telling us, 'You know, blockbusters aren't discovered, they're made,' as though commercial people made the blockbuster," Schleifer said in an interview. "Well, get lost. Science, science, science--that's what this business is about."

I don't know about you, but that cheers me up. That kind of attitude always does!

Comments (10) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

July 17, 2013

The GSK Jackpot

Email This Entry

Posted by Derek

Well, this got my attention: according to the Sunday Times, GlaxoSmithKline is preparing to hand out hefty bonus payments to scientists if they have a compound approved for sale. Hefty, in this context, means up to several million dollars. The earlier (and much smaller) payouts for milestones along the way will disappear, apparently, to be replaced by this jackpot.

The article says that "The company will determine who is entitled to share in the payout by judging which staff were key to its discovery and development", and won't that be fun? In Germany, the law is that inventors on a corporate patent do get a share of the profits, which can be quite lucrative, but it means that there are some very pointed exchanges about just who gets to be an inventor. The prospect of million-dollar bonuses will be very welcome, but will not bring out the best in some people, either. (It's not clear to me, though, if these amounts are to be split up among people somehow, or if single individuals can possibly expect that much).

John LaMattina has some thoughts on this idea here. He's also wondering how to assign credit:

I am all for recognizing scientists in this way. After all, they must be successful in order for a company the size of GSK to have a sustaining pipeline. However, the drug R&D process is really a team effort and not driven by an individual. The inventor whose name is on the patent is generally the chemist or chemists who designed the molecule that had the necessary biological activity. Rarely, however, are chemists the major contributor to the program’s success. Oftentimes, it is a biologist who conceives the essence of the program by the scientific insight he or she might have. The discovery of Pfizer’s Xeljanz is such a case. There have been major classes of drugs that have been saved by toxicologists who ran insightful animal experiments to explain aberrant events in rats as was done by Merck with both the statins and proton-pump inhibitors – two of the biggest selling classes of drugs of all time.

On occasion, the key person in a drug program is the process chemist who has designed a synthesis of the drug that is amenable to the large scales of material needed to conduct clinical trials. Clinical trial design can also be crucial, particularly when studying a drug with a totally new mechanism of action. A faulty trial design can kill any program. Even a nurse involved in the testing of a drug can make the key discovery, as happened in Pfizer’s phase 1 program with Viagra, where the nurse monitoring the patients noticed that the drug was enhancing blood flow to an organ other than the heart. To paraphrase Hilary Clinton, it takes a village to discover and develop a drug.

You could end up with a situation where the battery is arguing with the drive shaft, both of whom are shouting at the fuel pump and refusing to speak to the tires, all because there was a reward for whichever one of them was the key to getting the car to go down the driveway.

There's another problem - getting a compound to go all the way to the market involves a lot of luck as well. No one likes to talk about that very much - it's in everyone's interest to show how it was really due to their hard work and intelligence - but equal amounts of hard work and brainpower go into projects that just don't make it. Those are necessary, but not sufficient. So if GSK is trying to put this up as an incentive, it's only partially coupled to factors that the people it's aimed at can influence.

And as LaMattina points out, the time delay in getting drugs approved is another factor. If I discover a great new compound today, I'll be lucky to see it on the market by, say, 2024 or so. I have no objection to someone paying me a million dollars on that date, but it won't have much to do with what I've been up to in the interim. And in many cases, some of the people you'd want to reward aren't even with the company by the time the drug makes it through, anyway. So while I cannot object to drug companies wanting to hand out big money to their scientists, I'm not sure what it will accomplish.

Comments (71) + TrackBacks (0) | Category: Business and Markets | Drug Development | Who Discovers and Why

July 11, 2013

The Last PPAR Compound?

Email This Entry

Posted by Derek

Roche has announced that they're halting trials of aleglitazar, a long-running investigational drug in their diabetes portfolio. I'm noting this because I think that this might be the absolute last of the PPAR ligands to fail in the clinic. And boy howdy, has it been a long list. Merck, Lilly, Kyorin, Bristol-Myers Squibb, Novo Nordisk, GlaxoSmithKline, and Bayer are just the companies I know right off the top of my head that have had clinical failures in this area, and I'm sure that there are plenty more. Some of those companies (GSK, for sure) have had multiple clinical candidates go down, so the damage is even worse than it appears.

That's why I nominated this class in the Clinical Futility Awards earlier this summer. Three PPAR compounds actually made it to market, but the record has not been happy there, either. Troglitazone was pulled early, Avandia (rosiglitazone) has (after a strong start) been famously troubled, and Actos (pioglitazone) has its problems, too.

The thing is, no one knows about all this, unless they follow biomedical research in some detail. Uncounted billions have been washed through the grates; years and years of work involving thousands of people has come to nothing. The opportunity costs, in retrospect, are staggering. So much time, effort, and money could have been spent on something else, but there was no way to know that without spending it all. There never really is.

I return to this theme around here every so often, because I think it's an important one. The general public hears about the drugs that we get approved, because we make a big deal out of them. But the failures, for the most part, are no louder than the leaves falling from the trees. They pass unnoticed. Most people never knew about them at all, and the people who did know would rather move on to something else. But if you don't realize how many of these failures there are, and how much they cost, you can get a completely mistaken view of drug discovery. Sure, look at the fruit on the branches, on those rare occasions when some appears. But spare a glance at that expensive layer of leaves on the ground.

Comments (31) + TrackBacks (0) | Category: Clinical Trials | Diabetes and Obesity | Drug Development

June 19, 2013

The Drug Industry and the Obama Administration

Email This Entry

Posted by Derek

Over at Forbes, John Osborne adds some details to what has been apparent for some time now: the drug industry seems to have no particular friends inside the Obama administration:

Earlier this year I listened as a recently departed Obama administration official held forth on the industry and its rather desultory reputation. . .the substance of the remarks, and the apparent candor with which they were delivered, remain fresh in my mind, not least because of the important policy implications that the comments reflect.

. . .In part, there’s a lingering misimpression as to how new medicines are developed. While the NIH and its university research grantees make extraordinary discoveries, it is left to for-profit pharmaceutical and biotechnology companies to conduct the necessary large scale clinical studies and obtain regulatory approval prior to commercialization. Compare the respective annual spending totals: the NIH budget is around $30 billion, and the industry spends nearly double that amount. While the administration has great affection for universities, non-profit patient groups and government researchers (and it was admirably critical of the sequester’s meat cleaver impact on government sponsored research programs), it does not credit the essential role of industry in bringing discoveries from the bench to the bedside.

Terrific. I have to keep reminding myself how puzzled I was when I first came across the "NIH and universities discover all the drugs" mindset, but repeated exposures to it over the last few years have bred antibodies. If anyone from the administration would like to hear what someone who is not a lobbyist, not a CEO, not running for office, and has actually done this sort of work has to say about the topic, well, there are plenty of posts on this blog to refer to (and the comments sections to them are quite lively, too). In fact, I think I'll go ahead and link to a whole lineup of them - that way, when the topic comes up again, and it will, I can just send everyone here:

August 2012: A Quick Tour Through Drug Development Reality
May 2011: Maybe It Really Is That Hard?
March 2011: The NIH Goes For the Gusto
Feb 2011: The NIH's New Drug Discovery Center: Heading Into the Swamp?
Nov 2010: Where Drugs Come From: The Numbers
August 2009: Just Give It to NIH
August 2009: Wasted Money, Wasted Time?
July 2009: Where Drugs Come From, and How. Once More, With A Roll of the Eyes
May 2009: The NIH Takes the Plunge
Sep 2007: Drugs From Where?
November 2005: University of Drug Discovery?
October 2005: The Great Divide
September 2004: The NIH in the Clinic
September 2004: One More On Basic Research and the Clinic
September 2004: A Real-World Can O' Worms
September 2004: How Much Basic Research?
September 2004: How It Really Works

There we go - hours of reading, and all in the service of adding some reality to what is often a discussion full of unicorn burgers. Back to Osborne's piece, though - he goes on to make the point that one of the other sources of trouble with the administration is that the drug industry has continued to be profitable during the economic downturn, which apparently has engendered some suspicion.

And now for some 100-proof politics. The last of Osborne's contentions is that the administration (and many legislators as well) see the Medicare Part D prescription drug benefit as a huge windfall for the industry, and one that should be rolled back via a rebate program, setting prices back to what gets paid out under the Medicaid program instead. Ah, but opinions differ on this:

It’s useful to recall that former Louisiana Congressman and then PhRMA head Billy Tauzin negotiated with the White House in 2009 on behalf of the industry over this very question. Under the resulting deal, the industry agreed to support passage of the ACA and to make certain payments in the form of rebates and fees that amounted to approximately $80 billion over ten years; in exchange the administration agreed to resist those in Congress who pressed for more concessions from the drug companies or wanted to impose government price setting. . .

Tauzin's role, and the deal that he helped cut, have not been without controversy. I've always been worried about deals like this being subject to re-negotiations whenever it seems convenient, and those worries are not irrational, either:

. . .The White House believes that the industry would willingly (graciously? enthusiastically?) accept a new Part D outpatient drug rebate. Wow. The former official noted that the Simpson-Bowles deficit reduction panel recommended it, and its report was favorably endorsed by no less than House Speaker Boehner. Apparently, it is inconceivable to the White House that Boehner’s endorsement of the Simpson-Bowles platform would have occurred without the industry’s approval. Wow, again. That may be a perfectly logical assumption, but the other industry representatives within earshot never imagined that they had endorsed any such thing. No, it’s clear they have been under the (naïve) impression that the aforementioned $80 billion “contribution” was a very substantial sum in support of patients and the government treasury – and offered in a spirit of cooperation in recognition of the prospective benefits to industry of the expanded coverage that lies at the heart of Obamacare. With that said, the realization that this may be just the first of several installment payments left my colleagues in stunned silence; some mouths were visibly agape.

This topic came up late last year around here as well. And it'll come up again.

Comments (37) + TrackBacks (0) | Category: Academia (vs. Industry) | Current Events | Drug Development | Regulatory Affairs

June 18, 2013

Bernard Munos on The Last Twelve Years of Pharma

Email This Entry

Posted by Derek

Bernard Munos (ex-Lilly, now consulting) is out with a paper reviewing the approved drugs from 2000 to 2012. What's the current state of the industry? Is the upturn in drug approvals over the last two years real, or an artifact? And is it enough to keep things going?

Over that twelve-year span, drug approvals averaged 27 per year. Half of all the new drugs were in three therapeutic areas: cancer, infectious disease, and CNS. And as far as mechanisms go, there were about 190 different ones, by Munos' count. The most crowded category was (as might have been guessed) the 17 tyrosine kinase inhibitors, but 85% of the mechanisms were used by only one or two drugs, which is a long tail indeed.

Half those mechanisms were novel - that is, they were not represented by drugs approved before 2000. Coming up behind these first-in-class mechanisms were 29 follow-on drugs during this period, with an average gap of just under three years between the first and second drugs. What that tells you is that the follower programs were started at either about the same time as the first-in-class compounds (and had a slightly longer path through development), or were started at the first opportunity once the other program or mechanism became known. This means that they were started on very nearly the same risk basis as the original program: a three-year gap is not enough to validate much for a new mechanism, other than the fact that another organization thinks that it's worth working on, too. (Don't laugh at that one - there are research departments that seem to live only for this validation, and regard their own first-in-class ideas with fear and suspicion).

Overall, though, Munos says that that fast-follower approach doesn't seem to be very effective, or not any more, given that few targets seem to be yielding more than one or two drugs. And as just mentioned, the narrow gap between first and second drugs also suggests that the risk-lowering effect of this strategy isn't very impressive, either.

Here's another interesting/worrisome point:

The long tail (of the mode-of-action curve). . . suggests that pharmaceutical innovation is a by-product of exploration, and not the result of pursuing a limited set of mechanisms, reflecting, for instance, a company’s marketing priorities. Put differently, there does not seem to be enough mechanisms able to yield multiple drugs, to support an industry. . .The last couple of years have seen an encouraging rise in new drug approvals, including many based on novel modes of action. However that surge has benefited companies unequally, with the top 12 pharmaceutical companies only garnering 25 out of 68 NMEs (37%). This is not enough to secure their future.

Looking at what many (most?) of the big companies are going through right now, it's hard to argue with that point of view. The word "secure" does not appear within any short character length of "future" when you look through the prospects for Lilly, AstraZeneca, and others.

Note also that part about how what a drug R&D operation finds isn't necessarily what it was looking for. That doesn't mesh well with some models of management:

The drug hunter’s freedom to roam, and find innovative translational opportunities wherever they may lie is an essential part of success in drug research. This may help explain the disappointing performance of the programmatic approaches to drug R&D, that have swept much of the industry in the last 15 years. It has important managerial implications because, if innovation cannot be ordained, pharmaceutical companies need an adaptive – not directive – business model.

But if innovation cannot be ordained, why does a company need lots of people in high positions to ordain it, each with his or her own weekly meeting and online presentations database for all the PowerPoint slides? It's a head-scratcher of a problem, isn't it?

Comments (29) + TrackBacks (0) | Category: Drug Development | Drug Industry History

May 16, 2013

The Atlantic on Drug R&D

Email This Entry

Posted by Derek

"Can you respond to this tripe?" asked one of the emails that sent along this article in The Atlantic. I responded that I was planning to, but that things were made more complicated by my being extensively quoted in said tripe. Anyway, here goes.

The article, by Brian Till of the New America Foundation, seems somewhat confused, and is written in a confusing manner. The title is "How Drug Companies Keep Medicine Out of Reach", but the focus is on neglected tropical diseases, not all medicine. Well, the focus is actually on a contested WHO treaty. But the focus is also on the idea of using prizes to fund research, and on the patent system. And the focus is on the general idea of "delinking" R&D from sales in the drug business. Confocal prose not having been perfected yet, this makes the whole piece a difficult read, because no matter which of these ideas you're waiting to hear about, you end up having a long wait while you work your way through the other stuff. There are any number of sentences in this piece that reference "the idea" and its effects, but there is no sentence that begins with "Here's the idea."

I'll summarize: the WHO treaty in question is as yet formless. There is no defined treaty to be debated; one of the article's contentions is that the US has blocked things from even getting that far. But the general idea is that signatory states would commit to spending 0.01% of GDP on neglected diseases each year. Where this money goes is not clear. Grants to academia? Setting up new institutes? Incentives to commercial companies? And how the contributions from various countries are to be managed is not clear, either: should Angola (for example) pool its contributions with other countries (or send them somewhere else outright), or are they interested in setting up their own Angolan Institute of Tropical Disease Research?

The fuzziness continues. You will read and read through the article trying to figure out what happens next. The "delinking" idea comes in as a key part of the proposed treaty negotiations, with the reward for discovery of a tropical disease treatment coming from a prize for its development, rather than patent exclusivity. But where that money comes from (the GDP-linked contributions?) is unclear. Who sets the prize levels, at what point the money is awarded, who it goes to: hard to say.

And the "Who it goes to" question is a real one, because the article says that another part of the treaty would be a push for open-source discovery on these diseases (Matt Todd's malaria efforts at Sydney are cited). This, though, is to a great extent a whole different question than the source-of-funds one, or the how-the-prizes-work one. Collaboration on this scale is not easy to manage (although it might well be desirable) and it can end up replacing the inefficiencies of the marketplace with entirely new inefficiencies all its own. The research-prize idea seems to me to be a poor fit for the open-collaboration model, too: if you're putting up a prize, you're saying that competition between different groups will spur them on, which is why you're offering something of real value to whoever finishes first and/or best. But if it's a huge open-access collaboration, how do you split up the prize, exactly?

At some point, the article's discussion of delinking R&D and the problems with the current patent model spread fuzzily outside the bounds of tropical diseases (where there really is a market failure, I'd say) and start heading off into drug discovery in general. And that's where my quotes start showing up. The author did interview me by phone, and we had a good discussion. I'd like to think that I helped emphasize that when we in the drug business say that drug discovery is hard, that we're not just putting on a show for the crowd.

But there's an awful lot of "Gosh, it's so cheap to make these drugs, why are they so expensive?" in this piece. To be fair, Till does mention that drug discovery is an expensive and risky undertaking, but I'm not sure that someone reading the article will quite take on board how expensive and how risky it is, and what the implications are. There's also a lot of criticism of drug companies for pricing their products at "what the market will bear", rather than as some percentage of what it cost to discover or make them. This is a form of economics I've criticized many times here, and I won't go into all the arguments again - but I will ask: what other products are priced in such a manner? Other than what customers will pay for them? Implicit in these arguments is the idea that there's some sort of reasonable, gentlemanly profit that won't offend anyone's sensibilities, while grasping for more than that is just something that shouldn't be allowed. But just try to run an R&D-driven business on that concept. I mean, the article itself details the trouble that Eli Lilly, AstraZeneca, and others are facing with their patent expirations. What sort of trouble would they be in if they'd said "No, no, we shouldn't make such profits off our patented drugs. That would be indecent." Even with those massive profits, they're in trouble.

And that brings up another point: we also get the "Drug companies only spend X pennies per dollar on R&D". That's the usual response to pointing out situations like Lilly's; that they took the money and spent it on fleets of yachts or something. The figure given in the article is 16 cents per dollar of revenue, and it's prefaced by an "only". Only? Here, go look at different industries, around the world, and find one that spends more. By any industrial standard, we are plowing massive amounts back into the labs. I know that I complain about companies doing things like stock buybacks, but that's a complaint at the margin of what is already pretty impressive spending.

To finish up, here's one of the places I'm quoted in the article:

I asked Derek Lowe, the chemist and blogger, for his thoughts on the principle of delinking R&D from the actual manufacture of drugs, and why he thought the industry, facing such a daunting outlook, would reject an idea that could turn fallow fields of research on neglected diseases into profitable ones. "I really think it could be viable," he said. "I would like to see it given a real trial, and neglected diseases might be the place to do it. As it is, we really already kind of have a prize model in the developed countries, market exclusivity. But, at the same time, you could look at it and it will say, 'You will only make this amount of money and not one penny more by curing this tropical disease.' Their fear probably is that if that model works great, then we'll move on to all the other diseases."

What you're hearing is my attempt to bring in the real world. I think that prizes are, in fact, a very worthwhile thing to look into for market failures like tropical diseases. There are problems with the idea - for one thing, the prize payoff itself, compared with the time and opportunity cost, is hard to get right - but it's still definitely worth thinking about. But what I was trying to tell Brian Till was that drug companies would be worried (and rightly) about the extension of this model to all other disease areas. Wrapped up in the idea of a research-prize model is the assumption that someone (a wise committee somewhere) knows just what a particular research result is worth, and can set the payout (and afterwards, the price) accordingly. This is not true.

There's a follow-on effect. Such a wise committee might possibly feel a bit of political pressure to set those prices down to a level of nice and cheap, the better to make everyone happy. Drug discovery being what it is, it would take some years before all the gears ground to a halt, but I worry that something like this might be the real result. I find my libertarian impulses coming to the fore whenever I think about this situation, and that prompts me to break out an often-used quote from Robert Heinlein:

Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded — here and there, now and then — are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty.

This is known as "bad luck."

Comments (44) + TrackBacks (0) | Category: Drug Development | Drug Prices | Why Everyone Loves Us

April 2, 2013

Tecfidera's Price

Email This Entry

Posted by Derek

Let us take up the case of Tecfidera, the new Biogen/Idec drug for multiple sclerosis, known to us chemists as dimethyl fumarate. It joins the (not very long) list of industrial chemicals (the kind that can be purchased in railroad-car sizes) that are also approved pharmaceuticals for human use. The MS area has seen this before, interestingly.

A year's supply of Tecfidera will set you (or your insurance company) back $54,900. That's a bit higher than many analysts were anticipating, but that means "a bit over $50,000". The ceiling is about $60,000, which is what Novartis's Gilenya (fingolimod) goes for, and Biogen wanted to undercut them a bit. So, 55 long ones for a year's worth of dimethyl fumarate pills - what should one think about that?

Several thoughts come to mind, the first one being (probably) "Fifty thousand dollars for a bunch of dimethyl fumarate? Who's going to stand for that?" But we have an estimate for the second part of that question - Biogen thinks that quite a few people are going to stand for it, rather than stand for fingolimod. I'm sure they've put quite a bit of time and effort into thinking about that price, and that it's their best estimate of maximum profit. How, exactly, do they get away with that? Simple. They get away with it because they were willing to take the compound through clinical trials in MS patients, find out if it's tolerated and if it's efficacious, figure out the dosing regimen, and get it approved for this use by the FDA. If you or I had been willing to do that, and had been able to round up the money and resources, then we would also have the ability to charge fifty grand a year for it (or whatever we thought fit, actually).

What, exactly, gave them the idea that dimethyl fumarate might be good for multiple sclerosis? As it turns out, a German physician described its topical use for psoriasis back in 1959, and a formulation of the compound as a cream (along with some monoesters) was eventually studied clinically by a small company in Switzerland called Fumapharm. This went on the market in Germany in the early 1990s, but the company did not have either the willingness or desire to extend their idea outside that region. But since dimethyl fumarate appears to work on psoriasis by modulating the immune system somehow, it did occur to someone that it might also be worth looking at in multiple sclerosis. Biogen began developing dimethyl fumarate for that purpose with Fumapharm, and eventually bought them outright in 2006 as things began to look more promising.

In other words, the connection of dimethyl fumarate as a possible therapy for MS had been out there, waiting to be made, since before many of us were born. Generations of drug developers had their chances to see it. Every company in the business had a chance to get interested in Fumapharm back in the late 80s and early 90s. But Biogen did, and in 2013 that move has paid off.

Now we come to two more questions, the first of which is "Should that move be paying off quite so lucratively?" But who gets to decide? Watching people pay fifty grand for a year's supply of dimethyl fumarate is not, on the face of it, a very appealing sight. At least, I don't find it so. But on the other hand, cost-of-goods is (for small molecules) generally not a very large part of the expense of a given pill - a rule of thumb is that such expenses should certainly be below 5% of a drug's selling price, and preferably less than 2%. It's just that it's even less in this case, and Biogen also has fewer worries about their supply chain, presumably. The fact that this drug is dimethyl fumarate is a curiosity (and perhaps an irritating one), but that lowers Biogen's costs by a couple of thousand a year per patient compared to some other small molecule. The rest of the cost of Tecfidera has nothing to do with what the ingredients are - it's all about what Biogen had to pay to get it on the market, and (most importantly) what the market will bear. If insurance companies believe that paying fifty thousand a year for the drug is a worthwhile expense, then Biogen will agree with them, too.
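
For what it's worth, that rule of thumb is easy to put numbers on. The sketch below is illustrative only - the percentages are the generic guideline above, not Biogen's actual cost structure:

```python
# Putting numbers on the cost-of-goods rule of thumb (illustrative only):
list_price = 54_900                               # Tecfidera, per patient per year
low, high = 0.02 * list_price, 0.05 * list_price  # the 2-5% guideline
print(f"2-5% of list price: ${low:,.0f} - ${high:,.0f} per patient per year")
# -> roughly $1,100 - $2,700, which is the "couple of thousand a year" that a
#    commodity-priced API lets Biogen avoid relative to a typical small molecule.
```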

The second question is divorced from words like "should", and moves to the practical question of "can". The topical fumarate drug in Europe apparently had fairly wide "homebrew" use among psoriasis patients in other countries, and one has to wonder just a bit about that happening with Tecfidera. Biogen Idec certainly has method-of-use patents, but not composition-of-matter, so it's going to be up to them to try to police this. I found the Makena situation more irritating than this one (and the colchicine one, too), because in those cases, the exact drugs for the exact indications had already been on the market. (Dimethyl fumarate was not a drug for MS until Biogen proved it so, by contrast). But KV Pharmaceuticals had to go after people who were compounding the drug, anyway, and I have to wonder if a secondary market in dimethyl fumarate might develop. I don't know the details of its formulation (and I'm sure that Biogen will make much of it being something that can't be replicated in a basement), but there will surely be people who try it.

Comments (58) + TrackBacks (0) | Category: Drug Development | Drug Prices | The Central Nervous System

March 29, 2013

Sirtuins Live On at GSK

Email This Entry

Posted by Derek

Well, GSK is shutting down the Sirtris operation in Cambridge, but sirtuins apparently live on. I'm told that the company is advertising for chemists and biologists to come to Pennsylvania to staff the effort, and in this market, they'll have plenty of takers. We'll have the sirtuin drug development saga with us for a while yet. And I'm glad, actually, and no, not just because it gives me something to write about. I'd like to know what sirtuins actually are capable of doing in humans, and I'd like to see a drug or two come out of this. What the odds of that are, though, I couldn't say. . .

Comments (18) + TrackBacks (0) | Category: Drug Development

March 27, 2013

A Therapy Named After You?

Email This Entry

Posted by Derek

Back last fall I wrote about Prof. Magnus Essand and his oncolytic virus research. He's gotten a good amount of press coverage, and has been trying all sorts of approaches to get further work funded. But here's one that I hadn't thought of: Essand and his co-workers are willing to name the therapy after anyone who can pony up the money to get it into a 20-patient human trial.

The more I think about that, the less problem I have with it. This looks at first like a pure angel investor move, and if people want to take a crack at something like this with their own cash, let them do the due diligence and make the call. Actually, Essand believes that his current virus is unpatentable (due to prior publication), so this is less of an angel investment and more sheer philanthropy. But I have no objections at all to that, either.

Update: here's more on the story.

Comments (12) + TrackBacks (0) | Category: Cancer | Clinical Trials | Drug Development

The DNA-Encoded Library Platform Yields A Hit

Email This Entry

Posted by Derek

I wrote here about DNA-barcoding of huge (massively, crazily huge) combichem libraries, a technology that apparently works, although one can think of a lot of reasons why it shouldn't. This is something that GlaxoSmithKline bought by acquiring Praecis some years ago, and there are others working in the same space.

For outsiders, the question has long been "What's come out of this work?" And there is now at least one answer, published in a place where one might not notice it: this paper in Prostaglandins and Other Lipid Mediators. It's not a journal whose contents I regularly scan. But this is a paper from GSK on a soluble epoxide hydrolase inhibitor, and therein one finds:

sEH inhibitors were identified by screening large libraries of drug-like molecules, each attached to a DNA “bar code”, utilizing DNA-encoded library technology [10] developed by Praecis Pharmaceuticals, now part of GlaxoSmithKline. The initial hits were then synthesized off of DNA, and hit-to-lead chemistry was carried out to identify key features of the sEH pharmacophore. The lead series were then optimized for potency at the target, selectivity and developability parameters such as aqueous solubility and oral bioavailability, resulting in GSK2256294A. . .

That's the sum of the med-chem in the article, which certainly compresses things, and I hope that we see a more complete writeup at some point from a chemistry perspective. Looking at the structure, though, this is a triaminotriazine-derived compound (as in the earlier work linked to in the first paragraph), so yes, you apparently can get interesting leads that way. How different this compound is from the screening hit is a good question, but it's noteworthy that a diaminotriazine's worth of its heritage is still present. Perhaps we'll eventually see the results of the later-generation chemistry (non-triazine).

Comments (12) + TrackBacks (0) | Category: Chemical Biology | Chemical News | Drug Assays | Drug Development

The NIH, Pfizer, and Senator Wyden

Email This Entry

Posted by Derek

Senator Ron Wyden (D-Oregon) seems to be the latest champion of the "NIH discovers drugs and Pharma rips them off" viewpoint. Here's a post from John LaMattina on Wyden's recent letter to Francis Collins. The proximate cause of all this seems to be the Pfizer JAK3 inhibitor:

Tofacitinib (Xeljanz), approved last November by the U.S. Food and Drug Administration, is nearing the market as the first oral medication for the treatment of rheumatoid arthritis. Given that the research base provided by the National Institutes of Health (NIH) culminated in the approval of Xeljanz, citizens have the right to be concerned about the determination of its price and what return on investment they can expect. While it is correct that the expenses of drug discovery and preclinical and clinical development were fully undertaken by Pfizer, taxpayer-funded research was foundational to the development of Xeljanz.

I think that this is likely another case where people don't quite realize the steepness of the climb between "X looks like a great disease target" and "We now have an FDA-approved drug targeting X". Here's more from Wyden's letter:

Developing drugs in America remains a challenging business, and NIH plays a critically important role by doing research that might not otherwise get done by the private sector. My bottom line: When taxpayer-funded research is commercialized, the public deserves a real return on its investment. With the price of Xeljanz estimated at about $25,000 a year and annual sales projected by some industry experts as high as $2.5 billion, it is important to consider whether the public investment has assured accessibility and affordability.

This is going to come across as nastier than I intend it to, but my first response is that the taxpayer's return on this was that they got a new drug where there wasn't one before. And via the NIH-funded discoveries, the taxpayers stimulated Pfizer (and many other companies) to spend huge amounts of money and effort to turn the original discoveries in the JAK field into real therapies. I value knowledge greatly, but no human suffering whatsoever was relieved by the knowledge alone that JAK3 appeared to play a role in inflammation. What was there was the potential to affect the lives of patients, and that potential was realized by Pfizer spending its own money.

And not just Pfizer. Let's not forget that the NIH entered into research agreements with many other companies, and that the list of JAK3-related drug discovery projects is a long one. And keep in mind that not all of them, by any means, have ever earned a nickel for the companies involved, and that many of them never will. As for Pfizer, Xeljanz has been on the market for less than six months, so it's too early to say how the drug will do. But it's not a license to print money, and is in a large, extremely competitive market. And should it run into trouble (which I certainly hope doesn't happen), I doubt if Senator Wyden will be writing letters seeking to share some of the expenses.

Comments (35) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development | Drug Prices | Regulatory Affairs

March 26, 2013

Automated Med-Chem, At Last?

Email This Entry

Posted by Derek

I've written several times about flow chemistry here, and a new paper in J. Med. Chem. prompts me to return to the subject. This, though, is the next stage in flow chemistry - more like flow med-chem:

Here, we report the application of a flow technology platform integrating the key elements of structure–activity relationship (SAR) generation to the discovery of novel Abl kinase inhibitors. The platform utilizes flow chemistry for rapid in-line synthesis, automated purification, and analysis coupled with bioassay. The combination of activity prediction using Random-Forest regression with chemical space sampling algorithms allows the construction of an activity model that refines itself after every iteration of synthesis and biological result.

Now, this is the point at which people start to get either excited or fearful. (I sometimes have trouble telling the difference, myself). We're talking about the entire early-stage optimization cycle here, and the vision is of someone topping up a bunch of solvent reservoirs, hitting a button, and leaving for the weekend in the expectation of finding a nanomolar compound waiting on Monday. I'll bet you could sell that to AstraZeneca for some serious cash, and to be fair, they're not the only ones who would bite, given a sufficiently impressive demo and slide deck.

But how close to this Lab of the Future does this work get? Digging into the paper, we have this:

Initially, this approach mirrors that of a traditional hit-to-lead program, namely, hit generation activities via, for example, high-throughput screening (HTS), other screening approaches, or prior art review. From this, the virtual chemical space of target molecules is constructed that defines the boundaries of an SAR heat map. An initial activity model is then built using data available from a screening campaign or the literature against the defined biological target. This model is used to decide which analogue is made during each iteration of synthesis and testing, and the model is updated after each individual compound assay to incorporate the new data. Typically the coupled design, synthesis, and assay times are 1–2 h per iteration.

Among the key things that already have to be in place, though, are reliable chemistry (fit to generate a wide range of structures) and some clue about where to start. Those are not givens, but they're certainly not impossible barriers, either. In this case, the team (three UK groups) is looking for BCR-Abl inhibitors, a perfectly reasonable test bed. A look through the literature suggested coupling hinge-binding motifs to DFG-loop binders through an acetylene linker, as in Ariad's ponatinib. This, while not a strategy that will earn you a big raise, is not one that's going to get you fired, either. Virtual screening around the structure, followed by eyeballing by real humans, narrowed down some possibilities for new structures. Further possibilities were suggested by looking at PDB structures of homologous binding sites and seeing what sorts of things bound to them.

So already, what we're looking at is less Automatic Lead Discovery than Automatic Patent Busting. But there's a place for that, too. Ten DFG pieces were synthesized, in Sonogashira-couplable form, and 27 hinge-binding motifs with alkynes on them were readied on the other end. Then they pressed the button and went home for the weekend. Well, not quite. They set things up to try two different optimization routines, once the compounds were synthesized, run through a column, and through the assay (all in flow). One will be familiar to anyone who's been in the drug industry for more than about five minutes, because it's called "Chase Potency". The other one, "Most Active Under Sampled", tries to even out the distributions of reactants by favoring the ones that haven't been used as often. (These strategies can also be mixed). In each case, the model was seeded with binding constants of literature structures, to get things going.
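
The closed-loop idea is easier to see in code. Below is a minimal sketch - not the authors' platform - of how such a design-make-test loop might look: a Random-Forest model is refit after every assay result and used to pick the next building-block pair, alternating here between the two selection strategies described above (the paper ran them as separate campaigns and then combined). The one-hot featurization, the fake assay function, and all the numbers are placeholders for illustration.

```python
# Minimal sketch of a closed-loop "design-make-test" optimizer, assuming a
# Random-Forest activity model refit after every assay result. Everything here
# is hypothetical: the featurization, the fake assay, and the numbers.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
N_DFG, N_HINGE = 10, 27                       # building-block counts from the paper
pairs = list(itertools.product(range(N_DFG), range(N_HINGE)))

def featurize(pair):
    # One-hot encode the two building blocks; a real system would use
    # fingerprints or other descriptors.
    x = np.zeros(N_DFG + N_HINGE)
    x[pair[0]] = 1.0
    x[N_DFG + pair[1]] = 1.0
    return x

def synthesize_and_assay(pair):
    # Stand-in for the flow synthesis / purification / bioassay step.
    return float(rng.normal(6.0, 1.0))        # fake pIC50

tested, activities = [], []
for i in rng.choice(len(pairs), size=5, replace=False):   # literature "seed" data
    tested.append(pairs[i])
    activities.append(synthesize_and_assay(pairs[i]))

for iteration in range(30):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(np.array([featurize(p) for p in tested]), np.array(activities))

    untested = [p for p in pairs if p not in tested]
    preds = model.predict(np.array([featurize(p) for p in untested]))

    if iteration % 2 == 0:
        # "Chase potency": take the untested pair with the best prediction.
        choice = untested[int(np.argmax(preds))]
    else:
        # "Most active under-sampled": among the least-used building blocks,
        # take the pairing with the best prediction.
        counts = [sum(p[0] == q[0] for q in tested) +
                  sum(p[1] == q[1] for q in tested) for p in untested]
        least = min(counts)
        candidates = [i for i, c in enumerate(counts) if c == least]
        choice = untested[max(candidates, key=lambda i: preds[i])]

    tested.append(choice)
    activities.append(synthesize_and_assay(choice))       # model updates next loop
```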

The first run, which took about 30 hours, used the "Under Sampled" algorithm to spit out 22 new compounds (there were six chemistry failures) and a corresponding SAR heat map. Another run was done with "Chase Potency" in place, generating 14 more compounds. That was followed by a combined-strategy run, which cranked out 28 more compounds (with 13 failures in synthesis). Overall, there were 90 loops through the process, producing 64 new products. The best of these were nanomolar or below.

But shouldn't they have been? The deck already has to be stacked to some degree for this technique to work at all in the present stage of development. Getting potent inhibitors from these sorts of starting points isn't impressive by itself. I think the main advantage to this is the time needed to generate the compound and the assay data. Having the synthesis, purification, and assay platform all right next to each other, with compound being pumped right from one to the other, is a much tighter loop than the usual drug discovery organization runs. The usual, if you haven't experienced it, is more like "Run the reaction. Work up the reaction. Run it through a column (or have the purification group run it through a column for you). Get your fractions. Evaporate them. Check the compound by LC/MS and NMR. Code it into the system and get it into a vial. Send it over to the assay folks for the weekly run. Wait a couple of days for the batch of data to be processed. Repeat."

The science-fictional extension of this is when we move to a wider variety of possible chemistries, and perhaps incorporate the modeling/docking into the loop as well, when it's trustworthy enough to do so. Now that would be something to see. You come back in a few days and find that the machine has unexpectedly veered off into photochemical 2+2 additions with a range of alkenes, because the Chase Potency module couldn't pass up a great cyclobutane hit that the modeling software predicted. And all while you were doing something else. And that something else, by this point, is. . .what, exactly? Food for thought.

Comments (16) + TrackBacks (0) | Category: Chemical News | Drug Assays | Drug Development

March 21, 2013

AstraZeneca Makes a Deal With Moderna. Wait, Who?

Email This Entry

Posted by Derek

AstraZeneca has announced another 2300 job cuts, this time in sales and administration. That's not too much of a surprise, as the cuts announced recently in R&D make it clear that the company is determined to get smaller. But their overall R&D strategy is still unclear, other than "We can't go on like this", which is clear enough.

One interesting item has just come out, though. The company has done a deal with Moderna Therapeutics of Cambridge (US), a relatively new outfit that's trying something that (as far as I know) no one else has had the nerve to try. Moderna is trying to use messenger RNAs as therapies, to stimulate the body's own cells to produce more of some desired protein product. This is the flip side of antisense and RNA interference, where you throw a wrench into the transcription/translation machinery to cut down on some protein. Moderna's trying to make the wheels spin in the other direction.

This is the sort of idea that makes me feel as if there are two people inhabiting my head. One side of me is very excited and interested to see if this approach will work, and the other side is very glad that I'm not one of the people being asked to do it. I've always thought that messing up or blocking some process was an easier task than making it do the right thing (only more so), and in this case, we haven't even reliably shown that blocking such RNA pathways is a good way to a therapy.

I also wonder about the disease areas that such a therapy would treat, and how amenable they are to the approach. The first one that occurs to a person is "Allow Type I diabetics to produce their own insulin", but if your islet cells have been disrupted or killed off, how is that going to work? Will other cell types recognize the mRNA-type molecules you're giving, and make some insulin themselves? If they do, what sort of physiological control will they be under? Beta-cells, after all, are involved in a lot of complicated signaling to tell them when to make insulin and when to lay off. I can also imagine this technique being used for a number of genetic disorders, where we know what the defective protein is and what it's supposed to be. But again, how does the mRNA get to the right tissues at the right time? Protein expression is under so many constraints and controls that it seems almost foolhardy to think that you could step in, dump some mRNA on the process, and get things to work the way that you want them to.

But all that said, there's no substitute for trying it out. And the people behind Moderna are not fools, either, so you can be sure that these questions (and many more) have crossed their minds already. (The company's press materials claim that they've addressed the cellular-specificity problem, for example). They've gotten a very favorable deal from AstraZeneca - admittedly a rather desperate company - but good enough that they must have a rather convincing story to tell with their internal data. This is the very picture of a high-risk, high-reward approach, and I wish them success with it. A lot of people will be watching very closely.

Comments (37) + TrackBacks (0) | Category: Biological News | Business and Markets | Drug Development

March 19, 2013

Affymax In Trouble

Email This Entry

Posted by Derek

Affymax has had a long history, and it's rarely been dull. The company was founded in 1988, back in the very earliest flush of the Combichem era, and in its early years it (along with Pharmacopeia) was what people thought of when they thought of that whole approach. Huge compound libraries produced (as much as possible) by robotics, equally huge screening efforts to deal with all those compounds - this stuff is familiar to us now (all too familiar, in many cases), but it was new then. If you weren't around for it, you'll have to take the word of those who were that it could all be rather exciting and scary at first: what if the answer really was to crank out huge piles of amides, sulfonamides, substituted piperazines, aminotriazines, oligopeptides, and all the other "build-that-compound-count-now!" classes? No one could say for sure that it wasn't. Not yet.

Glaxo bought Affymax back in 1995, about the time they were buying Wellcome, which makes it seem like a long time ago, and perhaps it was. At any rate, they kept the combichem/screening technology and spun a new version of Affymax back out in 2001 to a syndicate of investors. For the past twelve years, that Affymax has been in the drug discovery and development business on its own.

And as this page shows, the story through most of those years has been peginesatide (brand name Omontys, although it was known as Hematide for a while as well). This is a synthetic peptide (with some unnatural amino acids in it, and a polyethylene glycol tail) that mimics erythropoietin. What with its cyclic nature (a couple of disulfide bonds), the unnatural residues, and the PEGylation, it's a perfect example of what you often have to do to make an oligopeptide into a drug.

But for quite a while there, no one was sure whether this one was going to be a drug or not. Affymax had partnered with Takeda along the way, and in 2010 the companies announced some disturbing clinical data in kidney patients. While Omontys did seem to help with anemia, it also seemed to have a worse safety profile than Amgen's EPO, the existing competition. The big worry was cardiovascular trouble (which had also been a problem with EPO itself and all the other attempted competition in that field). A period of wrangling ensued, with a lot of work on the clinical data and a lot of back-and-forthing with the FDA. In the end, the drug was actually approved one year ago, albeit with a black-box warning about cardiovascular safety.

But over the last year, about 25,000 patients got the drug, and unfortunately, 19 of them had serious anaphylactic reactions to it within the first half hour of exposure. Three patients died as a result, and some others nearly did. That is also exactly what one worries about with a synthetic peptide derivative: it's close enough to the real protein to do its job, but it's different enough to set off the occasional immune response, and the immune system can be very serious business indeed. Allergic responses had been noted in the clinical trials, but I think that if you'd taken bets last March, people would have picked the cardiovascular effects as the likely nemesis, not anaphylaxis. But that's not how it's worked out.

Takeda and Affymax voluntarily recalled the drug last month. And that looked like it might be all for the company, because this has been their main chance for some years now. Sure enough, the announcement has come that most of the employees are being let go. And it includes this language, which is the financial correlate of Cheyne-Stokes breathing:

The company also announced that it will retain a bank to evaluate strategic alternatives for the organization, including the sale of the company or its assets, or a corporate merger. The company is considering all possible alternatives, including further restructuring activities, wind-down of operations or even bankruptcy proceedings.

I'm sorry to hear it. Drug development is very hard indeed.

Comments (11) + TrackBacks (0) | Category: Business and Markets | Cardiovascular Disease | Drug Development | Drug Industry History | Toxicology

March 18, 2013

GlaxoSmithKline's CEO on the Price of New Drugs

Email This Entry

Posted by Derek

Well, GlaxoSmithKline CEO Andrew Witty has made things interesting. Here he is at a recent conference in London when the topic of drug pricing came up:

. . . Witty said the $1 billion price tag was "one of the great myths of the industry", since it was an average figure that includes money spent on drugs that ultimately fail.

In the case of GSK, a major revamp in the way research is conducted means the rate of return on R&D investment has increased by about 30 percent in the past three or four years because fewer drugs have flopped in late-stage testing, he said.

"If you stop failing so often you massively reduce the cost of drug development ... it's why we are beginning to be able to price lower," Witty said.

"It's entirely achievable that we can improve the efficiency of the industry and pass that forward in terms of reduced prices."

I have a feeling that I'm going to be hearing "great myths of the industry" in my email for some time, thanks to this speech, so I'd like to thank Andrew Witty for that. But here's what he's trying to get across: if you start research on a new drug, name a clinical candidate, take it to human trials and are lucky enough to have it work, then get it approved by the FDA, you will not have spent one billion dollars to get there. That, though, is the figure for a single run-through when everything works. If, on the other hand, you are actually running a drug company, with many compounds in development, and after a decade or so you total up all the money you've spent, versus the number of drugs you got onto the market, well, then you may well average a billion dollars per drug. That's because so many of them wipe out in the clinic; the money gets spent and you get no return at all.
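
To make that averaging concrete, here's a back-of-the-envelope calculation with made-up numbers - they're purely illustrative, not Herper's or Deloitte's figures:

```python
# Back-of-the-envelope illustration of the averaging, with hypothetical numbers:
cost_per_attempt = 150e6    # assumed direct spend per clinical candidate
attempts = 60               # programs started over a decade
approvals = 8               # the ones that reached the market

total_spend = cost_per_attempt * attempts
print(f"Total spend: ${total_spend / 1e9:.1f}B")
print(f"Per approved drug: ${total_spend / approvals / 1e9:.2f}B")
# -> $9.0B total, about $1.1B per approval, even though no single program
#    cost anywhere near a billion dollars on its own.
```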

That's the analysis that Matthew Herper did here (blogged about here), and that same Reuters article makes reference to a similar study done by Deloitte (and Thomson Reuters!) that found that the average cost of a new drug is indeed about $1.1 billion when you have to pay for the failures.

And believe me, we have to pay for them. A lottery ticket may only cost a dollar, but by the time you've won a million dollars playing the lottery, you will have bought a lot of losing tickets. In fact, you'll have bought far more than a million dollars' worth, or no state would run a lottery, but that's a negative-expectations game, while drug research (like any business) is supposed to be positive-expectations. Is it? Just barely, according to that same Deloitte study:

In effect, the industry is treading water in the fight to deliver better returns on the billions of dollars ploughed into the hunt for new drugs each year.

With an average internal rate of return (IRR) from R&D in 2012 of 7.2 percent - against 7.7 percent and 10.5 percent in the two preceding years - Big Pharma is barely covering its average cost of capital, estimated at around 7 percent.

Keep that in mind next time you hear about how wonderfully profitable the drug business is. And those are still better numbers than Morgan Stanley had a couple of years before, when they estimated that our internal returns probably weren't keeping up with our cost of capital at all. (Mind you, it seems that their analysis may have been a bit off, since they used their figures to recommend an "Overweight" on AstraZeneca shares, a decision that looked smart for a few months, but one that a person by now would have regretted deeply).

But back to Andrew Witty. What he's trying to say is that it doesn't have to cost a billion dollars per drug, if you don't fail so often, and he's claiming that GSK is starting to fail less often. True, or not? The people I know at the company aren't exactly breaking out the party hats, for what that's worth, and it looks like the company might have to add the entire Sirtris investment to the "sunk cost" pile. Overall, I think it's too soon to call any corners as having been turned, even if GSK does turn out to have been doing better. Companies can have runs of good fortune and bad, and the history of the industry is absolutely littered with the press releases of companies who say that they've Turned A New Page of Success and will now be cranking out the wonder drugs like nobody's business. If they keep it up, GSK will have plenty of chances to tell us all about it.

Now, one last topic. What about Witty's statement that this new trend to success will allow drug prices themselves to come down? That's worth thinking about all by itself, on several levels - here are my thoughts, in no particular order:

(1) To a first approximation, that's true. If you're selling widgets, your costs go down, you can cut prices, and you can presumably sell more widgets. But as mentioned above, I'm not yet convinced that GSK's costs are truly coming down yet. And see point three below, because GSK and the rest of us in this business are not, in fact, selling widgets.

(2) Even if costs are coming down, counterbalancing that are several other long-term trends, such as the low-hanging fruit problem. As we move into harder and harder sorts of targets and disease areas, I would assume that the success rate of drugs in the clinic will be hard pressed to improve. This is partly a portfolio management problem, and can be ameliorated and hedged against to some degree, but it is, I think, a long-term concern, unless we start to make some intellectual headway on these topics, and speed the day. On the other side of this balance are the various efforts to rationalize clinical trials and so on.

(3) A larger factor is that the market for innovative drugs is not very sensitive to price. This is a vast topic, covered at vast length in many places, but it comes down to there being (relatively) few entrants in any new therapeutic space, and to people, and governments, and insurance companies, being willing to spend relatively high amounts of money for human health. (The addition of governments into that list means also that various price-fixing schemes distort the market in all kinds of interesting ways as well). At any rate, price mechanisms don't work like classical econ-textbook widgets in the drug business.

So I'm not sure, really, how this will play out. GSK has only modest incentives to lower the prices of its drugs. Such a move won't, in many markets, allow them to sell more drugs to make up the difference on volume. And actually, the company will probably be able to offset some of the loss via the political capital that comes from talking about any such price changes. We might be seeing just that effect with Witty's speech.

Comments (30) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Prices

March 14, 2013

Does Baldness Get More Funding Than Malaria?

Email This Entry

Posted by Derek

OK, let's fact-check Bill Gates today, shall we?

Capitalism means that there is much more research into male baldness than there is into diseases such as malaria, which mostly affect poor people, said Bill Gates, speaking at the Royal Academy of Engineering's Global Grand Challenges Summit.

"Our priorities are tilted by marketplace imperatives," he said. "The malaria vaccine in humanist terms is the biggest need. But it gets virtually no funding. But if you are working on male baldness or other things you get an order of magnitude more research funding because of the voice in the marketplace than something like malaria."

Gates' larger point, that tropical diseases are an example of market failure, stands. But I don't think this example does. I have never yet worked on any project in industry that had anything to do with baldness, while I have actually touched on malaria. Looking around the scientific literature, I see many more publications on potential malaria drugs than I see potential baldness drugs (in fact, I'm not sure if I've ever seen anything on the latter, after minoxidil - and its hair-growth effects were discovered by accident during a cardiovascular program). Maybe I'm reading the wrong journals.

But then, Gates also seems to buy into the critical-shortage-of-STEM idea:

With regards to encouraging more students into STEM education, Gates said: "It's kind of surprising that we have such a deficit of people going into those fields. Look at where you can have the most interesting job that pays well and will have impact on society -- all three of those things line up to say science and engineering and yet in most rich countries we see decline. Asia is an exception."

The problem is, there aren't as many of these interesting, well-paying jobs around as there used to be. Any discussion of the STEM education issue that doesn't deal with that angle is (to say the least) incomplete.

Comments (28) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Infectious Diseases

March 13, 2013

Who to Manufacture an API?

Email This Entry

Posted by Derek

Here's a very practical question indeed, sent in by a reader:

After a few weeks of trying to run down a possible API manufacturer for our molecule, I am stuck. We have a straightforward proven synthesis of a 300 weight lipid and need only 10 kg for our trials. Any readers have suggestions? Alternately, we will do it ourselves and find someone to help us with the documentation. Suggestions that way, too?

That's worth asking: who's your go-to for things like this, a reliable contract supplier for high-quality material with all the documentation? I'll say up front that I don't know who's been contacted already, or why the search has been as difficult as it has, but I'll see if I can get more details. Suggestions welcome in the comments. . .

Update: this post has generated a lot of very sound advice. Anyone who's approaching this stage for the first time (as my correspondent clearly is) is looking at a significant expenditure for something that could make or break a small research effort. I'm putting this note up for people who find this post in future searches - read the comments; you'll be glad you did.

Comments (67) + TrackBacks (0) | Category: Drug Development

Getting Down to Protein-Protein Compounds

Email This Entry

Posted by Derek

Late last year I wrote about a paper that suggested that some "stapled peptides" might not work as well as advertised. I've been meaning to link to this C&E News article on the whole controversy - it's a fine overview of the area.

And that also gives me a chance to mention this review in Nature Chemistry (free full access). It's an excellent look at the entire topic of going after alpha-helix protein-protein interactions with small molecules. Articles like this really give you an appreciation for a good literature review - this information is scattered across the literature, and the authors here (from Leeds) have really done everyone interested in this topic a favor by collecting all of it and putting it into context.

As they say, you really have two choices if you're going after this sort of protein-protein interaction (well, three, if you count chucking the whole business and going to truck-driving school, but that option is not specific to this field). You can make something that's helical itself, so as to present the side chains in what you hope will be the correct orientation, or you can go after some completely different structure that just happens to arrange these groups into the right spots (but has no helical architecture itself).

Neither of these is going to lead to attractive molecules. The authors address this problem near the end of the paper, saying that we may be facing a choice here: make potent inhibitors of protein-protein interactions, or stay within Lipinski-guideline property space. Doing both at the same time just may not be possible. On the evidence so far, I think they're right. How we're going to get such things into cells, though, is a real problem (note this entry last fall on macrocyclic compounds, where the same concern naturally comes up). Since we don't seem to know much about why some compounds make it into cells and some don't, perhaps the way forward (for now) is to find a platform where as many big PPI candidates as possible can be evaluated quickly for activity (both in the relevant protein assay and then in cells). If we can't be smart enough, or not yet, maybe we can go after the problem with brute force.

With enough examples of success, we might be able to get a handle on what's happening. This means, though, that we'll have to generate a lot of complex structures quickly and in great variety, and if that's not a synthetic organic chemistry problem, I'd like to know what is. This is another example of a theme I come back to - that there are many issues in drug discovery that can only be answered by cutting-edge organic chemistry. We should be attacking these and making a case for how valuable the chemical component is, rather than letting ourselves be pigeonholed as a bunch of folks who run Suzuki couplings all day long and who might as well be outsourced to Fiji.

Comments (10) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics

February 8, 2013

All Those Drug-Likeness Papers: A Bit Too Neat to be True?

Email This Entry

Posted by Derek

There's a fascinating paper out on the concept of "drug-likeness" that I think every medicinal chemist should have a look at. It would be hard to count the number of publications on this topic over the last ten years or so, but what if we've been kidding ourselves about some of the main points?

The big concept in this area is, of course, the Lipinski criteria, or Rule of Five. Here's what the authors, Peter Kenny and Carlos Montanari of the University of São Paulo, have to say:

No discussion of drug-likeness would be complete without reference to the influential Rule of 5 (Ro5) which is essentially a statement of property distributions for compounds taken into Phase II clinical trials. The focus of Ro5 is oral absorption and the rule neither quantifies the risks of failure associated with non-compliance nor provides guidance as to how sub-optimal characteristics of compliant compounds might be improved. It also raises a number of questions. What is the physicochemical basis of Ro5's asymmetry with respect to hydrogen bond donors and acceptors? Why is calculated octanol/water partition coefficient (ClogP) used to specify Ro5's low polarity limit when the high polarity cut off is defined in terms of numbers of hydrogen bond donors and acceptors? It is possible that these characteristics reflect the relative inability of the octanol/water partitioning system to 'see' donors (Fig. 1) and the likelihood that acceptors (especially as defined for Ro5) are more common than donors in pharmaceutically-relevant compounds. The importance of Ro5 is that it raised awareness across the pharmaceutical industry about the relevance of physicochemical properties. The wide acceptance of Ro5 provided other researchers with an incentive to publish analyses of their own data and those who have followed the drug discovery literature over the last decade or so will have become aware of a publication genre that can be described as 'retrospective data analysis of large proprietary data sets' or, more succinctly, as 'Ro5 envy'.

There, fellow med-chemists, doesn't this already sound like something you want to read? Thought so. Here, have some more:

Despite widespread belief that control of fundamental physicochemical properties is important in pharmaceutical design, the correlations between these and ADMET properties may not actually be as strong as is often assumed. The mere existence of a trend is of no interest in drug discovery and strengths of trends must be known if decisions are to be accurately described as data-driven. Although data analysts frequently tout the statistical significance of the trends that their analysis has revealed, weak trends can be statistically significant without being remotely interesting. We might be confident that the coin that lands heads up for 51 % of a billion throws is biased but this knowledge provides little comfort for the person charged with predicting the result of the next throw. Weak trends can be beaten and when powered by enough data, even the feeblest of trends acquires statistical significance.

So, where are the authors going with all this entertaining invective? (Not that there's anything wrong with that; I'm the last person to complain). They're worried that the transformations that primary drug property data have undergone in the literature have tended to exaggerate the correlations between these properties and the endpoints that we care about. The end result is pernicious:

Correlation inflation becomes an issue when the results of data analysis are used to make real decisions. To restrict values of properties such as lipophilicity more stringently than is justified by trends in the data is to deny one’s own drug-hunting teams room to maneuver while yielding the initiative to hungrier, more agile competitors.

They illustrate this by reference to synthetic data sets, showing how one can get rather different impressions depending on how the numbers are handled along the way. Representing sets of empirical points by their average values, for example, can cause the final correlations to appear more robust than they really are. That, the authors say, is just what happened in this study from 2006 ("Can we rationally design promiscuous drugs?") and in this one from 2007 ("The influence of drug-like concepts on decision-making in medicinal chemistry"). The complaint is that showing a correlation between cLogP and median compound promiscuity does not imply that there is one between cLogP and compound promiscuity per se. And the authors note that the two papers manage to come to opposite conclusions about the effect of molecular weight, which does make one wonder. The "Escape from flatland" paper from 2009 and the "ADMET rules of thumb" paper from 2008 (mentioned here) also come in for criticism on this point - binning data from a large continuous set, averaging within the bins, and then treating those averages as real objects for statistical analysis. One's conclusions depend strongly on how many bins one uses. Here's a specific take on that last paper:

The end point of the G2008 analysis is "a set of simple interpretable ADMET rules of thumb" and it is instructive to examine these more closely. Two classifications (ClogP<4 and MW<400 Da; ClogP>4 or MW>400 Da) were created and these were combined with the four ionization state classifications to define eight classes of compound. Each combination of ADMET property and compound class was labeled according to whether the mean value of the ADMET property was lower than, higher than or not significantly different from the average for all compounds. Although the rules of thumb are indeed simple, it is not clear how useful they are in drug discovery. Firstly, the rules only say whether or not differences are significant and not how large they are. Secondly, the rules are irrelevant if the compounds of interest are all in the same class. Thirdly, the rules predict abrupt changes in ADMET properties going from one class to another. For example, the rules predict significantly different aqueous solubility for two neutral compounds with MW of 399 and 401 Da, provided that their ClogP values do not exceed 4. It is instructive to consider how the rules might have differed had values of logP and MW of 5 and 500 Da (or 3 and 300 Da) been used to define them instead of 4 and 400 Da.
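
To make the binning complaint concrete, here's a minimal sketch with synthetic data that I've made up (it is not from the Kenny and Montanari paper): correlate a weak, noisy trend directly, then correlate the averages of the same data after binning, and watch the apparent relationship tighten up.

```python
# Correlation inflation in miniature: a weak raw-data trend looks much stronger
# once the points are binned and only the bin averages are correlated.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(0, 5, n)              # stand-in for, say, cLogP
y = 0.2 * x + rng.normal(0, 1.0, n)   # weakly related endpoint

r_raw = np.corrcoef(x, y)[0, 1]

# Bin by x and correlate the bin means (the move the authors object to).
edges = np.linspace(0, 5, 6)
idx = np.digitize(x, edges[1:-1])
x_means = np.array([x[idx == i].mean() for i in range(5)])
y_means = np.array([y[idx == i].mean() for i in range(5)])
r_binned = np.corrcoef(x_means, y_means)[0, 1]

print(f"raw r = {r_raw:.2f}, binned-means r = {r_binned:.2f}")
# Roughly 0.3 for the raw points versus nearly 1.0 for the bin averages -
# same data, very different impression of predictive power.
```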

These problems also occur in graphical representations of all these data, as you'd imagine, and the authors show several of these that they object to. A particular example is this paper from 2010 ("Getting physical in drug discovery"). Three data sets, whose correlations in their primary data do not vary significantly, generate very different looking bar charts. And that leads to this comment:

Both the MR2009 and HY2010 studies note the simplicity of the relationships that the analysis has revealed. Given that drug discovery would appear to be anything but simple, the simplicity of a drug-likeness model could actually be taken as evidence for its irrelevance to drug discovery. The number of aromatic rings in a molecule can be reduced by eliminating rings or by eliminating aromaticity and the two cases appear to be treated as equivalent in both the MR2009 and HY2010 studies. Using the mnemonic suggested in MR2009 one might expect to make a compound more developable by replacing a benzene ring with cyclohexadiene or benzoquinone.

The authors wind up by emphasizing that they're not saying that things like lipophilicity, aromaticity, molecular weight and so on are unimportant - far from it. What they're saying, though, is that we need to be aware of how strong these correlations really are so that we don't fool ourselves into thinking that we're addressing our problems, when we really aren't. We might want to stop looking for huge, universally applicable sets of rules and take what we can get in smaller, local data sets within a given series of compounds. The paper ends with a set of recommendations for authors and editors - among them, always making primary data sets part of the supplementary material, not relying on purely graphical representations to make statistical points, and a number of more stringent criteria for evaluating data that have been partitioned into bins. They say that they hope that their paper "stimulates debate", and I think it should do just that. It's certainly given me a lot of things to think about!

Comments (13) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico | The Scientific Literature

February 7, 2013

Addex Cuts Back: An Old Story, Told Again

Email This Entry

Posted by Derek

Addex Therapeutics has been trying to develop allosteric modulators as drugs. That's a worthy goal (albeit a tough one) - "allosteric" is a term that covers an awful lot of ground. The basic definition is a site that affects the activity of its protein, but is separate from the active or ligand-binding site itself. All sorts of regulatory sites, cofactors, protein-protein interaction motifs, and who knows what else can fit into that definition. It's safe to say that allosteric mechanisms account for a significant number of head-scratching assay results, but unraveling them can be quite a challenge.

It's proving to be one for Addex. They've announced that they're going to focus on a few clinical programs, targeting orphan diseases in the major markets, and to do that, well. . .:

In executing this strategy and to maximize potential clinical success in at least two programs over the next 12 months, the company will reduce its overall cost structure, particularly around its early-stage discovery efforts, while maintaining its core competency and expertise in allosteric modulation. The result will be a development-focused company with a year cash runway. In addition, the company will seek to increase its cash position through non-dilutive partnerships by monetizing its platform capability as well as current discovery programs via licensing and strategic transactions.

That is the sound of the hatches being battened down. And that noise can be heard pretty often in the small-company part of the drug business. Too often, it comes down to "We can advance this compound in the clinic, enough to try to get more money from someone, or we can continue to do discovery research. But not both. Not now." Some companies have gone through this cycle several times, laying off scientists and then eventually hiring people back (sometimes some of the same people) when the money starts flowing again. But in the majority of these cases, I'd say that this turns out to be the beginning of the end. The failure rates in the clinic see to that - if your very next compounds, the only things you have on hand, all have to work there for the company to survive, then the odds are not with you.

But that's what every small biopharma company faces: something has to work, or the money will run out. A lot of the managing of such an outfit consists of working out strategies to keep things going long enough. You can start from a better position than usual, if that's an option. You can pursue deals with larger companies early on, if you actually have something that someone might want (but you won't get as good a deal as you would have later, if what you're partnering actually works out). You can beat all sorts of bushes to raise cash, and try all sorts of techniques to keep it from being spent so quickly, or on the wrong things (as much as you can tell what those are).

But eventually, something has to work, or the music stops. Ditching everything except the clinical candidates is one of the last resorts, so I wish Addex good luck, which they (and all of us) will need.

Comments (14) + TrackBacks (0) | Category: Business and Markets | Drug Development

January 25, 2013

CETP, Alzheimer's, Monty Hall, and Roulette. And Goats.

Email This Entry

Posted by Derek

CETP, now there's a drug target that has incinerated a lot of money over the years. Here's a roundup of compounds I posted on back last summer, with links to their brutal development histories. I wondered here about what's going to happen with this class of compounds: will one ever make it as a drug? If it does, will it just end up telling us that there are yet more complications in human lipid handling that we didn't anticipate?

Well, Merck and Lilly are continuing their hugely expensive, long-running attempts to answer these questions. Here's an interview with Merck's Ken Frazier in which he sounds realistic - that is, nervous:

Merck CEO Ken Frazier, speaking in Davos on the sidelines of the World Economic Forum, said the U.S. drugmaker would continue to press ahead with clinical research on HDL raising, even though the scientific case so far remained inconclusive.

"The Tredaptive failure is another piece of evidence on the side of the scale that says HDL raising hasn't yet been proven," he said.

"I don't think by any means, though, that the question of HDL raising as a positive factor in cardiovascular health has been settled."

Tredaptive, of course, hit the skids just last month. And while its mechanism is not directly relevant to CETP inhibition (I think), it does illustrate how little we know about this area. Merck's anacetrapib is one of the ugliest-looking drug candidates I've ever seen (ten fluorines, three aryl rings, no hydrogen bond donors in sight), and Lilly's compound is only slightly more appealing.

But Merck finds itself having to bet a large part of the company's future in this area. Lilly, for its part, is betting similarly, and most of the rest of their future is being plunked down on Alzheimer's. And these two therapeutic areas have a lot in common: they're both huge markets that require huge clinical trials and rest on tricky fundamental biology. The huge market part makes sense; that's the only way that you could justify the amount of development needed to get a compound through. But the rest of the setup is worth some thought.

Is this what Big Pharma has come to, then? Placing larger and larger bets in hopes of a payoff that will make it all work out? If this were roulette, I'd have no trouble diagnosing someone who was using a Martingale betting system. There are a few differences, although I'm not sure how (or if) they cancel out. For one thing, the Martingale gambler is putting down larger and larger amounts of money in an attempt to win the same small payout (the sum of the initial bet!). Pharma is at least chasing a larger jackpot. But the second difference is that the house advantage at roulette is a fixed 5.26% (at least in the US), which is ruinous, but is at least a known quantity.
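
As an aside, the Martingale arithmetic is easy to check by simulation. Here's a quick sketch with toy numbers of my own (nothing to do with any actual pipeline or portfolio): keep doubling an even-money roulette bet until you either win back the one-unit stake or exhaust the bankroll.

```python
# Martingale on an American double-zero wheel: most attempts win one unit,
# the rare bust is catastrophic, and the average loss still tracks the
# 5.26% house edge on the money actually wagered.
import random

P_WIN = 18 / 38
HOUSE_EDGE = 2 / 38

def martingale_attempt(bankroll=1023, base_bet=1):
    """One attempt to win a single unit; returns (net result, total wagered)."""
    start, bet, wagered = bankroll, base_bet, 0
    while bankroll >= bet:
        bankroll -= bet
        wagered += bet
        if random.random() < P_WIN:
            return bankroll + 2 * bet - start, wagered   # got the unit back
        bet *= 2                                         # double up and retry
    return bankroll - start, wagered                     # busted

random.seed(1)
trials = [martingale_attempt() for _ in range(200_000)]
mean_net = sum(t[0] for t in trials) / len(trials)
mean_wagered = sum(t[1] for t in trials) / len(trials)
print(f"average net per attempt: {mean_net:.2f} units")
print(f"house edge x average wagered: {-HOUSE_EDGE * mean_wagered:.2f} units")
# The two printed figures come out close to each other, and both are negative.
```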

But mentioning "known quantities" brings up a third difference. The rules of casino games don't change (unless an Ed Thorp shows up, which was a one-time situation). The odds of drug discovery are subject to continuous change as we acquire more knowledge; it's more like the Monty Hall Paradox. The question is, have the odds changed enough in CETP (or HDL-raising therapies in general) or Alzheimer's to make this a reasonable wager?

For the former, well, maybe. There are theories about what went wrong with torcetrapib (a slight raising of blood pressure being foremost, last I heard), and Merck's compound seems to be dodging those. Roche's failure with dalcetrapib is worrisome, though, since the official reason there was sheer lack of efficacy in the clinic. And it's clear that there's a lot about HDL and LDL that we don't understand, both their underlying biology and their effects on human health when they're altered. So (to put things in terms of the Monty Hall problem), a tiny door has been opened a crack, and we may have caught a glimpse of some goat hair. But it could have been a throw rug, or a gorilla; it's hard to say.

What about Alzheimer's? I'm not even sure if we've learned as much as we have with CETP. The immunological therapies have been hard to draw conclusions from, because hey, it's the immune system. Every antibody is different, and can do different things. But the mechanistic implications of what we've seen so far are not that encouraging, unless, of course, you're giving interviews as an executive of Eli Lilly. The small-molecule side of the business is a bit easier to interpret; it's an unrelieved string of failures, one crater after another. We've learned a lot about Alzheimer's therapies, but what we've mostly learned is that nothing we've tried has worked much. In Monty Hall terms, the door has stayed shut (or perhaps has opened every so often to provide a terrifying view of the Void). At any rate, the flow of actionable goat-delivered information has been sparse.

Overall, then, I wonder if we really are at the go-for-the-biggest-markets-and-hope-for-the-best stage of research. The big companies are the ones with enough resources to tackle the big diseases; that's one reason we see them there. But the other reason is that the big diseases are the only things that the big companies think can rescue them.

Comments (4) + TrackBacks (0) | Category: Alzheimer's Disease | Cardiovascular Disease | Clinical Trials | Drug Development | Drug Industry History

January 24, 2013

Daniel Vasella Steps Down at Novartis

Email This Entry

Posted by Derek

So Daniel Vasella, longtime chairman of Novartis, has announced that he's stepping down. (He'll be replaced by Joerg Reinhardt, ex-Bayer, who was at Novartis before that). Vasella's had a long run. People on the discovery side of the business will remember him especially for the decision to base the company's research in Cambridge, which has led to (or at the very least accelerated the process of) many of the other big companies putting up sites there as well. Novartis is one of the most successful large drug companies in the world, avoiding the ferocious patent expiration woes of Lilly and AstraZeneca, and avoiding the gigantic merger disruptions of many others.

That last part, though, is perhaps an accident. Novartis did buy a good-sized stake in Roche at one point, and has apparently made, in vain, several overtures over the years to the holders of Roche's voting shares (many of whom are named "Hoffman-LaRoche" and live in very nice parts of Switzerland). And Vasella did oversee the 1996 merger between Sandoz and Ciba-Geigy that created Novartis itself, and he wasn't averse to big acquisitions per se, as the 2006 deal to buy Chiron shows.

It's those very deals, though, that have some investors cheering his departure. Reading that article, which is written completely from the investment side of the universe, is quite interesting. Try this out:

“He’s associated with what we can safely say are pretty value-destructive acquisitions,” said Eleanor Taylor-Jolidon, who manages about 400 million Swiss francs at Union Bancaire Privee in Geneva, including Novartis shares. “Everybody’s hoping that there’s going to be a restructuring now. I hope there will be a restructuring.” . . .

. . .“The shares certainly reacted to the news,” Markus Manns, who manages a health-care fund that includes Novartis shares at Union Investment in Frankfurt, said in an interview. “People are hoping Novartis will sell the Roche stake or the vaccines unit and use the money for a share buyback.”

Oh yes indeed, that's what we're all hoping for, isn't it? A nice big share buyback? And a huge restructuring, one that will stir the pot from bottom to top and make everyone wonder if they'll have a job or where it might be? Speed the day!

No, don't. All this illustrates the different world views that people bring to this business. The investors are looking to maximize their returns - as they should - but those of us in research see the route to maximum returns as going through the labs. That's what you'd expect from us, of course, but are we wrong? A drug company is supposed to find and develop drugs, and how else are you to do that? The investment community might answer that differently: a public drug company, they'd say, is like any other public company. It is supposed to produce value for its shareholders. If it can do that by producing drugs, then great, everything's going according to plan - but if there are other more reliable ways to produce that value, then the company should (must, in fact) avail itself of them.

And there's the rub. Most methods of making a profit are more reliable than drug discovery. Our returns on invested capital for internal projects are worrisome. Even when things work, it's a very jumpy, jerky business, full of fits and starts, with everything new immediately turning into a ticking bomb of a wasting asset due to patent expiry. Some investors understand this and are willing to put up with it in the hopes of getting in on something big. Other investors just want the returns to be smoother and more predictable, and are impatient for the companies to do something to make that happen. And others just avoid us entirely.

Comments (18) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

January 23, 2013

Eating A Whole Bunch of Random Compounds

Email This Entry

Posted by Derek

Reader Andy Breuninger, from completely outside the biopharma business, sends along what I think is an interesting question, and one that bears on a number of issues:

A question has been bugging me that I hope you might answer.

My understanding is that a lot of your work comes down to taking a seed molecule and exploring a range of derived molecules using various metrics and tests to estimate how likely they are to be useful drugs.

My question is this: if you took a normal seed molecule and a standard set of modifications, generated a set of derived molecules at random, and ate a reasonable dose of each, what would happen? Would 99% be horribly toxic? Would 99% have no effect? Would their effects be roughly the same or would one give you the hives, another nausea, and a third make your big toe hurt?

His impression of drug discovery is pretty accurate. It very often is just that: taking one or more lead compounds and running variations on them, trying to optimize potency, specificity, blood levels/absorption/clearance, toxicology, and so on. So, what do most of these compounds do in vivo?

My first thought is "Depends on where you start". There are several issues: (1) We tend to have a defined target in mind when we pick a lead compound, or (if it's a phenotypic assay that got us there), we have a defined activity that we've already seen. So things are biased right from the start; we're already looking at a higher chance of biological activity than you'd have by randomly picking something out of a catalog or drawing something on a board.

And the sort of target can make a big difference. There are an awful lot of kinase enzymes, for example, and compounds tend to cross-react with them, at least in the nearby families, unless you take a lot of care to keep that from happening. Compounds for the G-protein coupled biogenic amine receptors tend to do that, too. On the other hand, you have enzymes like the cytochromes and binding sites like the aryl hydrocarbon receptor - these things have evolved to recognize all sorts of structurally disparate stuff. So against the right (or wrong!) sort of targets, you could expect to see a wide range of potential side activities, even before hitting the random ones.

(2) Some structural classes have a lot more biological activity than others. A lot of small-molecule drugs, for example, have some sort of basic amine in them. That's an important recognition element for naturally occurring substances, and we've found similar patterns in our own compounds. So something without nitrogens at all, I'd say, has a lower chance of being active in a living organism. (Barry Sharpless seems to agree with this). That's not to say that there aren't plenty of CHO compounds that can do you harm, just that there are proportionally more CHON ones that can.

Past that rough distinction, there are pharmacophores that tend to hit a lot, sometimes to the point that they're better avoided. Others are just the starting points for a lot of interesting and active compounds - piperazines and imidazoles are two cores that come to mind. I'd be willing to bet that a thousand random piperazines would hit more things than a thousand random morpholines (other things being roughly equal, like molecular weight and polarity), and either of them would hit a lot more than a thousand random cyclohexanes.

(3) Properties can make a big difference. The Lipinski Rule-of-Five criteria come in for a lot of bashing around here, but if I were forced to eat a thousand random compounds that fit those cutoffs, versus having the option to eat a thousand random ones that didn't, I sure know which ones I'd dig my spoon into.

And finally, (4): the dose makes the poison. If you go up enough in dose, it's safe to say that you're going to see an in vivo response to almost anything, including plenty of stuff at the supermarket. Similarly, I could almost certainly eat a microgram of any compound we have in our company's files with no ill effect, although I am not motivated to put that idea to the test. Same goes for the time that you're exposed. A lot of compounds are tolerated for single-dose tox but fail at two weeks. Compounds that make it through two weeks don't always make it to six months, and so on.

How closely you look makes the poison, too. We find that out all the time when we do animal studies - a compound that seems to cause no overt effects might be seen, on necropsy, to have affected some internal organs. And one that doesn't seem to have any visible signs on the tissues can still show effects in a full histopathology workup. The same goes for blood work and other analyses; the more you look, the more you'll see. If you get down to gene-chip analysis, looking at expression levels of thousands of proteins, then you'd find that most things at the supermarket would light up. Broccoli, horseradish, grapefruit, garlic and any number of other things would kick a full expression-profiling assay all over the place.

So, back to the question at hand. My thinking is that if you took a typical lead compound and dosed it at a reasonable level, along with a large set of analogs, then you'd probably find that any of them with overt effects had a similar profile (for good or bad) to whatever the most active compound was, just less of it. The others wouldn't be as potent at the target, or wouldn't reach the same blood levels. The chances of finding some noticeable but completely different activity would be lower, but very definitely non-zero, and would be wildly variable depending on the compound class. These effects might well cluster into the usual sorts of reactions that the body has to foreign substances - nausea, dizziness, headache, and the like. Overall, odds are that most of the compounds wouldn't show much, not being potent enough at any given target, or not reaching high enough blood levels to show something, but that's also highly variable. And if you looked closely enough, you'd probably find that they all did something, at some level.

Just in my own experience, I've seen one compound out of a series of dopamine receptor ligands suddenly turn up as a vasodilator, noticeable because of the "Rudolph the Red-Nosed Rodent" effect (red ears and tail, too). I've also seen compound series that started crossing the blood-brain barrier much more effectively at some point, which led to a sharp demarcation in the tolerability studies. And I've seen many cases, when we've started looking at broader counterscreens, where the change of one particular functional group completely knocked a compound out of (or into) activity in some side assay. So you can never be sure. . .

Comments (22) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharma 101 | Pharmacokinetics | Toxicology

January 22, 2013

The Theology of Ligand Efficiency

Email This Entry

Posted by Derek

So in my post the other day about halogen bonds, I mentioned my unease at sticking in things like bromine and iodine atoms, because of the molecular weight penalty involved. Now, it's only a penalty if you're thinking in terms of ligand efficiency - potency per size of the molecule. I think that it's a very useful concept - one that was unheard of when I started in the industry, but which has now made a wide impression. The idea is that you should try, as much as possible, to make every part of your molecule worth something. Don't hang a chain off unless you're getting binding energy for it, and don't hang a big group off unless you're getting enough binding energy to make it worthwhile.

But how does one measure "worthwhile", or measure ligand efficiency in general? There are several schools of thought. One uses potency divided by molecular weight - there are different ways to make this come out to some sort of standard number, but that's the key operation. Another way, though, is to use potency divided by the number of heavy atoms. These two scales will give you answers that are quite close to each other if you're just working in the upper reaches of the periodic table - there's not much difference between carbon, nitrogen, and oxygen. Sulfur will start throwing things off, as will chlorine. But where the scales really give totally different answers, at least in common med-chem practice, is with bromine and iodine atoms. A single bromine (edit: fixed from earlier "iodine") weighs as much as a benzene ring, so the molecular-weight-based calculation takes a torpedo, while the heavy atom count just registers one more of the things.
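
To put some numbers on that, here's a minimal sketch with made-up compounds. The 1.37 kcal/mol-per-log-unit conversion and the pIC50-per-kilodalton index are common conventions, but the structures and potencies below are hypothetical, chosen only to show how a bromine and a phenyl ring of nearly the same mass get scored by the two metrics.

```python
# Two ways of calculating ligand efficiency for the same (imaginary) analogs:
# by heavy atom count and by molecular weight.

def le_heavy_atoms(pIC50, heavy_atoms):
    """Heavy-atom ligand efficiency, approx. kcal/mol per heavy atom."""
    return 1.37 * pIC50 / heavy_atoms

def le_mol_weight(pIC50, mw):
    """Molecular-weight-based efficiency index (pIC50 per kDa)."""
    return pIC50 / (mw / 1000.0)

# Hypothetical core: 20 heavy atoms, 250 Da; both analogs end up at pIC50 = 7.
core_heavy, core_mw, pIC50 = 20, 250.0, 7.0

analogs = {
    "bromo analog (Br: +1 heavy atom, +79 Da)":     (core_heavy + 1, core_mw + 79.0),
    "phenyl analog (C6H5: +6 heavy atoms, +77 Da)": (core_heavy + 6, core_mw + 77.0),
}

for name, (heavy, mw) in analogs.items():
    print(f"{name}: LE(heavy atoms) = {le_heavy_atoms(pIC50, heavy):.2f}, "
          f"LE(mol. weight) = {le_mol_weight(pIC50, mw):.1f}")
# The heavy-atom metric flatters the bromo analog (21 atoms vs 26), while the
# molecular-weight metric scores the two almost identically - on that scale,
# the bromine pays about the same penalty as an entire phenyl ring.
```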

For that very reason, I've been in the molecular-weight camp. But TeddyZ of Practical Fragments showed up in the comments to the halogen bond post, recommending arguments for the other side. But now that I've checked those out, I'm afraid that I still don't find them very convincing.

That's because the post he's referring to makes the case against simple molecular weight cutoffs alone. I'm fine with that. There's no way that you can slice things up by a few mass units here and there in any meaningful way. But the issue here isn't just molecular weight, it's activity divided by weight, and in all the cases shown, the ligand efficiency for the targets of these compounds would have gone to pieces if the "smaller" analog were picked. From a ligand efficiency standpoint, these examples are straw men.

So I still worry about bromine and iodine. I think that they hurt a compound's properties, and that treating them as "one heavy atom", as if they were nitrogens, ignores that. Now, that halogen bond business can, in some cases, make up for that, but medicinal chemists should realize the tradeoffs they're making, in this case as in all the others. I wouldn't, for example, rule out an iodo compound as a drug candidate, just because it's an iodo compound. But that iodine had better be earning its keep (and probably would be doing so via a halogen bond). It has a lot to earn back, too, considering the possible effects on PK and compound stability. Those would be the first things I would check in detail if my iodo candidate led the list in the other factors, like potency and selectivity. Then I'd get it into tox as soon as possible - I have no feel whatsoever for how iodine-substituted compounds act in whole-animal tox studies, and I'd want to find out in short order. That, in fact, is my reaction to unusual structures of many kinds. Don't rule them out a priori; but get to the posteriori part, where you have data, as quickly as possible.

So, thoughts on heavy atoms? Are there other arguments to make in favor of ligand efficiency calculated that way, or do most people use molecular weight?

Comments (26) + TrackBacks (0) | Category: Drug Assays | Drug Development

January 21, 2013

That Many Compounds in Development? Really?

Email This Entry

Posted by Derek

So PhRMA has a press release out on the state of drug research, but it's a little hard to believe. This part, especially:

The report, developed by the Analysis Group and supported by PhRMA, reveals that more than 5,000 new medicines are in the pipeline globally. Of these medicines in various phases of clinical development, 70 percent are potential first-in-class medicines, which could provide exciting new approaches to treating disease for patients.

This set off discussion on Twitter and elsewhere about how these numbers could have been arrived at. Here's the report itself (PDF), and looking through it provides a few more details. Using figures that show up in the body of the report, that looks like about 2164 compounds in Phase I, 2329 in Phase II, and 833 in Phase III - roughly 5300 in all. Of those, by far the greatest number are in oncology, where they have 1265, 1507, and 288 in Phase I, II, and III, respectively. Second is infectious disease (304/289/135), and third is neurology (256/273/74). It's worth noting that "Psychiatry" is a separate category all its own, by the way.

An accompanying report (PDF) gives a few more specific figures. It claims, among other things, 66 medicines currently in clinical trials for Hepatitis C, 61 projects for ALS, and 158 for ovarian cancer. Now, it's good to have the exact numbers broken down. But don't those seem rather high?

Here's the section on how these counts were obtained:

Except where otherwise noted, data were obtained from EvaluatePharma, a proprietary commercial database with coverage of over 4,500 companies and approximately 50,000 marketed and pipeline products (including those on-market, discontinued, and in development), and containing historical data from 1986 onward. Pipeline information is available for each stage of development, defined as: Research Project, Preclinical, Phase I, II, III, Filed, and Approved. EvaluatePharma collects and curates information from publicly available sources and contains drug-related information such as company sponsor and therapy area. The data were downloaded on December 12, 2011.

While our interest is in drugs in development that have the potential to become new treatment options for U.S. patients, it is difficult to identify ex ante which drugs in development may eventually be submitted for FDA approval – development activity is inherently global, although regulatory review, launch, and marketing are market-specific. Because most drugs are intended for marketing in the U.S., the largest drug market in the world, we have not excluded any drugs in clinical development (i.e., in Phases I, II, or III). However, in any counts of drugs currently in regulatory review, we have excluded drugs that were not filed with the FDA.

Unless otherwise noted, the analysis in this report is restricted to new drug applications for medicines that would be reviewed as new molecular entities (NMEs) and to new indications for already approved NMEs. . .

Products are defined as having a unique generic name, such that a single product is counted exactly once (regardless of the number of indications being pursued).

That gives some openings for the higher-than-expected numbers. For one, those databases of company activities always seem to run on the high side, because many companies keep things listed as development compounds when they've really ceased any work on them (or in extreme cases, never even really started work at all). Second, there may be some oddities from other countries in there, where the standards for press releases are even lower. But we can rule out a third possibility, that single compounds are being counted across multiple indications. I think that the first-in-class figures are surely pumped up by the cases where there are several compounds all in development for the same (as yet unrealized) target, though. Finally, I think that there's some shuffling between "compounds" and "projects" taking place, with the latter having even larger figures.

I'm going to see in another post if I can break down any of these numbers further - who knows, maybe there are a lot more compounds in development than I think. But my first impression is that these numbers are much higher than I would have guessed. It would be very helpful if someone at PhRMA would release a list of the compounds they've counted from one of these indications, just to give us an idea. Any chance of that?

Comments (21) + TrackBacks (0) | Category: Clinical Trials | Drug Development

January 16, 2013

Drug Discovery With the Most Common Words

Email This Entry

Posted by Derek

I got caught up this morning in a challenge based on this XKCD strip, the famous "Up-Goer Five". If that doesn't ring a bell, have a look - it's an attempt to describe a Saturn V rocket while using only the most common 1000 words in English. You find, when you do that, that some of the words you really want to be able to use are not on the list - in the case of the Saturn V, "rocket" is probably the first obstacle of that sort you run into, thus "Up-Goer".

So I noticed on Twitter that people are trying to describe their own work using the same vocabulary list, and I thought I'd give it a try. (Here's a handy text editor that will tell you when you've stepped off the path). I quickly found that "lab", "chemical", "test", and "medicine" are not on the list, so there was enough of a challenge to keep me entertained. Here, then, is drug discovery and development, using the simple word list:

I find things that people take to get better when they are sick. These are hard to make, and take a lot of time and money. When we have a new idea, most of them don't actually work, because we don't know everything we need to about how people get sick in the first place. It's like trying to fix something huge, in the dark, without a book to help.

So we have to try over and over, and we often get surprised by what happens. We build our new stuff by making its parts bigger or smaller, or we join a new piece to one end, or we change one part out for another to see if it works better. Some of our new things are not strong enough. Others break down too fast or stay in the body too long, and some would do too many other things to the people who take them (and maybe even make them more sick than they were). To try to fix all of these at the same time is not easy, of course. When we think we've found one, it has to get past all of those problems, and then we have to be able to make a lot of it exactly the same way every time so that we can go to the next part.

And that part is where most of the money and time come in. First, we try our best idea out on a small animal to make sure that it works like we think it will. Only after that we can ask people to take it. First people who are not sick try it, just to make sure, then a few sick ones, then a lot of sick ones of many types. Then, if it still works, we take all our numbers and ask if it is all right to let everyone who is sick buy our new stuff, and to let a doctor tell them to take it.

If they say yes, we have to do well with it as fast as we can, which doesn't always work out, either. That's because there can still be a problem even after all that work. Even if there isn't, after some time (more than a year or two) someone else can let these people buy it, too, and for less. While all that is going on, we are back trying to find another new one before this one runs out, and we had better.

Not everyone likes us. Our stuff can be a lot of money for people. It may not work as well as someone wants it to, or they may not like how we talk with their doctor (and they may have a point there). Even so, many people have no idea of what we do, how hard it is, or how long it can take. But no one has got any other way to do it, at least not yet!

There, that's fairly accurate, and it even manages to sound like me in some parts. Pity there's no Latin on the list, though.
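
(If you'd rather not paste your text into a web page, a rough version of that checker takes only a few lines. This is just a sketch: it assumes you've saved the thousand-word list to a local file, whose name below is a placeholder, and it ignores the suffix rules that the real editor allows.)

```python
# Flag any words in a piece of text that aren't on an allowed-word list.
import re

def load_allowed(path="common_1000_words.txt"):
    # One word per line; the filename is a placeholder, not a real resource.
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def off_the_path(text, allowed):
    """Return the words in the text that aren't on the allowed list."""
    words = re.findall(r"[A-Za-z']+", text)
    return sorted({w.lower().strip("'") for w in words} - allowed)

# Example, once the word-list file exists:
# allowed = load_allowed()
# print(off_the_path("I find things that people take to get better.", allowed))
```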

Update: here are some more people describing their work, in the comments over at Just Like Cooking. And I should note that someone has already remarked to me that "This is an explanation that even Marcia Angell could understand".

Comments (32) + TrackBacks (0) | Category: Drug Development

December 18, 2012

Lilly's Two-Drugs-a-Year Prediction

Email This Entry

Posted by Derek

Drug research consultant Bernard Munos popped in the comments here the other day and mentioned this story from 2010 in the Indiana Business Journal. That's where we can find Eli Lilly's prediction that they were going to start producing two new drugs per year, starting in 2013. Since that year is nearly upon us, how's that looking?

Not too well. Back in 2010, Lilly's CEO (John Lechleiter) was talking up the company's plans to weather its big patent expirations, including that two-a-year forecast. Since then, the company has had a brutal string of late-stage clinical failures. In addition to the ones in that article, Lilly's had to withdraw Xigris, and results for edivoxetine are mixed. No wonder we're hearing so much about the not-too-impressive Alzheimer's drugs from them.

But, as I said here, what would I have done differently, were I to have had the misfortune of having to run Eli Lilly? I might not have placed such a big bet on Alzheimer's, but I probably would have found equally unprofitable ways to spend the money. (And in the end, the company deserves credit for taking on such an intractable disease - just no one tell Marcia Angell; she doesn't think anyone in the drug industry does any such thing).

About the only thing I'm sure of is that I wouldn't have gone around telling people that we were going to start launching two drugs a year. No one's ever been able to keep to that pace, not even in the best of times, and these sure aren't the best of times. It's tempting to think about telling the investors and the analysts that we're going to work as hard as we can, using our brains as much as we can, and we're going to launch what we're going to launch, when it's darn well ready to be launched. And past that, no predictions, OK? The only problem is, the stock market wouldn't stand for it. Ken Frazier at Merck tried something a bit like this, and it sure didn't seem to last long. Is happy talk what everyone would rather hear?

Comments (14) + TrackBacks (0) | Category: Business and Markets | Drug Development

December 3, 2012

Marcia Angell's Interview: I Just Can't

Email This Entry

Posted by Derek

I have tried to listen to this podcast with Marcia Angell, on drug companies and their research, but I cannot seem to make it all the way through. I start shouting at the screen, at the speakers, at the air itself. In case you're wondering about whether I'm overreacting, at one point she makes the claim that drug companies don't do much innovation, because most of our R&D budget is spent on clinical trials, and "everyone knows how to do a clinical trial". See what I mean?

Angell has many very strongly held opinions on the drug business. But her take on R&D has always seemed profoundly misguided to me. From what I can see, she thinks that identifying a drug target is the key step, and that everything after that is fairly easy, fairly cheap, and very, very profitable. This is not correct. Really, really, not correct. She (and those who share this worldview, such as her co-author) believe that innovation has fallen off in the industry, but that this has happened mostly by choice. Considering the various disastrously expensive failures the industry has gone through while trying to expand into new diseases, new indications, and new targets, I find this line of argument hard to take.

So, I see, does Alex Tabarrok. I very much enjoyed that post; it does some of the objecting for me, and illustrates why I have such a hard time dealing point-by-point with Angell and her ilk. The misconceptions are large, various, and ever-shifting. Her ideas about drug marketing costs, which Tabarrok especially singles out, are a perfect example (and see some of those other links to my old posts, where I make some similar arguments to his).

So no, I don't think that Angell has changed her opinions much. I sure haven't changed mine.

Comments (59) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices | Why Everyone Loves Us

November 30, 2012

A Broadside Against The Way We Do Things Now

Email This Entry

Posted by Derek

There's a paper out in Drug Discovery Today with the title "Is Poor Research the Cause of Declining Productivity in the Drug Industry?" After reviewing the literature on phenotypic versus target-based drug discovery, the author (Frank Sams-Dodd) asks (and has asked before):

The consensus of these studies is that drug discovery based on the target-based approach is less likely to result in an approved drug compared to projects based on the physiological-based approach. However, from a theoretical and scientific perspective, the target-based approach appears sound, so why is it not more successful?

He makes the points that the target-based approach has the advantages of (1) seeming more rational and scientific to its practitioners, especially in light of the advances in molecular biology over the last 25 years, and (2) seeming more rational and scientific to the investors:

". . .it presents drug discovery as a rational, systematic process, where the researcher is in charge and where it is possible to screen thousands of compounds every week. It gives the image of industrialisation of applied medical research. By contrast, the physiology-based approach is based on the screening of compounds in often rather complex systems with a low throughput and without a specific theory on how the drugs should act. In a commercial enterprise with investors and share-holders demanding a fast return on investment it is natural that the drug discovery efforts will drift towards the target-based approach, because it is so much easier to explain the process to others and because it is possible to make nice diagrams of the large numbers of compounds being screened.

This is the "Brute Force bias". And he goes on to another key observation: that this industrialization (or apparent industrialization) meant that there were a number of processes that could be (in theory) optimized. Anyone who's been close to a business degree knows how dear process optimization is to the heart of many management theorists, consultants, and so on. And there's something to that, if you're talking about a defined process like, say, assembling pickup trucks or packaging cat litter. This is where your six-sigma folks come in, your Pareto analysis, your Continuous Improvement people, and all the others. All these things are predicated on the idea that there is a Process out there.

See if this might sound familiar to anyone:

". . .the drug dis- covery paradigm used by the pharmaceutical industry changed from a disease-focus to a process-focus, that is, the implementation and organisation of the drug discovery process. This meant that process-arguments became very important, often to the point where they had priority over scientific considerations, and in many companies it became a requirement that projects could conform to this process to be accepted. Therefore, what started as a very sensible approach to drug discovery ended up becoming the requirement that all drug dis- covery programmes had to conform to this approach – independently of whether or not sufficient information was available to select a good target. This led to dogmatic approaches to drug discovery and a culture developed, where new projects must be presented in a certain manner, that is, the target, mode-of-action, tar- get-validation and screening cascade, and where the clinical manifestation of the disease and the biological basis of the disease at systems-level, that is, the entire organism, were deliberately left out of the process, because of its complexity and variability.

But are we asking too much when we declare that our drugs need to work through single defined targets? Beyond that, are we even asking too much when we declare that we need to understand the details of how they work at all? Many of you will have had such thoughts (and they've been expressed around here as well), but they can tend to sound heretical, especially that second one. But that gets to the real issue, the uncomfortable, foot-shuffling, rather-think-about-something-else question: are we trying to understand things, or are we trying to find drugs?

"False dichotomy!", I can hear people shouting. "We're trying to do both! Understanding how things work is the best way to find drugs!" In the abstract, I agree. But given the amount there is to understand, I think we need to be open to pushing ahead with things that look valuable, even if we're not sure why they do what they do. There were, after all, plenty of drugs discovered in just that fashion. A relentless target-based environment, though, keeps you from finding these things at all.

What it does do, though, is provide vast opportunities for keeping everyone busy. And not just "busy" in the sense of working on trivia, either: working out biological mechanisms is very, very hard, and in no area (despite decades of beavering away) can we say we've reached the end and achieved anything like a complete picture. There are plenty of areas that can and will soak up all the time and effort you can throw at them, and yield precious little in the way of drugs at the end of it. But everyone was working hard, doing good science, and doing what looked like the right thing.

This new paper spends quite a bit of time on the mode-of-action question. It makes the point that understanding the MoA is something that we've imposed on drug discovery, not an intrinsic part of it. I've gotten some funny looks over the years when I've told people that there is no FDA requirement for details of a drug's mechanism. I'm sure it helps, but in the end, it's efficacy and safety that carry the day, and both of those are determined empirically: did the people in the clinical trials get better, or worse?

And as for those times when we do have mode-of-action information, well, here are some fighting words for you:

". . .the ‘evidence’ usually involves schematic drawings and flow-diagrams of receptor complexes involving the target. How- ever, it is almost never understood how changes at the receptor or cellular level affect the phy- siology of the organism or interfere with the actual disease process. Also, interactions between components at the receptor level are known to be exceedingly complex, but a simple set of diagrams and arrows are often accepted as validation for the target and its role in disease treatment even though the true interactions are never understood. What this in real life boils down to is that we for almost all drug discovery programmes only have minimal insight into the mode-of-action of a drug and the biological basis of a disease, meaning that our choices are essentially pure guess-work.

I might add at this point that the emphasis on defined targets and mode of action has been so much a part of drug discovery in recent times that it's convinced many outside observers that target ID is really all there is to it. Finding and defining the molecular target is seen as the key step in the whole process; everything past that is just some minor engineering (and marketing, naturally). The fact that this point of view is a load of fertilizer has not slowed it down much.

I think that if one were to extract a key section from this whole paper, though, this one would be a good candidate:

". . .it is not the target-based approach itself that is flawed, but that the focus has shifted from disease to process. This has given the target-based approach a dogmatic status such that the steps of the validation process are often conducted in a highly ritualised manner without proper scientific analysis and questioning whether the target-based approach is optimal for the project in question.

That's one of those "Don't take this in the wrong way, but. . ." statements, which are, naturally, always going to be taken in just that wrong way. But how many people can deny that there's something to it? Almost no one denies that there's something not quite right, with plenty of room for improvement.

What Sams-Dodd has in mind for improvement is a shift towards looking at diseases, rather than targets or mechanisms. For many people, that's going to be one of those "Speak English, man!" moments, because for them, finding targets is looking at diseases. But that's not necessarily so. We would have to turn some things on their heads a bit, though:

In recent years there have been considerable advances in the use of automated processes for cell-culture work, automated imaging systems for in vivo models and complex cellular systems, among others, and these developments are making it increasingly possible to combine the process-strengths of the target-based approach with the disease-focus of the physiology-based approach, but again these technologies must be adapted to the research question, not the other way around.

One big question is whether the investors funding our work will put up with such a change, or with such an environment even if we did establish it. And that gets back to the discussion of Andrew Lo's securitization idea, the talk around here about private versus public financing, and many other topics. Those I'll reserve for another post. . .

Comments (30) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History | Who Discovers and Why

November 29, 2012

When Drug Launches Go Bad

Email This Entry

Posted by Derek

For those connoisseurs of things that have gone wrong, here's a list of the worst drug launches of recent years. And there are some rough ones in there, such as Benlysta, Provenge, and (of course) Makena. And from an aesthetic standpoint, it's hard not to think that if you name your drug Krystexxa, you deserve what you get. Read up and try to avoid being part of such a list yourself. . .

Comments (8) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices

Roche Repurposes

Email This Entry

Posted by Derek

Another drug repurposing initiative is underway, this one between Roche and the Broad Institute. The company is providing 300 failed clinical candidates to be run through new assays, in the hopes of finding a use for them.

I hope something falls out of this, because any such compounds will naturally have a substantial edge in further development. They should all have been through toxicity testing, they've had some formulations work done on them, a decent scale-up route has been identified, and so on. And many of these candidates fell out in Phase II, so they've even been in human pharmacokinetics.

On the other hand (there's always another hand), you could also say that this is just another set of 300 plausible-looking compounds, and what does a 300-compound screening set get you? The counterargument to this is that these structures have not only been shown to have good absorption and distribution properties (no small thing!), they've also been shown to bind well to at least one target, which means that they may well be capable of binding well to other similar motifs in other active sites. But the counterargument to that is that now you've removed some of those advantages in the paragraph above, because any hits will now come with selectivity worries, since they come with guaranteed activity against something else.

This means that the best case for any repurposed compound is for its original target to be good for something unanticipated. So that Roche collection of compounds might also be thought of as a collection of failed targets, although I doubt if there are a full 300 of those in there. Short of that, every repurposing attempt is going to come with its own issues. It's not that I think these shouldn't be tried - why not, as long as it doesn't cost too much - but things could quickly get more complicated than they might have seemed. And that's a feeling that any drug discovery researcher will recognize like an old, er, friend.

For more on the trickiness of drug repurposing, see John LaMattina here and here. And the points he raises get to the "as long as it doesn't cost too much" line in the last paragraph. There's opportunity cost involved here, too, of course. When the Broad Institute (or Stanford, or the NIH) screens old pharma candidates for new uses, they're doing what a drug company might do itself, and therefore possibly taking away from work that only they could be doing instead. Now, I think that the Broad (for example) already has a large panel of interesting screens set up, so running the Roche compounds through them couldn't hurt, and might not take that much more time or effort. So why not? But trying to push repurposing too far could end up giving us the worst of both worlds. . .

Comments (14) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

November 28, 2012

Every Tiny Detail

Email This Entry

Posted by Derek

Via Chemjobber, we have here an excellent example of how much detail you have to get into if you're seriously making a drug for the market. When you have to account for every impurity, and come up with procedures that generate the same ones within the same tight limits every time, this is the sort of thing you have to pay attention to: how you dry your compound. And how long. And why. Because if you don't, huge amounts of money (time, lost revenue, regulatory trouble, lawsuits) are waiting. . .

Comments (5) + TrackBacks (0) | Category: Analytical Chemistry | Chemical News | Drug Development

November 19, 2012

The Novartis Pipeline

Email This Entry

Posted by Derek

This would seem to be inviting the wrath of the Drug Development Gods, and man, are they a testy bunch: "Novartis could produce 14 or more new big-selling 'blockbuster' drugs within five years . . ."

I'll certainly wish them luck on that, and it certainly seems true that Novartis research has been productive. But think back - how many press releases have you seen over the years where Drug Company A predicts X number of big product launches in the next Y years? And how many of those schedules have ever quite worked out? The most egregious examples of this take the form of claiming that your new strategy/platform/native genius/good looks have now allowed you to deliver these things on some sort of regular schedule. When you hear someone talking about how even though they haven't been able to do anything like it in the past, they're going to start unleashing a great new drug product launch every year (or every 18 months, what have you) from here on out, run.

Now, Novartis isn't talking like this, and they have a much better chance of delivering on this than most, but still. Might it not be better just to creep up on people with all those great new products in hand, rather than risk disappointment?

Comments (10) + TrackBacks (0) | Category: Business and Markets | Drug Development

November 13, 2012

Nassim Taleb on Scientific Discovery

Email This Entry

Posted by Derek

There's an interesting article posted on Nassim Taleb's web site, titled "Understanding is a Poor Substitute for Convexity (Antifragility)". It was recommended to me by a friend, and I've been reading it over for its thoughts on how we do drug research. (This would appear to be an excerpt from, or summary of, some of the arguments in the new book Antifragile: Things That Gain from Disorder, which is coming out later this month).

Taleb, of course, is the author of The Black Swan and Fooled by Randomness, which (along with his opinions about the recent financial crises) have made him quite famous.

So this latest article is certainly worth reading, although much of it reads like the title, that is, written in fluent and magisterial Talebian. This blog post is being written partly for my own benefit, so that I make sure to go to the trouble of a translation into my own language and style. I've got my idiosyncrasies, for sure, but I can at least understand my own stuff. (And, to be honest, a number of my blog posts are written in that spirit, of explaining things to myself in the process of explaining them to others).

Taleb starts off by comparing two different narratives of scientific discovery: luck versus planning. Any number of works contrast those two. I'd say that the classic examples of each (although Taleb doesn't reference them in this way) are the discovery of penicillin and the Manhattan Project. Not that I agree with either of those categorizations - Alexander Fleming, as it turns out, was an excellent microbiologist, very skilled and observant, and he always checked old culture dishes before throwing them out just to see what might turn up. And, it has to be added, he knew what something interesting might look like when he saw it, a clear example of Pasteur's quote about fortune and the prepared mind. On the other hand, the Manhattan Project was a tremendous feat of applied engineering, rather than scientific discovery per se. The moon landings, often used as a similar example, are exactly the same sort of thing. The underlying principles of nuclear fission had been worked out; the question was how to purify uranium isotopes to the degree needed, and then how to bring a mass of the stuff together quickly and cleanly enough. These processes needed a tremendous amount of work (it wasn't obvious how to do either one, and multiple approaches were tried under pressure of time), but the laws of (say) gaseous diffusion were already known.

But when you look over the history of science, you see many more examples of fortunate discoveries than you see of planned ones. Here's Taleb:

The luck versus knowledge story is as follows. Ironically, we have vastly more evidence for results linked to luck than to those coming from the teleological, outside physics —even after discounting for the sensationalism. In some opaque and nonlinear fields, like medicine or engineering, the teleological exceptions are in the minority, such as a small number of designer drugs. This makes us live in the contradiction that we largely got here to where we are thanks to undirected chance, but we build research programs going forward based on direction and narratives. And, what is worse, we are fully conscious of the inconsistency.

"Opaque and nonlinear" just about sums up a lot of drug discovery and development, let me tell you. But Taleb goes on to say that "trial and error" is a misleading phrase, because it tends to make the two sound equivalent. What's needed is an asymmetry: the errors need to be as painless as possible, compared to the payoffs of the successes. The mathematical equivalent of this property is called convexity; a nonlinear convex function is one with larger gains than losses. (If they're equal, the function is linear). In research, this is what allows us to "harvest randomness", as the article puts it.

An example of such a process is biological evolution: most mutations are harmless and silent. Even the harmful ones will generally just kill off the one organism with the misfortune to bear them. But a successful mutation, one that enhances survival and reproduction, can spread widely. The payoff is much larger than the downside, and the mutations themselves come along for free, since some looseness is built into the replication process. It's a perfect situation for blind tinkering to pay off: the winners take over, and the losers disappear.

Taleb goes on to say that "optionality" is another key part of the process. We're under no obligation to follow up on any particular experiment; we can pick the one that worked best and toss the rest. This has its own complications, since we have our own biases and errors of judgment to contend with, as opposed to the straightforward questions of evolution ("Did you survive? Did you breed?"). But overall, it's an important advantage.

The article then introduces the "convexity bias", which is defined as the difference between a system with equal benefit and harm for trial and error (linear) and one where the upsides are higher (nonlinear). The greater the split between those two, the greater the convexity bias, and the more volatile the environment, the greater the bias is as well. This is where Taleb introduces another term, "antifragile", for phenomena that have this convexity bias, because they're equipped to actually gain from disorder and volatility. (His background in financial options is apparent here). What I think of at this point is Maxwell's demon, extracting useful work from randomness by making decisions about which molecules to let through his gate. We scientists are, in this way of thinking, members of the same trade union as Maxwell's busy creature, since we're watching the chaos of experimental trials and natural phenomena and letting pass the results we find useful. (I think Taleb would enjoy that analogy). The demon is, in fact, optionality manifested and running around on two tiny legs.
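
For anyone who wants to see those two definitions move, here is a rough simulation (again my own sketch, with made-up payoff numbers, nothing from Taleb): a convex payoff whose downside is capped at one unit, evaluated at increasing volatility, with optionality modeled as running ten trials and keeping only the best one.

import numpy as np

rng = np.random.default_rng(1)

def payoff(x):
    # convex payoff: failures cost at most one unit, successes pay off faster than linearly
    return np.where(x > 0.0, x + x**2, np.maximum(x, -1.0))

for sigma in (0.5, 1.0, 2.0):                         # increasing volatility
    x = rng.normal(0.0, sigma, 500_000)
    bias = payoff(x).mean() - payoff(x.mean())         # convexity bias: E[f(X)] - f(E[X])
    best_of_ten = payoff(x.reshape(-1, 10)).max(axis=1).mean()   # optionality: keep only the winner
    print(f"sigma {sigma}: convexity bias {bias:.2f}, best-of-ten payoff {best_of_ten:.2f}")

None of the specific numbers mean anything; the direction is the point. The noisier the environment, and the more trials you can afford to run and then discard, the better the convex strategy looks.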

Meanwhile, a more teleological (that is, aimed and coherent) approach is damaged under these same conditions. Uncertainty and randomness mess up the timelines and complicate the decision trees, and it just gets worse and worse as things go on. It is, by these terms, fragile.

Taleb ends up with seven rules that he suggests can guide decision making under these conditions. I'll add my own comments to these in the context of drug research.

(1) Under some conditions, you'd do better to improve the payoff ratio than to try to increase your knowledge about what you're looking for. One way to do that is to lower the cost-per-experiment, so that a relatively fixed payoff is then larger by comparison. The drug industry has realized this, naturally: our payoffs are (in most cases) somewhat out of our control, although the marketing department tries as hard as possible. But our costs per experiment range from "not cheap" to "potentially catastrophic" as you go from early research to Phase III. Everyone's been trying to bring down the costs of later-stage R&D for just these reasons.

(2) A corollary is that you're better off with as many trials as possible. Research payoffs, as Taleb points out, are very nonlinear indeed, with occasional huge winners accounting for a disproportionate share of the pool. If we can't predict these - and we can't - we need to make our nets as wide as possible. This one, too, is appreciated in the drug business, but it's a constant struggle on some scales. In the wide view, this is why the startup culture here in the US is so important, because it means that a wider variety of ideas are being tried out. And it's also, in my view, why so much M&A activity has been harmful to the intellectual ecosystem of our business - different approaches have been swallowed up, and they disappear as companies decide, internally, on the winners.

And inside an individual company, portfolio management of this kind is appreciated, but there's a limit to how many projects you can keep going. Spread yourself too thin, and nothing will really have a chance of working. Staying close to that line - enough projects to pick up something, but not so many as to starve them all - is a full-time job.

(3) You need to keep your "optionality" as strong as possible over as long a time as possible - that is, you need to be able to hit a reset button and try something else. Taleb says that plans ". . .need to stay flexible with frequent ways out, and counter to intuition, be very short term, in order to properly capture the long term. Mathematically, five sequential one-year options are vastly more valuable than a single five-year option." I might add, though, that they're usually priced accordingly (and as Taleb himself well knows, looking for those moments when they're not priced quite correctly is another full-time job).

(4) This one is called "Nonnarrative Research", which means the practice of investing with people who have a history of being able to do this sort of thing, regardless of their specific plans. And "this sort of thing" generally means a lot of that third recommendation above, being able to switch plans quickly and opportunistically. The history of many startup companies will show that their eventual success often didn't bear as much relation to their initial business plan as you might think, which means that "sticking to a plan", as a standalone virtue, is overrated.

At any rate, the recommendation here is not to buy into the story just because it's a good story. I might draw the connection here with target-based drug discovery, which is all about good stories.

(5) Theory comes out of practice, rather than practice coming out of theory. Ex post facto histories, Taleb says, often work the story around to something that looks more sensible, but his claim is that in many fields, "tinkering" has led to more breakthroughs than attempts to lay down new theory. His reference is to this book, which I haven't read, but is now on my list.

(6) There's no built-in payoff for complexity (or for making things complex). "In academia," though, he says, "there is". Don't, in other words, be afraid of what look like simple technologies or innovations. They may, in fact, be valuable, but have been ignored because of this bias towards the trickier-looking stuff. What this reminds me of is what Philip Larkin said he learned by reading Thomas Hardy: never be afraid of the obvious.

(7) Don't be afraid of negative results, or paying for them. The whole idea of optionality is finding out what doesn't work, and ideally finding that out in great big swaths, so we can narrow down to where the things that actually work might be hiding. Finding new ways to generate negative results quickly and more cheaply, which can mean new ways to recognize them earlier, is very valuable indeed.

Taleb finishes off by saying that people have criticized such proposals as the equivalent of buying lottery tickets. But lottery tickets, he notes, are terribly overpriced, because people are willing to overpay for a shot at a big payoff on long odds. But lotteries have a fixed upper bound, whereas R&D's upper bound is completely unknown. And Taleb gets back to his financial-crisis background by pointing out that the history of banking and finance shows the folly of betting against long shots ("What are the odds of this strategy suddenly going wrong?"), and that in this sense, research is a form of reverse banking.

Well, those of you out there who've heard the talk I've been giving in various venues (and in slightly different versions) the last few months may recognize that point, because I have a slide that basically says that drug research is the inverse of Wall Street. In finance, you try to lay off risk, hedge against it, amortize it, and go for the steady payoff strategies that (nonetheless) once in a while blow up spectacularly and terribly. Whereas in drug research, risk is the entire point of our business (a fact that makes some of the business-trained people very uncomfortable). We fail most of the time, but once in a while have a spectacular result in a good direction. Wall Street goes short risk; we have to go long.

I've been meaning to get my talk up on YouTube or the like; and this should force me to finally get that done. Perhaps this weekend, or over the Thanksgiving break, I can put it together. I think it fits in well with what Taleb has to say.

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

October 30, 2012

JQ1: Giving Up a Fortune?

Email This Entry

Posted by Derek

The Atlantic is out with a list of "Brave Thinkers", and one of them is Jay Bradner at Harvard Medical School. He's on there for JQ1, a small-molecule bromodomain ligand that was reported in 2010. (I note, in passing, that once again nomenclature has come to the opposite of our rescue, since bromodomains have absolutely nothing to do with bromine, in contrast to 98% of all the other words that begin with "bromo-".)

These sorts of compounds have been very much in the news recently, as part of the whole multiyear surge in epigenetic research. Drug companies, naturally, are looking to the epigenetic targets that might be amenable to small-molecule intervention, and bromodomains seem to qualify (well, some of them do, anyway).

At any rate, JQ1 is a perfectly reasonable probe compound for bromodomain studies, but it got a lot of press a couple of months ago as a potential male contraceptive. I found all that wildly premature - a compound like this one surely sets off all kinds of effects in vivo, and disruption of spermatogenesis is only one of them. Note (PDF) that it hits a variety of bromodomain subtypes, and we only have the foggiest notion of what most of these are doing in real living systems.

The Atlantic, for its part, makes much of Bradner's publishing JQ1 instead of patenting it:

The monopoly on developing the molecule that Bradner walked away from would likely have been worth a fortune (last year, the median value for U.S.-based biotech companies was $370 million). Now four companies are building on his discovery—which delights Bradner, who this year released four new molecules. “For years, drug discovery has been a dark art performed behind closed doors with the shades pulled,” he says. “I would be greatly satisfied if the example of this research contributed to a change in the culture of drug discovery.”

But as Chemjobber rightly says, the idea that Bradner walked away from a fortune is ridiculous. JQ1 is not a drug, nor is it ever likely to become a drug. It has inspired research programs to find drugs, but they likely won't look much (or anything) like JQ1, and they'll do different things (for one, they'll almost surely be more selective). In fact, chasing after that sort of selectivity is one of the things that Bradner's own research group appears to be doing - and quite rightly - while his employer (Dana-Farber) is filing patent applications on JQ1 derivatives. Quite rightly.

Patents work differently in small-molecule drug research than most people seem to think. (You can argue, in fact, that it's one of the areas where the system works most like it was designed to, as opposed to often-abominable patent efforts in software, interface design, business methods, and the like). People who've never had to work with them have ideas about patents being dark, hidden boxes of secrets, but one of the key things about a patent is disclosure. You have to tell people what your invention is, what it's good for, and how to replicate it, or you don't have a valid patent.

Admittedly, there are patent applications that do not make all of these steps easy - a case in point would be the ones from Exelixis - I wrote here about my onetime attempts to figure out the structures of some of their lead compounds from their patent filings. (Not long ago I had a chance to speak with someone who was there at the time, and he was happy to hear that I'd come up short, saying that this had been exactly the plan). But at the same time, all their molecules were in there, along with all the details of how to make them. And the claims of the patents detailed exactly why they were interested in such compounds, and what they planned to do with them as drugs. You could learn a lot about what Exelixis was up to; it was just that finding out the exact structure of the clinical candidate was tricky. A patent application on JQ1 would have actually ended up disclosing most (or all) of what the publication did.

I'm not criticizing Prof. Bradner and his research group here. He's been doing excellent work in this area, and his papers are a pleasure to read. But the idea that Harvard Medical School and Dana-Farber would walk away from a pharma fortune is laughable.

Comments (33) + TrackBacks (0) | Category: Cancer | Chemical Biology | Drug Development | Patents and IP

October 17, 2012

Zafgen's Epoxide Adventure

Email This Entry

Posted by Derek

Zafgen is a startup in the Boston area that's working on a novel weight-loss drug called beloranib. Their initial idea was that they were inhibiting angiogenesis in adipose tissue, through inhibition of methionine aminopeptidase-2. But closer study showed that while the compound was indeed causing significant weight loss in animal models, it wasn't through that mechanism. Blood vessel formation wasn't affected, but the current thinking is that Met-AP2 inhibition is affecting fatty acid synthesis and causing more usage of lipid stores.

But when they say "novel", they do mean it. Behold one of the more unlikely-looking drugs to make it through Phase I:
Beloranib.png
Natural-product experts in the audience might experience a flash of recognition. That's a derivative of fumagillin, a compound from Aspergillus that's been kicking around for many years now. And its structure brings up a larger point about reactive groups in drug molecules, the kind that form covalent bonds with their targets.

I wrote about covalent drugs here a few years ago, and the entire concept has been making a comeback. (If anyone was unsure about that, Celgene's purchase of Avila was the convincer). Those links address the usual pros and cons of the idea: on the plus side, slow off rates are often beneficial in drug mechanisms, and you don't get much slower than covalency. On the minus side, you have to worry about selectivity even more, since you really don't want to go labeling across the living proteome. You have the mechanisms of the off-target proteins to worry about once you shut them down, and you also have the ever-present fear of setting off an immune response if the tagged protein ends up looking sufficiently alien.

I'm not aware of any published mechanistic studies of beloranib, but it is surely another one of this class, with those epoxides. (Looks like it's thought to go after a histidine residue, by analogy to fumagillin's activity against the same enzyme). But here's another thing to take in: epoxides are not as bad as most people think they are. We organic chemists see them and think that they're just vibrating with reactivity, but as electrophiles, they're not as hot as they look.

That's been demonstrated by several papers from the Cravatt labs at Scripps. (He still is at Scripps, right? You need a scorecard these days). In this work, they showed that some simple epoxides, when exposed to entire proteomes, really didn't label many targets at all compared to the other electrophiles on their list. And here, in an earlier paper, they looked at fumagillin-inspired spiroepoxide probes specifically, and found an inhibitor of phosphoglycerate mutase 1. But a follow-up SAR study of that structure showed that it was very picky indeed - you had to have everything lined up right for the epoxide to react, and very close analogs had no effect. Taken together, the strong implication is that epoxides can be quite selective, and thus can be drugs. You still want to be careful, because the toxicology literature is still rather vocal on the subject, but if you're in the less reactive/more structurally complex/more selective part of that compound space, you might be OK. We'll see if Zafgen is.

Comments (21) + TrackBacks (0) | Category: Chemical Biology | Diabetes and Obesity | Drug Development

October 11, 2012

IGFR Therapies Wipe Out. And They're Not Alone.

Email This Entry

Posted by Derek

Here's a look at something that doesn't make many headlines: the apparent failure of an entire class of potential drugs. The insulin-like growth factor 1 receptor (IGF-1R) has been targeted for years now, from a number of different angles. There have been several antibodies tried against it, and companies have also tried small molecule approaches such as inhibiting the associated receptor kinase. (I was on such a project myself a few years back). So far, nothing has worked out.

And as that review shows, this was a very reasonable-sounding idea. Other growth factor receptors have been successful cancer targets (notably EGFR), and there was evidence of IGFR over-expression in several widespread cancer types (and evidence from mouse models that inhibiting it would have the desired effect). The rationale here was as solid as anything we have, but reality has had other ideas:

It is hardly surprising that even some of the field's pioneers are now pessimistic. “In the case of IGF-1R, one can protest that proper studies have not yet been carried out,” writes Renato Baserga, from the department of Cancer Biology, Thomas Jefferson University in Philadelphia. (J. Cell. Physiol., doi:10.1002/jcp.24217). A pioneer in IGF-1 research, Baserga goes on to list some avenues that may still be promising, such as targeting the receptor to prevent metastases in colorectal cancer patients. But in the end, he surmises: “These excuses are poor excuses, [they are] an attempt to reinvigorate a procedure that has failed.” Saltz agrees. “This may be the end of the story,” he says. “At one point, there were more than ten companies developing these drugs; now this may be the last one that gets put on the shelf.”

But, except for articles like these in journals like Nature Biotechnology, or mentions on web sites like this one, no one really hears about this sort of thing. We've talked about this phenomenon before; there's a substantial list of drug targets that looked very promising, got a lot of attention for years, but never delivered any sort of drug at all. Negative results don't make for much of a headline in the popular press, especially when the story develops over a multi-year period.

I think it would be worthwhile for people to hear about this, though. I once talked with someone who was quite anxious about an upcoming plane trip; they were worried on safety grounds. It occurred to me that if there were a small speaker on this person's desk announcing all the flights that had landed safely around the country (or around the world), that a few days of that might actually have an effect. Hundreds, thousands of announcements, over and over: "Flight XXX has landed safely in Omaha. Flight YYY has landed safely in Seoul. Flight ZZZ has landed safely in Amsterdam. . ." Such a speaker system wouldn't shut up for long during any given day, that's for sure, and it would emphasize the sheer volume of successful air travel that takes place each day, over and over.

On the other hand, almost all drug research programs crash, or never even make it off the ground in the first place. In this field, actually getting a plane together, getting it into the air, and guiding it to a landing at the FDA only happens once in a rather long while, which is why there are plenty of people out there in early research who've never worked on anything that's made it to market. A list of all the programs that failed would be instructive, and might get across how difficult finding a drug really is, but no one's going to be able to put one of those together. Companies don't even announce the vast majority of their preclinical failures; they're below everyone else's limit of detection. I can tell you for sure that most of the non-delivering programs I've worked on have never seen daylight of any sort. They just quietly disappeared.

Comments (11) + TrackBacks (0) | Category: Cancer | Drug Development | Drug Industry History

September 24, 2012

The One-Stop CRO

Email This Entry

Posted by Derek

C&E News has a good article (http://cen.acs.org/articles/90/i39/One-Stop-Shops-Emerge-Drug.html) out on the so-called "one-stop shop" contract research organizations in pharma - these are the Covances and WuXis of the world, who can take on all sorts of preclinical (and clinical) jobs for you under one umbrella.

The old debate over one-stop shopping has, however, become more nuanced in the current pharmaceutical industry environment. Service firms and their customers agree that much of the decision making comes down to where to outsource workhorse chemistry and where to outsource frontline science. Sources agree that a market still exists for boutique CROs that focus on one node along the discovery/development continuum. And some drug firms say they are working with more than one full-service vendor, negating the supposed advantage of one-stop shopping.


There's more of this sort of thing around than ever, of course, but the merits of the whole idea are still being debated. There's no question that these companies can extend the reach of an organization that doesn't have all these specialities itself, but that doesn't mean that you can't mess things up, either.

Not every drug firm is scaling down internal research. Sonia Pawlak, manager of strategic outsourcing in chemical development at Gilead Sciences, says drug companies with fully developed R&D operations will likely not see much advantage in working with a one-stop-shop contractor. . .Geographical proximity to a supplier is important to Gilead, Pawlak adds, questioning whether linking research and manufacturing assets across different continents saves the customer time.

I'm used to looking at these companies from the buying end. When you consider the whole CRO world from the other direction, though, you see a vision of de-risked pharma. These people are going to get paid, whether the preclinical program works out or not, whether the clinical trials work or not, whether the eventual drug is approved or not. It's a contract business.

But they're also never going to get paid more than what is in that contract - they will share in no windfalls, get pieces of no blockbusters. So eventually, you end up with two halves of the whole drug R&D business: the drug companies that do little or no outsourcing (along with the small R&D discovery companies that outsource everything they can) are the part that takes the big risks and goes for the big victories, while the CROs are the part that takes on (comparatively) no risk in exchange for a smaller guaranteed payout.

Comments (11) + TrackBacks (0) | Category: Drug Development

September 21, 2012

Transcelerate: What Is It, Exactly?

Email This Entry

Posted by Derek

A group of big pharma companies has announced that they're setting up a joint venture, TransCelerate, to try to address common precompetitive drug development problems. But that covers a broad area, and this collaboration is more narrowly focused:

Members of TransCelerate have identified clinical study execution as the initiative's initial area of focus. Five projects have been selected by the group for funding and development, including: development of a shared user interface for investigator site portals; mutual recognition of study site qualification and training; development of risk-based site monitoring approach and standards; development of clinical data standards; and establishment of a comparator drug supply model.

Now, that paragraph is hard to get through, I have to say. I understand what they're getting at, and these are all worthy objectives, but I think it could be boiled down to saying "We're going to try not to duplicate each other's work so much when we're setting up clinical trials and finding places to run them. They cost so much already that it's silly for us all to spend money doing the same things that have to be done every time." And other than this, details are few. The initiative will be headquartered in Philadelphia, but that seems to be about it so far.

But it won't get at the fundamental problems in drug research. Our clinical failure rate of around 90% has very little to do with the factors that TransCelerate is addressing - what they're trying to do is make that failure rate less of a financial burden. That's certainly worth taking on, in lieu of figuring out why our drugs crash and burn so often. That one is a much tougher problem, easily proven by the fact that there are billions of dollars waiting to be picked up for even partial solutions to it.

Comments (18) + TrackBacks (0) | Category: Clinical Trials | Drug Development

September 18, 2012

Going After the Big Cyclic Ones

Email This Entry

Posted by Derek

I wrote last year about macrocyclic compounds and their potential as drugs. Now BioCentury has a review of the companies working in this area, and there are more of them than I thought. Ensemble and Aileron are two that come to mind (if you count "stapled peptides" as macrocycles, and I think they should). But there are also Bicycle, Encycle, Lanthio, Oncodesign, Pepscan, PeptiDream, Polyphor, Protagonist, and Tranzyme. These companies have a lot of different approaches. Many of them (but not all) are using cyclic peptides, but there are different ways of linking these, different sorts of amino acids you can use in them, and so on. And the non-peptidic approaches have an even wider variety. So I've no doubt that there's room in this area for all these companies - but I also have no doubt that not all these approaches are going to work equally well. And we're just barely getting to the outer fringes of sorting that out:

While much of the excitement over macrocycles is due to their potential to disrupt intracellular protein-protein interactions, every currently disclosed lead program in the space targets an extracellular protein. This reality reflects the challenge of developing a potent and cell-penetrant macrocyclic compound.

Tranzyme and Polyphor are the only companies with macrocyclic compounds in the clinic. Polyphor’s lead compound is POL6326, a conformationally constrained peptide that antagonizes CXC chemokine receptor 4 (CXCR4; NPY3R). It is in Phase II testing to treat multiple myeloma (MM) using autologous transplantation of hematopoietic stem cells.

Tranzyme’s lead compound is TZP-102, an orally administered ghrelin receptor agonist in Phase IIb testing to treat diabetic gastroparesis.

Two weeks ago, Aileron announced it hopes to start clinical development of its lead internally developed program in 2013. The compound, ALRN-5281, targets the growth hormone-releasing hormone (GHRH) receptor.

Early days, then. It's understandable that the first attempts in this area will come via extracellular-acting, iv-administered agents - those are the lowest bars to clear for a new technology. But if this area is going to live up to its potential, it'll have to go much further along than that. We're going to have to learn a lot more about cellular permeability, which is a very large side effect (a "positive externality", as the economists say) of pushing the frontiers back like this: you figure these things out because you have to.

Comments (9) + TrackBacks (0) | Category: Drug Development | Pharmacokinetics

September 10, 2012

Geron, And The Risk of Cancer Therapies

Email This Entry

Posted by Derek

Geron's telomerase inhibitor compound, imetelstat, showed a lot of interesting results in vitro, and has been in Phase II trials all this year. Until now. The company announced this morning that the interim results of their breast-cancer trial are so unpromising that it's been halted, and that lung cancer data aren't looking good, either. The company's stock has been cratering in premarket trading, and this stock analyst will now have some thinking to do, as will the people who followed his advice last week.

I'm sorry to see the first telomerase inhibitor perform so poorly; we need all the mechanisms we can get in oncology. And this is terrible news for Geron, since they'd put all their money down on this therapeutic area. But this is drug discovery; this is research: a lot of good, sensible, promising ideas just don't work.

That phrase comes to mind after reading this article from the Telegraph about some Swedish research into cancer therapy. It's written in a breathless style - here, see for yourself:

Yet as things stand, Ad5[CgA-E1A-miR122]PTD – to give it the full gush of its most up-to-date scientific name – is never going to be tested to see if it might also save humans. Since 2010 it has been kept in a bedsit-sized mini freezer in a busy lobby outside Prof Essand's office, gathering frost. ('Would you like to see?' He raises his laptop computer and turns, so its camera picks out a table-top Electrolux next to the lab's main corridor.)
Two hundred metres away is the Uppsala University Hospital, a European Centre of Excellence in Neuroendocrine Tumours. Patients fly in from all over the world to be seen here, especially from America, where treatment for certain types of cancer lags five years behind Europe. Yet even when these sufferers have nothing else to hope for, have only months left to live, wave platinum credit cards and are prepared to sign papers agreeing to try anything, to hell with the side-effects, the oncologists are not permitted – would find themselves behind bars if they tried – to race down the corridors and snatch the solution out of Prof Essand's freezer.

(By the way, does anyone have anything to substantiate that "five years behind Europe" claim? I don't.) To be sure, Prof. Essand tries to make plain to the reporter (Alexander Masters) that this viral therapy has only been tried in animals, that a lot of things work in animals that don't work in man, and so on. But given Masters' attitude towards medical research, there's only so much that you can do:

. . .Quacks provide a very useful service to medical tyros such as myself, because they read all the best journals the day they appear and by the end of the week have turned the results into potions and tinctures. It's like Tommy Lee Jones in Men in Black reading the National Enquirer to find out what aliens are up to, because that's the only paper trashy enough to print the truth. Keep an eye on what the quacks are saying, and you have an idea of what might be promising at the Wild West frontier of medicine. . .

I have to say, in my experience, that this is completely wrong. Keep an eye on what the quacks are saying, and you have an idea of what might have been popular in 1932. Or 1954. Quacks seize onto an idea and never, ever, let it go, despite any and all evidence, so quackery is an interminable museum of ancient junk. New junk is added all the time, though, one has to admit. You might get some cutting-edge science, if your idea of cutting-edge is an advertisement in one of those SkyMall catalogs you get on airplanes. A string of trendy buzzwords super-glued together does not tell you where science is heading.

But Masters means well with this piece. He wants to see Essand's therapy tried out in the clinic, and he wants to help raise money to do that (see the end of the article, which shows how to donate to a fund at Uppsala). I'm fine with that - as far as I can tell, longer shots than this one get into the clinic, so why not? But I'd warn people that their money, as with the rest of the money we put into this business, is very much at risk. If crowdsourcing can get some ideas a toehold in the clinical world, I'm all for it, but it would be a good thing in general if people realized the odds. It would also be a good idea if more people realized how much money would be needed later on, if things start to look promising. No one's going to crowdsource a Phase III trial, I think. . . .

Comments (12) + TrackBacks (0) | Category: Cancer | Clinical Trials | Drug Development

September 6, 2012

Accelerated Approval And Its Discontents

Email This Entry

Posted by Derek

This may sound a little odd coming from someone in the drug industry, but I have a lot of sympathy for the FDA. I'm not saying that I always agree with them, or that I think that they're doing exactly what we need them to do all the time. But I would hate to be the person that would have to decide how they should do things differently. And I think that no matter what, the agency is going to have a lot of people with reasons to complain.

These thoughts are prompted by this article in JAMA on whether or not drug safety is being compromised by the growing number of "Priority Review" drug approvals. There are three examples set out in detail: Caprelsa (vandetanib) for thyroid cancer, Gilenya (fingolimod) for multiple sclerosis, and the anticoagulant Pradaxa (dabigatran). In each of these accelerated cases, safety has turned out to be more of a concern than some people expected, and the authors of this paper are asking if the benefits have been worth the risks.

Pharmalot has a good summary of the paper, along with a reply from the FDA. Their position is that various forms of accelerated approval have been around for quite a few years now, and that the agency is committed to post-approval monitoring in these cases. What they don't say - but it is, I think, true - is that there is no way to have accelerated approvals without occasional compromises in drug safety. Can't be done. You have to try to balance these things on a drug-by-drug basis: how much the new medication might benefit people without other good options, versus how many people it might hurt instead. And those are very hard calls, which are made with less data than you would have under non-accelerated conditions. If these three examples are indeed problematic drugs that made it through the system, no one should be surprised at all. Given the number of accelerated reviews over the years, there have to be some like this. In fact, this goes to show you that the accelerated review process is not, in fact, a sham. If everything that passed through it turned out to be just as clean as things that went through the normal approval process, that would be convincing evidence that the whole thing was just window dressing.

If that's true - and as I said, I certainly believe it is - then the question is "Should there be such a thing as accelerated approval at all?" If you decide that the answer to that is "Yes", then the follow-up is "Is the risk-reward slider set to the right place, or are we letting a few too many things through?" This is the point the authors are making, I'd say, that the answer to that question is "Yes", and we need to move the settings back a bit. But here comes an even trickier question: if you do that, how far back do you go before the whole accelerated approval process is not worth the effort any more? (If you try to make it so that nothing problematic makes it through at all, you've certainly crossed into that territory, to my way of thinking). So if three recent examples like these represent an unacceptable number (and it may be), what is acceptable? Two? One? Those numbers, but over a longer period of time?

And if so, how are you going to do that without tugging on the other end of the process, helping patients who are waiting for new medications? No, these are very, very hard questions, and no matter how you answer them, someone will be angry with you. I have, as I say, a lot of sympathy for the FDA.

Comments (7) + TrackBacks (0) | Category: Drug Development | Regulatory Affairs | Toxicology

September 4, 2012

A New Malaria Compound

Email This Entry

Posted by Derek

There have been many headlines in recent days about a potential malaria cure. I'm not sure what set these off at this time, since the paper describing the work came out back in the spring, but it's certainly worth a look.

This all came out of the Medicines for Malaria Venture, a nonprofit group that has been working with various industrial and academic groups in many areas of malaria research. This is funded through a wide range of donors (corporations, foundations, international agencies), and work has taken place all over the world. In this case (PDF), things began with a collection of about 36,000 compounds (biased towards kinase inhibitor scaffolds) from BioFocus in the UK. These were screened (high-throughput phenotypic readout) at the Eskitis Institute in Australia, and a series of compounds was identified for structure-activity studies. This phase of the work was a three-way collaboration between a chemistry team at the University of Cape Town (led by Prof. Kelly Chibale), biology assay teams at the Swiss Tropical and Public Health Institute, and pharmacokinetics at the Center for Drug Candidate Optimization at Monash University in Australia.
MMV%20compound%2015.png
An extensive SAR workup on the lead series identified some metabolically labile parts of the molecule over on that left-hand side pyridine. These could fortunately be changed without impairing the efficacy against the malaria parasites. The sulfonyl group seems to be required, as does the aminopyridine. These efforts led to the compound shown, MMV390048, which has good blood levels, passes in vitro safety tests, and is curative in a Plasmodium berghei mouse model at a single dose of 30 mg/kg. That's a very promising compound, from the looks of it, since that's better than the existing antimalarials can do. It's also active against drug-resistant strains, as well it might be (see below). Last month the MMV selected it for clinical development.

So how does this compound work? The medicinal chemists in the audience will have looked at that structure and said "kinase inhibitor", and that has to be where to put your money. That, in fact, appears to have been the entire motivation to screen the BioFocus collection. Kinase targets in Plasmodium have been getting attention for several years now; the parasite has a number of enzymes in this class, and they're different enough from human kinases to make attractive targets. (To that point, I have not been able to find results of this latest compound's profile when run against a panel of human kinases, although you'd think that this has surely been done by now). Importantly, none of the existing antimalarials work through such mechanisms, so the parasites have not had a chance to work up any resistance.

But resistance will come. It always does. The best hope for the kinase-based inhibitors is that they'll hit several malaria enzymes at once, which gives the organisms a bigger evolutionary barrier to jump over. The question is whether you can do that without hitting anything bad in the human kinome, but for the relatively short duration of acute malaria treatment, you should be able to get away with quite a bit. Throwing this compound and the existing antimalarials at the parasites simultaneously will really give them something to occupy themselves.

I'll follow the development of this compound with interest. It's just about to hit the really hard part of drug research - human beings in the clinic. This is where we have our wonderful 90% or so failure rates, although those figures are generally better for anti-infectives, as far as I can tell. Best of luck to everyone involved. I hope it works.

Comments (27) + TrackBacks (0) | Category: Drug Development | Infectious Diseases

August 31, 2012

Eli Lilly's Drumbeat of Bad News

Email This Entry

Posted by Derek

Eli Lilly has been getting shelled with bad news recently. There was the not-that-encouraging-at-all failure of its Alzheimer's antibody solanezumab to meet any of its clinical endpoints. But that's the good news, since (at least according to the company) it showed some signs of something in some patients.

We can't say that about pomaglumetad methionil (LY2140023), their metabotropic glutamate receptor ligand for schizophrenia, which is being halted. The first large trial of the compound failed to meet its endpoint, and an interim analysis showed that the drug was unlikely to have a chance of making its endpoints in the second trial. It will now disappear, as will the money spent on it so far. (The first drug project I ever worked on was a backup for an antipsychotic with a novel mechanism, which also failed to do a damned thing in the clinic, and which experience perhaps gave me some of the ideas I have now about drug research).

This compound is an oral prodrug of LY404039, which has a rather unusual structure. The New York Times did a story about the drug's development a few years ago, which honestly makes rather sad reading in light of the current news. It was once thought to have great promise. Note the cynical statement in that last link about how it really doesn't matter if the compound works or not - but you know what? It did matter in the end. This was the first compound of its type, an attempt at a real innovation through a new mechanism to treat mental illness, just the sort of thing that some people will tell you that the drug industry never gets around to doing.

And just to round things off, Lilly announced the results of a head-to-head trial of its anticoagulant drug Effient versus (now generic) Plavix in acute coronary syndrome. This is the sort of trial that critics of the drug industry keep saying never gets run, by the way. But this one was, because Plavix is the thing to beat in that field - and Effient didn't beat it, although there might have been an edge in long-term followup.

Anticoagulants are a tough field - there are a lot of patients, a lot of money to be made, and a lot of room (in theory) for improvement over the existing agents. But just beating heparin is hard enough, without the additional challenge of beating cheap Plavix. It's a large enough patient population, though, that more than one drug is needed because of different responses.

There have been a lot of critics of Lilly's research strategy over the years, and a lot of shareholders have been (and are) yelling for the CEO's head. But from where I sit, it looks like the company has been taking a lot of good shots. They've had a big push in Alzheimer's, for example. Their gamma-secretase inhibitor, which failed in terrible fashion, was a first of its kind. Someone had to be the first to try this mechanism out; it's been a goal of Alzheimer's research for over twenty years now. Solanezumab was a tougher call, given the difficulties that Elan (and Wyeth/Pfizer, J&J, and so on) have had with that approach over the years. But immunology is a black box, different antibodies do different things in different people, and Lilly's not the only company trying the same thing. And they've been doggedly pursuing beta-secretase as well. These, like them or not, are still some of the best ideas that anyone has for Alzheimer's therapy. And any kind of win in that area would be a huge event - I think that Lilly deserves credit for having the nerve to go after such a tough area, because I can tell you that I've been avoiding it ever since I worked on it in the 1990s.

But what would I have spent the money on instead? It's not like there are any low-risk ideas crowding each other for attention. Lilly's portfolio is not a crazy or stupid one - it's not all wild ideas, but it's not all full of attempts to play it safe, either. It looks like the sort of thing any big (and highly competent) drug research organization could have ended up with. The odds are still very much against any drug making it through the clinic, which means that having three (or four, or five) in a row go bad on you is not an unusual event at all. Just a horribly unprofitable one.

Comments (26) + TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials | Drug Development | Drug Industry History | The Central Nervous System

August 29, 2012

How Did the Big Deals of 2007 Work Out?

Email This Entry

Posted by Derek

Startup biopharma companies: they've gotta raise money, right? And the more money, the better, right? Not so right, according to this post by venture capitalist Bruce Booth. Companies need money, for sure, but above a certain threshold there's no correlation with success, either for the company's research portfolio or its early stage investors. (I might add that the same holds true for larger drug companies as well, for somewhat different reasons. Perhaps Pfizer's strategy over the last twenty years has had one (and maybe only one) net positive effect: it's proven that you cannot humungous your way to success in this business. And yes, since you ask, that's the last time I plan to use "humungous" as a verb for a while).

There's also a fascinating look back at FierceBiotech's 2007 "Top Deals", to see what became of the ten largest financing rounds on the list. Some of them have worked out, and some of them most definitely haven't: four of the ten were near-total losses. One's around break-even, two are "works in progress" but could come through, and three have provided at least 2x returns. (Read his post to attach names to these!) And as Booth shows, that's pretty much what you'd expect from the distribution over the entire biotech industry, including all the wild-eyed stuff and the riskiest small fry. Going with the biggest, most lucratively financed companies bought you, in this case, no extra security at all.

A note about those returns: one of the winners on the list is described as having paid out "modest 2x returns" to the investors. That's the sort of quote that inspires outrage among the clueless, because (of course) a 100% profit is rather above the market returns for the last five years. But the risk/reward ratio has not been repealed. You could have gotten those market returns by doing nothing, just by parking the cash in a couple of index funds and sitting back. Investing in startup companies requires a lot more work, because you're taking on a lot more risk.

It was not clear which of those ten big deals in 2007 would pay out, to put it mildly. In fact, if you take Booth's figures so far, an equal investment in each of the top seven companies on the list in 2007 would leave you looking at a slight net loss to date, and that includes one company that would have paid you back at about 3x to 4x. Number eight was the big winner on the list (5x, if you got out at the perfect peak, and good luck with that), and number nine is the 2x return (while number ten is ongoing, but a likely loss). As any venture investor knows, you're looking at a significant risk of losing your entire investment whenever you back a startup, so you'd better (a) back more than one and (b) do an awful lot of thinking about which ones those are. This is a job for the deeply pocketed.
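For anyone who wants to see how that equal-weight arithmetic plays out, here's a minimal sketch. The multiples below are hypothetical stand-ins chosen only to mimic the pattern described above (mostly losses, one 3x-4x winner among the top seven) - they are not Booth's actual figures:

```python
# Minimal sketch: equal-weight payoff across a basket of startups.
# The multiples are hypothetical, chosen only to mimic the pattern in
# the post (several wipeouts, one ~3.5x winner among the top seven).
top_seven_multiples = [0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 3.5]

average = sum(top_seven_multiples) / len(top_seven_multiples)
print(f"Average multiple across the top seven: {average:.2f}x")
# About 0.71x here - a net loss overall, even with a 3.5x payout in the mix.
```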

And when you think about it, a very similar situation obtains inside a given drug company. The big difference is that you don't have the option of not playing the game - something always has to be done. There are always projects going, some of which look more promising than others, some of which will cost more to prosecute than others, and some of which are aimed at different markets than others. You might be in a situation where there are several that look like they could be taken on, but your development organization can't handle so many. What to do? Partner something, park something that can wait (if anything can)? Or you might have the reverse problem, of not enough programs that look like they might work. Do you push the best of a bad lot forward and hope for the best? If not, do you still pay your development people even if they have nothing to develop right now, in the hopes that they soon will?

Which of these clinical programs of yours have the most risk? The biggest potential? Have you balanced those properly? You're sure to lose your entire investment on the majority - the great majority - of them, so choose as wisely as you can. The ones that make it through are going to have to pay for all the others, because if they don't, everyone's out of a job.

This whole process, of accumulating capital and risking it on new ventures, is important enough that we've named an entire economic system for it. It's a high-wire act. Too cautious, and you might not keep up enough to survive. Too risky, and you could lose too much. They do focus one's attention, such prospects, and the thought that other companies are out there trying to get a step on you helps keep you moving, too. It's not a pretty system, but it isn't supposed to be. It's supposed to work.

Comments (1) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

August 23, 2012

Pharma: Geniuses or Con Men?

Email This Entry

Posted by Derek

So here's a comment to this morning's post on stock buybacks, referring both to it and my replies to Donald Light et al. last week. I've added links:

Did you not spend two entire posts last week telling readers how only pharma "knows" how to do drug research and that we should "trust" them and their business model. Now you seem to say that they are either incompetent or conmen looking for a quick buck. So what is it? Does pharma (as it exists today) have a good business model or are they conmen/charlatans out for money? Do they "know" what they are doing? Or are they faking competence?

False dichotomy. My posts on the Donald Light business were mostly to demonstrate that his ideas of how the drug industry works are wrong. I was not trying to prove that the industry itself is doing everything right.

That's because it most certainly isn't. But it is the only biopharma industry we have, and before someone comes along with a scheme to completely rework it, one should ask whether that's a good idea. In this very context, the following quote from Chesterton has been brought up, and it's very much worth keeping in mind:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."

This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable. It is extremely probable that we have overlooked some whole aspect of the question, if something set up by human beings like ourselves seems to be entirely meaningless and mysterious. There are reformers who get over this difficulty by assuming that all their fathers were fools; but if that be so, we can only say that folly appears to be a hereditary disease. But the truth is that nobody has any business to destroy a social institution until he has really seen it as an historical institution. If he knows how it arose, and what purposes it was supposed to serve, he may really be able to say that they were bad purposes, that they have since become bad purposes, or that they are purposes which are no longer served. But if he simply stares at the thing as a senseless monstrosity that has somehow sprung up in his path, it is he and not the traditionalist who is suffering from an illusion.

The drug industry did not arise out of random processes; it looks the way it does now because of a long, long series of decisions. Because we live in a capitalist system, many of these decisions were made to answer the question "Which way would make more money?" That is not guaranteed to give you the best outcome. But neither is it, as some people seem to think, a guarantee of the worst one. Insofar as the need for new and effective drugs is coupled to the ability to make money by doing so, I think the engine works about as well as anything could. Where these interests decouple (tropical diseases, for one), we need some other means.

My problem with stock buybacks is that I think that executives are looking at that same question ("Which way would make more money?") and answering it incorrectly. But under current market conditions, there are many values of "wrong". In the long run, I think (as does Bruce Booth) that it would be more profitable, both for individual companies and for the industry as a whole, to invest more in research. In fact, I think that's the only thing that's going to get us out of the problems that we're in. We need to have more reliable, less expensive ways to discover and develop drugs, and if we're not going to find those by doing research on how to make them happen, then we must be waiting for aliens to land and tell us.

But that long run is uncertain, and may well be too long for many investors. Telling the shareholders that Eventually Things Will Be Better, We Think, Although We're Not Sure How Just Yet will not reassure them, especially in this market. Buying back shares, on the other hand, will.

Comments (22) + TrackBacks (0) | Category: Business and Markets | Drug Development

August 21, 2012

Genentech's Big Worry: Roche?

Email This Entry

Posted by Derek

There's no telling if this is true - it's part of a lawsuit. But a former Genentech employee is claiming that the company rushed trials of its PI3K inhibitor. And why? Worries about their partner:

The suit alleges that the Pi3 Kinase team was guilty of "illegal and unethical conduct" by skirting established scientific and ethical standards required of drug researchers. Juliet Kniley claims she complained in 2008 and then was sidelined in 2009 with a demotion after being instructed to push ahead on the study. And she says she was told twice that Roche would "take this molecule away from us" if they saw her proposed timelines.

Genentech denies the allegations. But you have to wonder if there's still a window here into the relationship between the two companies. . .

Comments (24) + TrackBacks (0) | Category: Drug Development

August 17, 2012

Good Forum for a Response on Drug Innovation?

Email This Entry

Posted by Derek

I wanted to mention that a version of my first post on the Light/Lexchin article is now up over at the Discover magazine site. And if you've been following the comments to that one and to Light's response here, you'll note that readers here have found a number of problems with the original paper's analysis. I've found a few of my own, and I expect there are more.

The British Medical Journal has advised me that they consider a letter to the editor to be the appropriate forum for a response to one of their published articles. I don't think publishing this one did them much credit, but what's done is done. I'm still shopping for a venue for a detailed response on my part - I've had a couple of much-appreciated offers, but I'd like to continue to see what options are out there to get this out to the widest possible audience.

Comments (23) + TrackBacks (0) | Category: Drug Development | Drug Prices

August 15, 2012

A Quick Tour Through Drug Development Reality

Email This Entry

Posted by Derek

I wanted to let people know that I'm working on a long, detailed reply to Donald Light's take on drug research, but that I'm also looking at a few other publication venues for it. More on this as it develops.

But in trying to understand his worldview (and Marcia Angell's, et al.), I think I've hit on at least one fundamental misconception that these people have. All of them seem to think that the key step in drug discovery is target ID - once you've got a molecular target, you're pretty much home free, and all that was done by NIH money, etc., etc. It seems that these people have a very odd idea about high-throughput screening: they seem to think that we screen our vast collections of molecules and out pops a drug.

Of course, out is what a drug does not pop, if you follow my meaning. What pops out are hits, some of which are not what they say on the label any more. And some of the remaining ones just don't reproduce when you run the same experiment again. And even some of the ones that do reproduce are showing up as hits not because they're affecting your target, but because they're hosing up your assay by some other means. Once you've cleared all that underbrush out, you can start to talk about leads.

Those lead molecules are not created equal, either. Some of them are more potent than others, but the more potent ones might have much higher molecular weights (and thus not be as ligand efficient). Or they might be compounds from another project and already known to hit a target that you don't want to hit. Once you pick out the ones that you actually want to do some chemistry on, you may find, as you start to test new molecules in the series, that some of them have more tractable structure-activity relationships than others. There are singletons out there, or near-singletons: compounds that have some activity as they stand, but for which every change in structure represents a step down. The only way to find that out is to test analogs. You might have some more in your files, or you might be able to buy some from the catalogs. But in many cases, you'll have to make them yourself, and a significant number of those compounds you make will be dead ends. You need to know which ones, though, so that's valuable information.

Now you're all the way up to lead series territory, a set of compounds that look like they can be progressed to be more potent and more selective. As medicinal chemists know, though, there's more to life. You need to see how these compounds act on real cells, and in real animals. Do they attain reasonable blood levels? Why or why not? What kinds of metabolites do they produce - are those going to cause trouble? What sort of toxicity do you see at higher doses, or with longer dosing? Is that related to your mechanism of action (sorry to hear it!), or something off-target to do with that particular structure? Can you work your way out of that problem with more new compound variations without losing all of what you've been building in so far? Prepare to go merrily chasing down some blind alleys while you work all this stuff out; the lights are turned off inside the whole maze, and the only illumination is what you can bring yourself.

Now let's assume that you've made it far enough to narrow down to one single compound, the clinical candidate. The fun begins! How about formulations - can this compound be whipped up into a solid form that resembles a real drug that people can put in their mouths, leave on their medicine cabinet shelves, and stock in their warehouses and pharmacies? Can you make enough of the compound to get to that stage, reliably? Most of the time the chemistry has to change at that point, and you'd better hope that some tiny new impurities from the new route aren't going to pop up and be important. You'd really better hope that some new solid form (polymorph) of your substance doesn't get discovered during that new route, because some of those are bricks and their advent is nearly impossible to predict.

Hey, now it's time to go to the clinic. Break out the checkbook, because the money spent here is going to make the preclinical expenses look like roundoff errors. Real human beings are going to take your compound, and guess what? Of all the compounds (the few, the proud) that actually get this far, all the way up to some volunteer's tongue. . .well, a bit over ninety per cent of those are going to fail in trials. Good luck!

While you're nervously checking the clinical results (blood levels and tolerability in Phase I), you have more questions to ask. Do you have good commercial suppliers for all the starting materials, and the right manufacturing processes in place to make the drug, formulate it, and package it? High time you thought about that stuff; your compound is about to go into the first sick humans it's ever seen, in Phase II. You finally get to find out if that target, that mechanism, actually works in people. And if it does (congratulations!), then comes the prize. You get to spend the real money in Phase III: lots and lots of patients, all sorts of patients, in what's supposed to be a real-world shakedown. Prepare to shell out more than you've spent in the whole process to date, because Phase III trials will empty your pockets for sure.

Is your compound one of the five or ten out of a hundred that makes it through Phase III? Enjoy the sensation, because most medicinal chemists experience that only once in their careers, if that. Now you're only a year or two away from getting your drug through the FDA and seeing if it will succeed or fail on the market. And good luck there, too. Contrary to what you might read, not all drugs earn back their costs, so the ones that do had better earn out big-time.
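For readers who like to see that attrition laid out as numbers, here's a minimal sketch of how the cumulative odds work. The per-phase success rates are assumed round figures for illustration only, not precise industry statistics:

```python
# Minimal sketch: cumulative odds of a clinical candidate reaching the
# market. The per-phase success rates are assumed round figures for
# illustration only, not precise industry statistics.
phase_success = [
    ("Phase I", 0.60),
    ("Phase II", 0.30),
    ("Phase III", 0.60),
    ("Approval", 0.85),
]

cumulative = 1.0
for phase, p in phase_success:
    cumulative *= p
    print(f"Still alive after {phase}: {cumulative:.1%}")

# With these assumptions the final figure lands near 9% - right in the
# "five or ten out of a hundred" range mentioned above.
```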

There. That wasn't so easy, was it? And I know that I've left things out, too. The point of all this is that most people have no idea of all these steps - what they're like, how long they can take, that they even exist. It wouldn't surprise me if many people imagine drug discovery, when they imagine it at all, to be the reach-in-the-pile-and-find-a-drug process that I mentioned in the second paragraph. Everything else is figuring out what color to make the package and how much to overcharge for it.

That's why I started this blog back in 2002 - because I was spending all my time on a fascinating, tricky, important job that no one seemed to know anything about. All these details consume the lives and careers of vast numbers of researchers - it's what I've been doing since 1989 - and I wanted, still want, to let people know that we exist.

In the meantime, for the Donald Lights of the world, the Marcia Angells, and the people who repeat their numbers despite apparently knowing nothing about how drugs actually get developed - well, here are some more details for you. The readers of this site with experience in the field will be able to tell you if I haven't described it pretty much as it is. It's not like I and others haven't tried to tell you before.

Comments (60) + TrackBacks (0) | Category: Drug Development | Drug Prices

August 13, 2012

Donald Light Responds on Drug Innovation and Costs

Email This Entry

Posted by Derek

Here's a response from Prof. Light to my post the other day attacking his positions on drug research. I've taken it out of that comments thread to highlight it - he no longer has to wonder if I'll let people here read what he has to say.

I'll have a response as well, but that'll most likely be up tomorrow - I actually have a very busy day ahead of me in the lab, working on a target that (as far as any of us in my group can tell) no one has ever attacked, for a disease for which (as far as any of us in my group can tell) no one has ever found a therapy. And no, I am not making that up.

It's hard to respond to so many sarcastic and baiting trashings by Dr. Lowe and some of his fan club, but let me try. I wonder if Dr. Lowe allows his followers to read what I write here without cutting and editing.

First, let me clarify some of the mis-representations about the new BMJ article that claims the innovation crisis is a myth. While the pharmaceutical industry and its global network of journalists have been writing that the industry has been in real trouble because innovation has been dropping, all those articles and figures are based on the decline of new molecules approved since a sharp spike. FDA figures make it clear that the so-called crisis has been simply a return to the long-term average. In fact, in recent years, companies have been getting above-average approvals for new molecules. Is there any reasonably argument with these FDA figures? I see none from Dr. Lowe or in the 15 pages of comments.

Second, the reported costs of R&D have been rising sharply, and we do not go into these; but here are a couple of points. We note that the big picture, total additional investments in R&D (which are self-reported from closely held figures) over the past 15 years were matched by six times greater increase in revenues. We can all guess various reasons why, but surely a 6-fold return is not a crisis or "unsustainable." In fact, it's evidence that companies know what they are doing.

Another point from international observers is that the costs of clinical trials in the U.S. are much higher than in equally affluent countries and much higher than they need to be, because everyone seems to make money the higher they are in the U.S. market. I have not looked into this but I think it would be interesting to see in what ways costly clinical trials are a boon for several of the stakeholders.

Third, regarding that infamously low cost of R&D that Dr. Lowe and readers like to slam, consider this: The low estimate is based on the same costs of R&D reported by companies (which are self-reported from closely held figures) to their leading policy research center as were used to estimate the average cost is $1.3 bn (and soon to be raised again). Doesn't that make you curious enough to want to find out how we show what inflators were used to ramp the reported costs up, which use to do the same in reverse? Would it be unfair to ask you to actually read how we took this inflationary estimate apart? Or is it easier just to say our estimate is "idiotic" and "absurd"? How about reading the whole argument at www.pharmamyths.net and then discuss its merits?

Our estimate is for net, median corporate cost of D(evelopment) for that same of drugs from the 1990s that the health economists supported by the industry used to ramp up the high estimate. Net, because taxpayer subsidies which the industry has fought hard to expand pay for about 44% of gross R&D costs. Median, because a few costly cases which are always featured raise the average artificially. Corporate, because a lot of R(eseach) and some D is paid for by others - governments, foundations, institutes. We don't include an estimate for R(eseach) because no one knows what it is and it varies so much from a chance discovery that costs almost nothing to years and decades of research, failures, dead ends, new angles, before finally an effective drug is discovered.

So it's an unknown and highly variable R plus more knowable estimate of net, median, corporate costs. Even then, companies never so show their books, and they never compare their costs of R&D to revenues and profits. They just keep telling us their unverifiable costs of R&D are astronomical.

We make clear that neither we nor anyone else knows either the average gross cost or the net, median costs of R&D because major companies have made sure we cannot. Further, the "average cost of R&D" estimate began in 1976 as a lobbying strategy to come up with an artificial number that could be used to wow Congressmen. It's worked wonderfully, mythic as it may be.

Current layoffs need to be considered (as do most things) from a 10-year perspective. A lot industry observers have commented on companies being "bloated" and adding too many hires. Besides trimming back to earlier numbers, the big companies increasingly realize (it has taken them years) that it's smarter to let thousands of biotechs and research teams try to find good new drugs, rather than doing it in-house. To regard those layoffs as an abandonment of research misconstrues the corporate strategies.

Fourth, we never use "me-too." We speak of minor variations, and we say it's clinically valuable to have 3-4 in a given therapeutic class, but marginal gains fall quite low after that.

Fifth, our main point about innovation is that current criteria for approval and incentives strongly reward companies doing exactly what they are doing, developing scores of minor variations to fill their sales lines and market for good profits. We don't see any conspiracy here, only rational economic behavior by smart businessmen.

But while all new drug products are better than placebo or not too worse than a comparator, often against surrogate end points, most of those prove to be little better than last year's "better" drugs, or the years before. . . You can read detailed assessments by independent teams at several sites. Of course companies are delighted when new drugs are really better against clinical outcomes; but meantime we cite evidence that 80 percent of additional pharmaceutical costs go to buying newly patented minor variations. The rewards to do anything to get another cancer drug approved are so great that independent reviewers find few of them help patients much, and the area is corrupted by conflict-of-interest marketing.

So we conclude there is a "hidden business model" behind the much touted business model, to spend billions on R&D to discover breakthrough drugs that greatly improve health and works fine until the "patent cliff" sends the company crashing to the canyon floor. The heroic tale is true to some extent and sometimes; but the hidden business model is to develop minor variations and make solid profits from them. That sounds like rational economic behavior to me.
The trouble is, all these drugs are under-tested for risks of harm, and all drugs are toxic to one degree or another. My book, The Risks of Prescription Drugs, assembles evidence that there is an epidemic of harmful side effects, largely from hundreds of drugs with few or no advantages to offset their risks of harm.

Is that what we want? My neighbors want clinically better drugs. They think the FDA approves clinically better drugs and don't realize that's far from the case. Most folks think "innovation" means clinically superior, but it doesn't. Most new molecules do not prove to be clinically superior. The term "innovation" is used vaguely to signal better drugs for patients; but while many new drugs are technically innovative, they do not help patients much. The false rhetoric of "innovative" and "innovation" needs to be replaced by what we want and mean: "clinically superior drugs."

If we want clinically better drugs, why don't we ask for them and pay according to added value - no more if no better and a lot more if substantially better? Instead, standards for testing effectiveness and risk of harms is being lowered, and - guess what - that will reward still more minor variations by rational economic executives, not more truly superior "innovative" drugs.

I hope you find some of these points worthwhile and interesting. I'm trying to reply to 20 single-space pages of largely inaccurate criticism, often with no reasoned explanation for a given slur or dismissal. I hope we can do better than that. I thought the comments by Matt #27 and John Wayne #45 were particularly interesting.

Donald W. Light

Comments (72) + TrackBacks (0) | Category: "Me Too" Drugs | Drug Development | Drug Prices

August 9, 2012

Getting Drug Research Really, Really Wrong

Email This Entry

Posted by Derek

The British Medical Journal says that the "widely touted innovation crisis in pharmaceuticals is a myth". The British Medical Journal is wrong.

There, that's about as direct as I can make it. But allow me to go into more detail, because that's not the only thing they're wrong about. This is a new article entitled "Pharmaceutical research and development: what do we get for all that money?", and it's by Joel Lexchin (York University) and Donald Light of UMDNJ. And that last name should be enough to tell you where this is all coming from, because Prof. Light is the man who's publicly attached his name to an estimate that developing a new drug costs about $43 million.

I'm generally careful, when I bring up that figure around people who actually develop drugs, not to do so when they're in the middle of drinking coffee or working with anything fragile, because it always provokes startled expressions and sudden laughter. These posts go into some detail about how ludicrous that number is, but for now, I'll just note that it's hard to see how anyone who seriously advances that estimate can be taken seriously. But here we are again.

Light and Lexchin's article makes much of Bernard Munos' work (which we talked about here), which shows a relatively constant rate of new drug discovery. They should go back and look at his graph, because they might notice that the slope of the line in recent years has not kept up with the historical rate. And they completely leave out one of the other key points that Munos makes: that even if the rate of discovery were to have remained linear, the costs associated with it sure as hell haven't. No, it's all a conspiracy:

"Meanwhile, telling "innovation crisis" stories to politicians and the press serves as a ploy, a strategy to attract a range of government protections from free market, generic competition."

Ah, that must be why the industry has laid off thousands and thousands of people over the last few years: it's all a ploy to gain sympathy. We tell everyone else how hard it is to discover drugs, but when we're sure that there are no reporters or politicians around, we high-five each other at how successful our deception has been. Because that's our secret, according to Light and Lexchin. It's apparently not any harder to find something new and worthwhile, but we'd rather just sit on our rears and crank out "me-too" medications for the big bucks:

"This is the real innovation crisis: pharmaceutical research and development turns out mostly minor variations on existing drugs, and most new drugs are not superior on clinical measures. Although a steady stream of significantly superior drugs enlarges the medicine chest from which millions benefit, medicines have also produced an epidemic of serious adverse reactions that have added to national healthcare costs".

So let me get this straight: according to these folks, we mostly just make "minor variations", but the few really new drugs that come out aren't so great either, because of their "epidemic" of serious side effects. Let me advance an alternate set of explanations, one that I call, for lack of a better word, "reality". For one thing, "me-too" drugs are not identical, and their benefits are often overlooked by people who do not understand medicine. There are overcrowded therapeutic areas, but they're not common. The reason that some new drugs make only small advances on existing therapies is not because we like it that way, and it's especially not because we planned it that way. This happens because we try to make big advances, and we fail. Then we take what we can get.

No therapeutic area illustrates this better than oncology. Every new target in that field has come in with high hopes that this time we'll have something that really does the job. Angiogenesis inhibitors. Kinase inhibitors. Cell cycle disruptors. Microtubules, proteasomes, apoptosis, DNA repair, metabolic disruption of the Warburg effect. It goes on and on and on, and you know what? None of them work as well as we want them to. We take them into the clinic, give them to terrified people who have little hope left, and we watch as we provide them with, what? A few months of extra life? Was that what we were shooting for all along, do we grin and shake each other's hands when the results come in? "Another incremental advance! Rock and roll!"

Of course not. We're disappointed, and we're pissed off. But we don't know enough about cancer (yet) to do better, and cancer turns out to be a very hard condition to treat. It should also be noted that the financial incentives are there to discover something that really does pull people back from the edge of the grave, so you'd think that we money-grubbing, public-deceiving, expense-padding mercenaries might be attracted by that prospect. Apparently not.

The same goes for Alzheimer's disease. Just how much money has the industry spent over the last quarter of a century on Alzheimer's? I worked on it twenty years ago, and God knows that never came to anything. Look at the steady march, march, march of failure in the clinic - and keep in mind that these failures tend to come late in the game, during Phase III, and if you suggest to anyone in the business that you can run an Alzheimer's Phase III program and bring the whole thing in for $43 million, you'll be invited to stop wasting everyone's time. Bapineuzumab's trials have surely cost several times that, and Pfizer/J&J are still pressing on. And before that you had Elan working on active immunization, which is still going on, and you have Lilly's other antibody, which is still going on, and Genentech's (which is still going on). No one has high hopes for any of these, but we're still burning piles of money to try to find something. And what about the secretase inhibitors? How much time and effort has gone into beta- and gamma-secretase? What did the folks at Lilly think when they took their inhibitor way into Phase III only to find out that it made Alzheimer's slightly worse instead of helping anyone? Didn't they realize that Professors Light and Lexchin were on to them? That they'd seen through the veil and figured out the real strategy of making tiny improvements on the existing drugs that attack the causes of Alzheimer's? What existing drugs that target the causes of Alzheimer's are they talking about?

Honestly, I have trouble writing about this sort of thing, because I get too furious to be coherent. I've been doing this sort of work since 1989, and I have spent the great majority of my time working on diseases for which no good therapies existed. The rest of the time has been spent on new mechanisms, new classes of drugs that should (or should have) worked differently than the existing therapies. I cannot recall a time when I have worked on a real "me-too" drug of the sort that Light and Lexchin seem to think the industry spends all its time on.

That's because of yet another factor they have not considered: simultaneous development. Take a look at that paragraph above, where I mentioned all those Alzheimer's therapies. Let's be wildly, crazily optimistic and pretend that bapineuzumab manages to eke out some sort of efficacy against Alzheimer's (which, by the way, would put it right into that "no real medical advance" category that Light and Lexchin make so much of). And let's throw caution out the third-floor window and pretend that Lilly's solanezumab actually does something, too. Not much - there's a limit to how optimistic a person can be without pharmacological assistance - but something, some actual efficacy. Now here's what you have to remember: according to people like the authors of this article, whichever of these antibodies makes it through second is a "me-too" drug that offers only an incremental advance, if anything. Even though all this Alzheimer's work was started on a risk basis, in several different companies, with different antibodies developed in different ways, with no clue as to who (if anyone) might come out on top.

All right, now we get to another topic that articles like this latest one are simply not complete without. That's right, say it together: "Drug companies spend a lot more on marketing than they do on research!" Let's ignore, for the sake of argument, the large number of smaller companies that spend all of their money on R&D and none on marketing, because they have nothing to market yet. Let's even ignore the fact that over the years, the percentage of money being spent on drug R&D has actually been going up. No, let's instead go over this in a way that even professors at UMDNJ and York can understand:

Company X spends, let's say, $10 a year on research. (We're lopping off a lot of zeros to make this easier). It has no revenues from selling drugs yet, and is burning through its cash while it tries to get its first one onto the market. It succeeds, and the new drug will bring in $100 a year for the first two or three years, before the competition catches up with some of the incremental me-toos that everyone will switch to for mysterious reasons that apparently have nothing to do with anything working better. But I digress; let's get back to the key point. That $100 a year figure assumes that the company spends $30 a year on marketing (advertising, promotion, patient awareness, brand-building, all that stuff). If the company does not spend all that time and effort, the new drug will only bring in $60 a year, but that's pure profit. (We're going to ignore all the other costs, assuming that they're the same between the two cases).

So the company can bring in $60 a year by doing no promotion, or it can bring in $70 a year after accounting for the expenses of marketing. The company will, of course, choose the latter. "But," you're saying, "what if all that marketing expense doesn't raise sales from $60 up to $100 a year?" Ah, then you are doing it wrong. The whole point, the raison d'etre of the marketing department is to bring in more money than they are spending. Marketing deals with the profitable side of the business; their job is to maximize those profits. If they spend more than those extra profits, well, it's time to fire them, isn't it?
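Here's the same toy comparison in a few lines of code, using nothing but the made-up numbers above:

```python
# Toy comparison using the made-up numbers from the paragraphs above:
# profit with a marketing spend versus profit without one.
revenue_with_marketing = 100
marketing_cost = 30
revenue_without_marketing = 60

profit_with = revenue_with_marketing - marketing_cost   # 100 - 30 = 70
profit_without = revenue_without_marketing              # 60

print(f"Profit with marketing:    ${profit_with}")
print(f"Profit without marketing: ${profit_without}")
# The marketing spend pays for itself here ($70 vs $60); if it didn't,
# the rational move would be to cut it.
```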

R&D, on the other hand, is not the profitable side of the business. Far from it. We are black holes of finance: huge sums of money spiral in beyond our event horizons, emitting piteous cries and futile streams of braking radiation, and are never seen again. The point is, these are totally different parts of the company, doing totally different things. Complaining that the marketing budget is bigger than the R&D budget is like complaining that a car's passenger compartment is bigger than its gas tank, or that a ship's sail is bigger than its rudder.

OK, I've spent about enough time on this for one morning; I feel like I need a shower. Let's get on to the part where Light and Lexchin recommend what we should all be doing instead:

What can be done to change the business model of the pharmaceutical industry to focus on more cost effective, safer medicines? The first step should be to stop approving so many new drugs of little therapeutic value. . .We should also fully fund the EMA and other regulatory agencies with public funds, rather than relying on industry generated user fees, to end industry’s capture of its regulator. Finally, we should consider new ways of rewarding innovation directly, such as through the large cash prizes envisioned in US Senate Bill 1137, rather than through the high prices generated by patent protection. The bill proposes the collection of several billion dollars a year from all federal and non-federal health reimbursement and insurance programmes, and a committee would award prizes in proportion to how well new drugs fulfilled unmet clinical needs and constituted real therapeutic gains. Without patents new drugs are immediately open to generic competition, lowering prices, while at the same time innovators are rewarded quickly to innovate again. This approach would save countries billions in healthcare costs and produce real gains in people’s health.

One problem I have with this is that the health insurance industry would probably object to having "several billion dollars a year" collected from it. And that "several" would not mean "two or three", for sure. But even if we extract that cash somehow - an extraction that would surely raise health insurance costs as it got passed along - we now find ourselves depending on a committee that will determine the worth of each new drug. Will these people determine that when the drug is approved, or will they need to wait a few years to see how it does in the real world? If the drug under- or overperforms, does the reward get adjusted accordingly? How, exactly, do we decide how much a diabetes drug is worth compared to one for multiple sclerosis, or TB? What about a drug that doesn't help many people, but helps them tremendously, versus a drug that's taken by a lot of people, but has only milder improvements for them? What if a drug is worth a lot more to people in one demographic versus another? And what happens as various advocacy groups lobby to get their diseases moved further up the list of important ones that deserve higher prizes and more incentives?

These will have to be some very, very wise and prudent people on this committee. You certainly wouldn't want anyone who's ever been involved with the drug industry on there, no indeed. And you wouldn't want any politicians - why, they might use that influential position to do who knows what. No, you'd want honest, intelligent, reliable people, who know a tremendous amount about medical care and pharmaceuticals, but have no financial or personal interests involved. I'm sure there are plenty of them out there, somewhere. And when we find them, why stop with drugs? Why not set up committees to determine the true worth of the other vital things that people in this country need each day - food, transportation, consumer goods? Surely this model can be extended; it all sounds so rational. I doubt if anything like it has ever been tried before, and it's certainly a lot better than the grubby business of deciding prices and values based on what people will pay for things (what do they know, anyway, compared to a panel of dispassionate experts?)

Enough. I should mention that when Prof. Light's earlier figure for drug expense came out, I had a brief correspondence with him, and I invited him to come to this site and try out his reasoning on people who develop drugs for a living. Communication seemed to dry up after that, I have to report. But that offer is still open. Reading his publications makes me think that he (and his co-authors) have never actually spoken with anyone who does this work or has any actual experience with it. Come on down, I say! We're real people, just like you. OK, we're more evil, fine. But otherwise. . .

Comments (74) + TrackBacks (0) | Category: "Me Too" Drugs | Business and Markets | Cancer | Drug Development | Drug Industry History | Drug Prices | The Central Nervous System | Why Everyone Loves Us

July 16, 2012

AstraZeneca Admits It Spent Too Much Money

Email This Entry

Posted by Derek

Looks like AstraZeneca's internal numbers agree with Matthew Herper's. The company was talking about its current R&D late last week, and this comment stands out:

Discovery head Mene Pangalos told reporters on Thursday that mistakes had been made in the past by encouraging quantity over quality in early drug selection.

"If you looked at our output in terms of numbers of candidates entering the clinic, we were one of the most productive companies in the world, dollar for dollar. If you rated us by how many drugs we launched, we were one of the least successful," he said.

Yep, sending compounds to the clinic is easy - you just declare them to be Clinical Candidates, and the job is done. Getting them through the clinic, now, that's harder, because at that point you're encountering things that can't be rah-rah-ed. Viruses and bacteria, neurons and receptors and tumor cells, they don't care so much about your goals statement and your Corporate Commitment to Excellence. In the end, that's one of the things I like most about research: the real world has the last laugh.

The news aggregator Biospace has a particularly misleading headline on all this: "AstraZeneca Claims Neuroscience Shake-Up is Paying Off ; May Advance at Least 8 Drugs to Final Tests by 2015". I can't find anyone from AZ putting it in quite those terms, fortunately. That would be like saying that my decision, back in Boston, to cut costs by not filling my gas tank is paying off as I approach Philadelphia.

Comments (27) + TrackBacks (0) | Category: Business and Markets | Clinical Trials | Drug Development

June 26, 2012

The Next Five Years in the Drug Industry

Email This Entry

Posted by Derek

Nature Reviews Drug Discovery has an article on the current state of drug development, looking at what's expected to be launched from 2012 to 2016. There's a lot of interesting information, but this is the sentence that brought me up short: "the global pipeline has stopped growing". The total number of known projects in the drug industry (preclinical to Phase III) now appears to have peaked in 2009, at just over 7700. It's now down to 7400, and the biggest declines are in the early stages, so the trend is going to continue for a while.

But before we all hit the panic button, it looks like this is a somewhat artificial decline, since it was based on an artificial peak. In 2006, the benchmark year for the 2007-2011 cohort of launched drugs, there were only about 6100 projects going. I'm not sure what led to the rise over the next three years after that, but we're still running higher. So while I can't say that it's healthy that the number of projects has been declining, we may be largely looking at some sort of artifact in the data. Worth keeping an eye on.

And the authors go on to say that this larger number of new projects, compared to the previous five-year period, should in fact lead to a slight rise in the number of new drugs approved, even if you assume that the success rates drop off a bit. They're guessing 30 to 35 launches per year, well above the post-2000 average. Peak sales for these new products, though, are probably not going to match the historical highs, so that needs to be taken into account.

More data: the coming cohort of new drugs is expected to be a bit more profitable, and a bit more heavily weighted towards small molecules rather than biologics. Two-thirds of the revenues from this coming group are expected to be from drugs that are already in some sort of partnership arrangement, and you'd have to think that this number will increase further for the later-blooming candidates. The go-it-alone blockbuster compound really does seem to be a relative rarity - the complexity and cost of large clinical trials, and the worldwide regulatory and marketing landscape have seen to that.

As for therapeutic area, oncology has the highest number of compounds in development (26% of them as of 2011). It's to the point that the authors wonder if there's an "oncology bubble" on the way, since there are between 2 and 3 compounds chasing each major oncology target. Personally, I think that these compounds are probably still varied enough to make places for themselves, considering the wildly heterogeneous nature of the market. But it's going to be a messy process, figuring out what compounds are useful for which cases.

So in the near term, overall, it looks like things are going to hold together. Past that five-year mark, though, predictions get fuzzier, and the ten-year situation is impossible to forecast at all. That, in fact, is going to be up to those of us doing early research. The shape we're in by that time will be determined, perhaps, by what we go out into the labs and do today. I have a tool compound to work up, to validate (I hope) an early assay, and another project to pay attention to this afternoon. 2022 is happening now.

Update: here are John LaMattina's thoughts on this analysis, asking about some things that may not have been taken into account.

Comments (16) + TrackBacks (0) | Category: Business and Markets | Drug Development

June 13, 2012

Live By The Bricks, Die By The Bricks

Email This Entry

Posted by Derek

I wanted to highlight a couple of recent examples from the literature to show what happens (all too often) when you start to optimize med-chem compounds. The earlier phases of a project tend to drive on potency and selectivity, and the usual way to get these things is to add more stuff to your structures. Then as you start to produce compounds that make it past those important cutoffs, your focus turns more to pharmacokinetics and metabolism, and sometimes you find you've made your life rather difficult. It's an old trap, and a well-known one, but that doesn't stop people from sticking a leg into it.

Take a look at these two structures from ACS Chemical Biology. The starting structure is a pretty generic-looking kinase inhibitor, and as the graphic to its left shows, it does indeed hit a whole slew of kinases. These authors extended the structure out to another loop of their desired target, c-Src, and as you can see, they now have a much more selective compound.
[Image: the starting and extended kinase inhibitor structures, with their kinase selectivity profiles]
But at such a price! Four more aromatic rings, including the dread biphenyl, and only one sp3 carbon in the lot. The compound now tips the scales at MW 555, and looks about as soluble as the Chrysler building. To be fair, this is an academic group, which means that they're presumably after a tool compound. That's a phrase that's used to excuse a lot of sins, but in this case they do have cellular assay data, which means that despite this compound's properties, it's managing to do something. Update: see this comment from the author on this very point. Be warned, though, if you're in drug discovery and you follow this strategy. Adding four flat rings and running up the molecular weight might work for you, but most of the time it will only lead to trouble - pharmacokinetics, metabolic clearance, toxicity, formulation.
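For what it's worth, the property flags being discussed here are easy to compute yourself. Here's a minimal sketch, assuming the open-source RDKit toolkit is installed; the SMILES string is just a placeholder molecule, not one of the compounds from either paper:

```python
# Minimal sketch of the property flags discussed above, assuming the
# open-source RDKit toolkit is installed. The SMILES string is a
# placeholder molecule, not a compound from either paper.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # placeholder (aspirin)
mol = Chem.MolFromSmiles(smiles)

clogp = Descriptors.MolLogP(mol)          # Crippen cLogP estimate
tpsa = Descriptors.TPSA(mol)              # topological polar surface area
mw = Descriptors.MolWt(mol)
aromatic_rings = Lipinski.NumAromaticRings(mol)
fraction_sp3 = Lipinski.FractionCSP3(mol)

flags = []
if mw > 500:
    flags.append("MW > 500")
if clogp > 5:
    flags.append("cLogP > 5 (greasy)")
if aromatic_rings >= 4:
    flags.append("4+ aromatic rings")
if fraction_sp3 < 0.2:
    flags.append("very flat (low Fsp3)")

print(f"MW {mw:.0f}, cLogP {clogp:.1f}, TPSA {tpsa:.0f}, "
      f"aromatic rings {aromatic_rings}, Fsp3 {fraction_sp3:.2f}")
print("Warning flags:", flags if flags else "none")
```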

My second example is from a drug discovery group (Janssen). They report work on a series of gamma-secretase modulators (GSMs) for Alzheimer's. You can tell from the paper that they had quite a wild ride with these things - for one, the activity in their mouse model didn't seem to correlate at all with the concentration of the compounds in the brain. Looking at those structures, though, you have to think that trouble is lurking, and so it is.
[Image: structures of the gamma-secretase modulators discussed in the Janssen paper]

"In all chemical classes, the high potency was accompanied by high lipophilicity (in general, cLogP >5) and a TPSA [topological polar surface area] below 75 Å, explaining the good brain penetration. However, the majority of compounds also suffered from hERG binding with IC50s below 1 μM, CyP inhibition and low solubility, particularly at pH = 7.4 (data not shown). These unfavorable ADME properties can likely be attributed to the combination of high lipophilicity and low TPSA.

That they can. By the time they got to that compound 44, some of these problems had been solved (hERG, CyP). But it's still a very hard-to-dose compound (they seem to have gone with a pretty aggressive suspension formulation) and it's still a greasy brick, despite its impressive in vivo activity. And that's my point. Working this way exposes you to one thing after another. Making greasy bricks often leads to potent in vitro assay numbers, but they're harder to get going in vivo. And if you get them to work in the animals, you often face PK and metabolic problems. And if you manage to work your way around those, you run a much higher risk of nonspecific toxicity. So guess what happened here? You have to go to the very end of the paper to find out:

As many of the GSMs described to date, the series detailed in this paper, including 44a, is suffering from suboptimal physicochemical properties: low solubility, high lipophilicity, and high aromaticity. For 44a, this has translated into signs of liver toxicity after dosing in dog at 20 mg/kg. Further optimization of the drug-like properties of this series is ongoing.

Back to the drawing board, in other words. I wish them luck, but I wonder how much of this structure is going to have to be ripped up and redone in order to get something cleaner?

Comments (39) + TrackBacks (0) | Category: Alzheimer's Disease | Cancer | Drug Development | Pharmacokinetics | Toxicology

June 12, 2012

Predicting Toxicology

Email This Entry

Posted by Derek

One of the major worries during a clinical trial is toxicity, naturally. There are thousands of reasons a compound might cause problems, and you can be sure that we don't have a good handle on most of them. We screen for what we know about (such as hERG channels for cardiovascular trouble), and we watch closely for signs of everything else. But when slow-building low-incidence toxicity takes your compound out late in the clinic, it's always very painful indeed.

Anything that helps to clarify that part of the business is big news, and potentially worth a lot. But advances in clinical toxicology come on very slowly, because the only thing worse than not knowing what you'll find is thinking that you know and being wrong. A new paper in Nature highlights just this problem. The authors have a structural-similarity algorithm to try to test new compounds against known toxicities in previously tested compounds, which (as you can imagine) is an approach that's been tried in many different forms over the years. So how does this one fare?

To test their computational approach, Lounkine et al. used it to estimate the binding affinities of a comprehensive set of 656 approved drugs for 73 biological targets. They identified 1,644 possible drug–target interactions, of which 403 were already recorded in ChEMBL, a publicly available database of biologically active compounds. However, because the authors had used this database as a training set for their model, these predictions were not really indicative of the model's effectiveness, and so were not considered further.

A further 348 of the remaining 1,241 predictions were found in other databases (which the authors hadn't used as training sets), leaving 893 predictions, 694 of which were then tested experimentally. The authors found that 151 of these predicted drug–target interactions were genuine. So, of the 1,241 predictions not in ChEMBL, 499 were true; 543 were false; and 199 remain to be tested. Many of the newly discovered drug–target interactions would not have been predicted using conventional computational methods that calculate the strength of drug–target binding interactions based on the structures of the ligand and of the target's binding site.
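The bookkeeping in that excerpt is easy to lose track of, so here it is re-derived in a few lines (every input number comes straight from the passage above):

```python
# Re-deriving the counts quoted above from the commentary on
# Lounkine et al.; every input number comes from that excerpt.
total_predictions = 1644
in_chembl_training = 403     # excluded (ChEMBL was the training set)
in_other_databases = 348     # counted as true without new experiments
tested_experimentally = 694
confirmed_in_testing = 151

remaining = total_predictions - in_chembl_training                  # 1241
true_hits = in_other_databases + confirmed_in_testing               # 499
false_hits = tested_experimentally - confirmed_in_testing           # 543
untested = remaining - in_other_databases - tested_experimentally   # 199

print(true_hits, false_hits, untested)  # 499 543 199
hit_rate = true_hits / (true_hits + false_hits)
print(f"Share of checked predictions that held up: {hit_rate:.0%}")  # ~48%
```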

Now, some of their predictions have turned out to be surprising and accurate. Their technique identified chlorotrianisene, for example, as a COX-1 inhibitor, and tests show that it seems to be, which wasn't known at all. The classic antihistamine diphenhydramine turns out to be active at the serotonin transporter. It's also interesting to see what known drugs light up the side effect assays the worst. Looking at their figures, it would seem that the topical antiseptic chlorhexidine (a membrane disruptor) is active all over the place. Another guanidine-containing compound, tegaserod, is also high up the list. Other promiscuous compounds are the old antipsychotic fluspirilene and the semisynthetic antibiotic rifaximin. (That last one illustrates one of the problems with this approach, which the authors take care to point out: toxicity depends on exposure. The dose makes the poison, and all that. Rifaximin is very poorly absorbed, and it would take very unusual dosing, like with a power drill, to get it to hit targets in a place like the central nervous system, even if this technique flags them).

The biggest problem with this whole approach is also highlighted by the authors, to their credit. You can see from those figures above that about half of the potentially toxic interactions it finds aren't real, and you can be sure that there are plenty of false negatives, too. So this is nowhere near ready to replace real-world testing; nothing is. But where it could be useful is in pointing out things to test with real-world assays, activities that you probably hadn't considered at all.

But the downside of that is that you could end up chasing meaningless stuff that does nothing but put the fear into you and delays your compound's development, too. That split, "stupid delay versus crucial red flag", is at the heart of clinical toxicology, and is the reason it's so hard to make solid progress in this area. So much is riding on these decisions: you could walk away from a compound, never developing one that would go on to clear billions of dollars and help untold numbers of patients. Or you could green-light something that would go on to chew up hundreds of millions of dollars of development costs (and even more in opportunity costs, considering what you could have been working on instead), or even worse, one that makes it onto the market and has to be withdrawn in a blizzard of lawsuits. It brings on a cautious attitude.

Comments (21) + TrackBacks (0) | Category: Drug Development | In Silico | Toxicology

June 5, 2012

Merck Finds Its Phase II Candidates For Sale on the Internet

Email This Entry

Posted by Derek

Via Pharmalot, it appears that a former WuXi employee helped himself to samples of two Merck Phase II clinical candidates that were under evaluation. The samples were then offered for sale.

Here's a link to a Google Translate version of a Chinese news report. It looks like gram quantities were involved, along with NMR spectra, with the compounds being provided to a middleman. It's not clear who bought them from him, but the article gives the impression that someone did, was satisfied with the transaction, and wanted more. But in the meantime, Merck did pick up on an offer made by this middleman to sell one of the compounds online, and immediately went after him, which unraveled the whole scheme. (The machine translation is pretty rocky, but I did appreciate that an idiom came through: it mentions that having these valuable samples in an unlocked cabinet was like putting fish in front of a cat).

I would think that this kind of thing is just the nightmare that WuXi's management fears - and if it isn't, it should be. The cost advantage to doing business with them (and other offshore contract houses) is still real, but not as large as it used to be. Stories like this can close that price gap pretty quickly.

Comments (45) + TrackBacks (0) | Category: Business and Markets | Drug Development | The Dark Side

June 4, 2012

Scaling Up Artemisinin

Email This Entry

Posted by Derek

A recent article in Science illustrates a number of points about drug development and scale-up. It's about artemisinin, the antimalarial. Peter Seeberger, a German professor of chemistry (Max Planck-Potsdam), has worked out what looks like a good set of conditions for a key synthetic step (dihydroartemisinic acid to artemisinin), and would like to see these used on large scale to bring the cost of the drug down.

That sounds like a reasonably simple story, but it isn't. Here are a few of the complications:

But Seeberger's method has yet to prove its mettle. It needs to be scaled up, and he can't say how much prices would come down if it worked. Using it in a large facility would require a massive investment, and so far, nobody has stepped up to the plate. What's more, pharma giant Sanofi will open a brand-new facility later this year to make artemisinin therapies based on Amyris's technology: yeast cells that produce a precursor of the drug. Although Seeberger says his discovery would complement that process, Sanofi says it's too late now to adopt it.

The usual route has been to extract artemisinin from its source, Artemisia annua. That's been quite a boom-and-bust cycle over the years, and the price has never really been steady (or particularly low, either). Amyris worked for some years to engineer yeast to produce artemisinic acid, which can then be extracted and converted into the final drug, and this is what's now being scaled up with Sanofi-Aventis.

That process also uses a photochemical oxidation, but in batch mode. I'm a big fan of flow chemistry, and I've done some flow photochemistry myself, so I can agree that when it's optimized, it can be a great improvement over such batch conditions. Seeberger's method looks promising, but Sanofi isn't ready to retool to use it when they have their current conditions worked out. Things seem to be at an impasse:

But what will happen with Seeberger's discovery is still unclear. Sanofi's plant is about to open, and the company isn't going to bet on an entirely new technique that has yet to prove that it can be scaled up. In an e-mail to Science, the company calls Seeberger's solution “a clever approach,” but says that “so far the competitivity of this technique has not been demonstrated.”

The ideal solution would be if other companies adopt the combination of Amyris's yeast cells and Seeberger's method, [Michigan supply-chain expert] Yadav says; “then, the price for the drugs could go down significantly.” But a spokesperson for OneWorld Health, the nonprofit pharmaceutical company that has backed Sanofi's project, says there are no plans to make the yeast cells available to any other party.

Seeberger himself is trying to make something happen:

On 19 April, Seeberger invited interested parties to a meeting in Berlin to explore the options. They included representatives of Artemisia growers and extractors, pharmaceutical companies GlaxoSmithKline and Boehringer Ingelheim, as well as the Clinton Foundation, UNITAID, and the German Agency for International Cooperation. (The Bill and Melinda Gates Foundation canceled at the last minute.) None of the funders wanted to discuss the meeting with Science. Seeberger says he was asked many critical questions—“But then the next day, my phone did not stop ringing.” He is now in discussions with several interested parties, he says.

As I say, I like his chemistry. But I can sympathize with the Sanofi people as well. Retooling a working production route is not something you undertake lightly, and the Seeberger chemistry will doubtless need some engineering along the way to reach its potential. The best solution seems to me to be basically what's happening: Sanofi cranks out the drug using its current process, which should help a great deal with the supply in the short term. Meanwhile, Seeberger tries to get his process ready for the big time, with the help of an industrial partner. I wish him luck, and I hope things don't stall out along the way. More on all this as it develops over the next few months.

Comments (22) + TrackBacks (0) | Category: Drug Development

May 24, 2012

An Oral Insulin Pill?

Email This Entry

Posted by Derek

Bloomberg has an article on Novo Nordisk and their huge ongoing effort to come up with an orally available form of insulin. That's been a dream for a long time now, but it's always been thought to be very close to impossible. The reasons for this are well known: your gut will treat a big protein like insulin pretty much like it treats a hamburger. It'll get digested, chopped into its constituent amino acids, and absorbed as non-medicinally-active bits which are used as raw material once inside the body. That's what digestion is. The gut wall specifically guards against letting large biomolecules through intact.

So you're up against a lot of defenses when you try to make something like oral insulin. Modifying the protein itself to make it more permeable and stable will be a big part of it, and formulating the pill to escape the worst of the gut environments will be another. Even then, you have to wonder about patient-to-patient variability in digestion, intestinal flora, and so on. The dosing is probably going to have to be pretty strict with respect to meals (and the content of those meals).

But insulin dosing is always going to be strict, because there's a narrow window to work in. That's one of the factors that's helped to sink so many other alternative-dosing schemes for it, most famously Pfizer's Exubera. The body's response to insulin is brittle in the extreme. If you take twice as much antihistamine as you should, you may feel funny. If you take twice as much insulin as you should, you're going to be on the floor, and you may stay there.

So I salute Novo Nordisk for trying this. The rewards will be huge if they get it to work, but it's a long way from working just yet.

Comments (32) + TrackBacks (0) | Category: Diabetes and Obesity | Drug Development | Pharmacokinetics

May 23, 2012

Another Vote Against Rhodanines

Email This Entry

Posted by Derek

For those of you who've had to explain to colleagues (in biology or chemistry) why you're not enthusiastic about the rhodanine compounds that came out of your high-throughput screening effort, there's now another paper to point them to.

The biological activity of compounds possessing a rhodanine moiety should be considered very critically despite the convincing data obtained in biological assays. In addition to the lack of selectivity, unusual structure–activity relationship profiles and safety and specificity problems mean that rhodanines are generally not optimizable.

That's well put, I think, although this has been a subject of debate. I would apply the same language to the other "PAINS" mentioned in the Baell and Holloway paper, which brought together a number of motifs that have set off alarm bells over the years. These structures are guilty until proven innocent. If you have a high-value target and feel that it's worth the time and trouble to prove them so, that may well be the right decision. But if you have something else to advance, you're better off doing so. As I've said here before, ars longa, pecunia brevis.

Comments (3) + TrackBacks (0) | Category: Drug Assays | Drug Development

May 22, 2012

The NIH's Drug Repurposing Initiative: Will It Be a Waste?

Email This Entry

Posted by Derek

The NIH's attempt to repurpose shelved development compounds and other older drugs is underway:

The National Institutes of Health (NIH) today announced a new plan for boosting drug development: It has reached a deal with three major pharmaceutical companies to share abandoned experimental drugs with academic researchers so they can look for new uses. NIH is putting up $20 million for grants to study the drugs.

"The goal is simple: to see whether we can teach old drugs new tricks," said Health and Human Services Secretary Kathleen Sebelius at a press conference today that included officials from Pfizer, AstraZeneca, and Eli Lilly. These companies will give researchers access to two dozen compounds that passed through safety studies but didn't make it beyond mid-stage clinical trials. They shelved the drugs either because they didn't work well enough on the disease for which they were developed or because a business decision sidelined them.

There are plenty more where those came from, and I certainly wish people luck finding uses for them. But I've no idea what the chances for success might be. On the one hand, having a compound that's passed all the preclinical stages of development and has then been into humans is no small thing. On that ever-present other hand, though, randomly throwing these compounds against unrelated diseases is unlikely to give you anything (there aren't enough of them to do that). My best guess is that they have a shot in closely related disease fields - but then again, testing widely might show us that there are diseases we didn't realize were related to each other.

John LaMattina is skeptical:

Well, the NIH has recently expanded the remit of NCATS. NCATS will now be testing drugs that have been shelved by the pharmaceutical industry for other potential uses. The motivation for this is simple. They believe that these once promising but failed compounds could have other uses that the inventor companies haven’t yet identified. I’d like to reiterate the view of Dr. Vagelos – it’s fairy time again.

My views on this sort of initiative, which goes by a variety of names – “drug repurposing,” “drug repositioning,” “reusable drugs” – have been previously discussed in my blog. I do hope that people can have success in this type of work. But I believe successes are going to be rare.

The big question is, rare enough to count the money and time as wasted, or not? I guess we'll find out. Overall, I'd rather start with a compound that I know does what I want it to do, and then try to turn it into a drug (phenotypic screening). Starting with a compound that you know is a drug, but doesn't necessarily do what you want it to, is going to be tricky.

Comments (33) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | Drug Development | Drug Industry History

May 21, 2012

A New Way to Kill Amoebas, From An Old Drug

Email This Entry

Posted by Derek

Here's a good example of phenotypic screening coming through with something interesting and worthwhile: a screen against Entamoeba histolytica, the protozoan that causes amoebic dysentery and kills tens of thousands of people every year. (Press coverage here).

It wasn't easy. The organism is an anaerobe, which is a bad fit for most robotic equipment, and engineering a decent readout for the assay wasn't straightforward, either. They did have a good positive control, though - the nitroimidazole drug metronidazole, which is the only agent approved currently against the parasite (and to which it's becoming resistant). A screen of nearly a thousand known drugs and bioactive compounds showed eleven hits, of which one (auranofin) was much more active than metronidazole itself.

Auranofin's an old arthritis drug. It's a believable result, because the compound has also been shown to have activity against trypanosomes, Leishmania parasites, and Plasmodium malaria parasites. This broad-spectrum activity makes some sense when you realize that the drug's main function is to serve as a delivery vehicle for gold, whose activity in arthritis is well-documented but largely unexplained. (That activity is also the basis for persistent theories that arthritis may have an infectious-disease component).

The target in this case may well be arsenite-inducible RNA-associated protein (AIRAP), which was strongly induced by drug treatment. The paper notes that arsenite and auranofin are both known inhibitors of thioredoxin reductase, which strongly suggests that this is the mechanistic target here. The organism's anaerobic lifestyle fits in with that; this enzyme would presumably be its main (perhaps only) path for scavenging reactive oxygen species. It has a number of important cysteine residues, which are very plausible candidates for binding to a metal like gold. And sure enough, auranofin (and two analogs) are potent inhibitors of the purified amoeba enzyme.

The paper takes the story all the way to animal models, where auranofin completely outperforms metronidazole. The FDA has now given it orphan-drug status for amebiasis, and the way appears clear for a completely new therapeutic option in this disease. Congratulations to all involved; this is excellent work.

Comments (10) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | Drug Development | Infectious Diseases

A Molecular Craigslist?

Email This Entry

Posted by Derek

Mat Todd at the University of Sydney (whose open-source drug discovery work on schistosomiasis I wrote about here) has an interesting chemical suggestion. His lab is also involved in antimalarial work (here's an update, for those interested, and I hope to post about this effort more specifically). He's wondering about whether there's room for a "Molecular Craigslist" for efforts like these:

Imagine there is a group somewhere with expertise in making these kinds of compounds, and who might want to make some analogs as part of a student project, in return for collaboration and co-authorship? What about a Uni lab which might be interested in making these compounds as part of an undergrad lab course?

Wouldn’t it be good if we could post the structure of a molecule somewhere and have people bid on providing it? i.e. anyone can bid – commercial suppliers, donators, students?

Is there anything like this? Well, databases like Zinc and Pubchem can help in identifying commercial suppliers and papers/patents where groups have made related compounds, but there's no tendering process where people can post molecules they want. Science Exchange has, I think, commercial suppliers, but not a facility to allow people to donate (I may be wrong), or people to volunteer to make compounds (rather than be listed as generic suppliers). Presumably the same goes for eMolecules, and Molport?

Is there a niche here for a light client that permits the process I’m talking about? Paste your Smiles, post the molecule, specifying a purpose (optional), timeframe, amount, type of analytical data needed, and let the bidding commence?

The closest thing I can think of is Innocentive, which might be pretty close to what he's talking about. It's reasonably chemistry-focused as well. Any thoughts out there?

Comments (19) + TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | Drug Development | Infectious Diseases

May 17, 2012

A Preventative Trial for Alzheimer's: The Right Experiment

Email This Entry

Posted by Derek

Alzheimer's disease is in the news, as the first major preventative drug trial gets underway. I salute the people who have made this happen, because we're bound to learn a lot from the attempt, even while I fear the chances for success are not that good.

A preventative trial for Alzheimer's would, under normal circumstances, be a nightmarish undertaking. The disease is quite variable and comes on slowly, and it's proven very difficult to predict who might start to show symptoms as they age. You'd be looking at dosing a very large number of people (thousands, even tens of thousands?) for a very long time (years, maybe a decade or two?) in order to have a chance at statistical significance. And you would, in the course of things, be giving a lot of drug to a lot of people who (in the end) would have turned out not to need it. No, it's no surprise that no one's gone that route.

But there's a way out of that impasse: find a population with some sort of amyloid-pathway mutation. Now you know exactly who will come down with symptoms, and (unfortunately) you also know that they're going to come down with them earlier and more quickly as well. There are several of these around the world; the "Swedish" and "Dutch" mutations are probably the most famous. There's a Colombian mutation too, with a well-defined patient population that's been studied for years, and that's where this new study will take place.

About 300 people will take part in a trial of an experimental antibody therapy against amyloid protein, crenezumab. This was developed by AC Immune in Switzerland and licensed to Genentech, and is one of many amyloid-targeted antibodies that have come along over the years. (The best-known is bapineuzumab, currently in Phase III). Genentech (Roche) will be putting up the majority of the money for the trial ($65 million, with $16 million from the NIH and $15 million in private foundation money). Just in passing, weren't some people trying to convince everyone a year ago that it only costs $43 million total to develop a new drug? Har, har.

100 people with the mutation will get the antibody every two weeks, and 100 more will get placebo. There are also 100 non-carriers mixed in, who will all get placebo, because some carriers have indicated that they don't want to know their status. Everyone will go through a continuing battery of cognitive and psychological tests, as well as brain imaging and a great deal of blood work, which (if we're lucky) could furnish tips towards clinical biomarkers for future trials.

So overall, I think that this trial is an excellent idea, and I very much hope that a lot of useful information comes out of it. But I've no firm hopes that it will pan out therapeutically. This will be a direct test of the amyloid hypothesis for Alzheimer's, and although there's a tremendous amount of evidence for that line of thought, there's a lot against it as well. Anyone who really thinks they know what will happen in this situation hasn't thought hard enough about it. But that's the best kind of experiment, isn't it?

Comments (18) + TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials | Drug Development

May 11, 2012

Competitive Intelligence: Too Much or Too Little?

Email This Entry

Posted by Derek

Drug companies are very attuned to competitive intelligence. There's a lot of information sloshing around out there, and you'd be wise to pay attention to it. Publications in journals are probably the least of it - by the time something is written up for publication from inside a pharma company, it's either about to be on the drugstore shelves or it never will be at all. Patents are far more essential, and if you're going to watch anything, you should watch the patent applications in your field.

But there's more. Meetings are a big source of disclosure, as witness the Wall Street frenzies around ASCO and the like. Talks and posters release information that won't show up in the literature for a long time (if indeed it ever does). And there are plenty of other avenues. The question is, though, how much time and money do you want to spend on this sort of thing?

There are commercial services (such as Integrity) that monitor companies, compounds, and therapeutic areas in this fashion, and they're happy to sell you their services, which are not cheap. But figuring out the cost/benefit ratio isn't easy. My guess is that these things, while useful, can be thought of as insurance. You're paying to make sure that nothing big happens that you're unaware of (or unaware of in enough time).

So here's a question for the readership: has competitive intelligence ever made a big difference for you? Positive and negative results both welcome; "I'm so glad we found out about X" versus "I really wish we'd known about Y". Any thoughts?

Comments (16) + TrackBacks (0) | Category: Drug Development

May 3, 2012

A Long-Delayed COX2 Issue Gets Settled - For $450 Million?

Email This Entry

Posted by Derek

Has the last shot been fired, very quietly, in the COX-2 discovery wars? Here's the background, in which some readers of this site have probably participated at various times. Once it was worked out that the nonsteroidal antiinflammatory drugs (aspirin, ibuprofen et al.) were inhibitors of the enzyme cyclooxygenase, it began to seem likely that there were other forms of the enzyme as well. But for a while, no one could put their hands on one. That changed in the early 1990s, when Harvey Herschman at UCLA reported the mouse COX2 gene. The human analog was discovered right on the heels of that one, with priority usually given to Dan Simmons of BYU, though Donald Young of the University of Rochester got there at very nearly the same time.

The Rochester story is one that many readers will be familiar with. The university, famously, obtained a patent for compounds that exerted a therapeutic effect through inhibition of COX-2, without specifying what compounds those might be. They did not, in fact, have any, nor did they give any hints about what they'd look like, and this is what sank them in the end when the university lost its case against Searle (and its patent) for not fulfilling the "written description" requirement.

But there was legal action on the BYU end of things, too. Simmons and the university filed suit several years ago, saying that Simmons had entered into a contract with Monsanto in 1991 to discover COX2 inhibitors. The suit claimed that Monsanto had (wrongly) advised Simmons not to file for a patent on his discoveries, and had also reversed course, terminating the deal to concentrate on the company's internal efforts instead once it had obtained what it needed from the Simmons work.

That takes us to the tangled origin of the COX2 chemical matter. The progenitor compound is generally taken to be DuP-697, which was discovered and investigated before the COX-2 enzyme was even characterized. The compound had a strong antiinflammatory profile that was nonetheless different from the NSAIDs, which led to strong suspicions that it was indeed acting through the putative "other cyclooxygenase". And so it proved, once the enzyme was discovered; a look at its structure next to the marketed drugs shows what a robust series of structures it turned out to anchor.

One big difference between the BYU case and the Rochester case was that Simmons did indeed have a contract, and it was breach-of-contract that formed the basis for the suit. The legal maneuverings have been going on for several years. But now Pfizer has issued a press release saying that they have reached "an amicable settlement on confidential terms". The only real detail given is that they're going to establish the Dan Simmons Chair at BYU in recognition of his work.

But there may be more to it than that. Pfizer has also reported taking a $450 million charge against earnings related to this whole matter, which certainly makes one think of Latin sayings, among them post hoc, ergo propter hoc and especially quid pro quo. We may not ever get the full details, since part of the deal would presumably include not releasing them. But it looks like a substantial sum has changed hands.

Comments (12) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Patents and IP

April 30, 2012

India's First Drug Isn't India's First Drug

Email This Entry

Posted by Derek

There have been a number of headlines the last few days about Ranbaxy's Synriam, an antimalarial that's being touted as the first new drug developed inside the Indian pharma industry (and Ranbaxy as the first Indian company to do it).

But that's not quite true, as this post from The Allotrope makes clear. (Its author, Akshat Rathi, found one of my posts when he started digging into the story). Yes, Synriam is a mixture of a known antimalarial (piperaquine) and arterolane. And arterolane was definitely not discovered in India. It was part of a joint effort from the US, UK, Australia, and Switzerland, coordinated by the Swiss-based Medicines for Malaria Venture.

Ranbaxy did take on the late-stage development of this drug combination, after MMV backed out due to not-so-impressive performance in the clinic. As Rathi puts it:

Although Synriam does not qualify as ‘India’s first new drug’ (because none of its active ingredients were wholly developed in India), Ranbaxy deserves credit for being the first Indian pharmaceutical company to launch an NCE before it was launched anywhere else in the world.

And that's something that not many countries have done. I just wish that Ranbaxy were a little more honest about that in their press release.

Comments (8) + TrackBacks (0) | Category: Drug Development | Infectious Diseases

April 12, 2012

A Federation of Independent Researchers?

Email This Entry

Posted by Derek

I've had an interesting e-mail from a reader who wants to be signed as "Mrs. McGreevy", and it's comprehensive enough that I'm going to reproduce it in full below.

As everyone but the editorial board of C&E News has noticed, jobs in chemistry are few and far between right now. I found your post on virtual biotechs inspiring, but it doesn't look like anyone has found a good solution for how to support these small firefly businesses until they find their wings, so to speak. Lots of editorials, lots of meetings, lots of rueful headshaking, no real road map forward for unemployed scientists.

I haven't seen this proposed anywhere else, so I'm asking you and your readership if this idea would fly:

What about a voluntary association of independent research scientists?

I'm thinking about charging a small membership fee (for non-profit administration and hard costs) and using group buying power for the practical real-world support a virtual biotech would need:

1. Group rates on health and life insurance.

How many would-be entrepreneurs are stuck in a job they hate because of the health care plan, or even worse, are unemployed or underemployed and uninsurable, quietly draining their savings accounts and praying no one gets really sick? I have no idea how this would work across state lines, or if it is even possible, but would it hurt to find out? Is anyone else looking?

2. Group rates on access to journals and library services.

This is something I do know a bit about. My M.S. is in library science, and I worked in the Chemistry Library in a large research institution for years during grad school. What if there were one centralized virtual library to which unaffiliated researchers across the country could log in for ejournal access? What if one place could buy and house the print media that start-ups would need to access every so often, and provide a librarian to look things up-- it's not like everyone needs their own print copy of the Canada & US Drug Development Industry & Outsourcing Guide 2012 at $150 a pop. (But if 350 people paid $1 a year for a $350/yr online subscription . . . )

Yes, some of you could go to university libraries and look these things up and print off articles to read at home, but some of you can't. You're probably violating some sort of terms of service agreement the library and publisher worked out anyway. It's not like anyone is likely to bust you unless you print out stacks and stacks of papers, but still. It's one more hassle for a small company to deal with, and everyone will have to re-invent the wheel and waste time and energy negotiating access on their own.

3. How about an online community for support and networking-- places for blogs, reviews, questions, answers, exchanges of best practices, or even just encouragement for that gut-wrenching feeling of going out on your own as a new entrepreneur?

4. What sort of support for grantwriting is out there? Is there a hole that needs to be filled?

5. How about a place to advertise your consulting services or CRO, or even bid for a contract? Virtual RFP posting?

6. Would group buying power help negotiate rates with CROs? How about rates for HTS libraries, for those of you who haven't given up on it completely?

Is there a need for this sort of thing? Would anyone use it if it were available? How much would an unaffiliated researcher be willing to pay for the services? Does anyone out there have an idea of what sort of costs are involved, and what sort of critical mass it would take to achieve the group buying power needed to make this possible?

I'd be happy to spark a discussion on what a virtual biotech company needs besides a spare bedroom and a broadband connection, even if the consensus opinion is that the OP is an ill-informed twit with an idea that will never fly. What do you need to get a virtual biotech started? How do we make it happen? There are thousands of unemployed lab scientists, and I refuse to believe that the only guy making a living these days from a small independently-funded lab is Bryan Cranston.

A very worthy topic indeed, and one whose time looks to have come. Thoughts on how to make such a thing happen?

Comments (59) + TrackBacks (0) | Category: Business and Markets | Drug Development | General Scientific News | The Scientific Literature

April 4, 2012

The Artificial Intelligence Economy?

Email This Entry

Posted by Derek

Now here's something that might be about to remake the economy, or (on the other robotic hand) it might not be ready to just yet. And it might be able to help us out in drug R&D, or it might turn out to be mostly beside the point. What the heck am I talking about, you ask? The so-called "Artificial Intelligence Economy". As Adam Ozimek says, things are looking a little more futuristic lately.

He's talking about things like driverless cars and quadrotors, and Tyler Cowen adds the examples of things like Apple's Siri and IBM's Watson, as part of a wider point about American exports:

First, artificial intelligence and computing power are the future, or even the present, for much of manufacturing. It’s not just the robots; look at the hundreds of computers and software-driven devices embedded in a new car. Factory floors these days are nearly empty of people because software-driven machines are doing most of the work. The factory has been reinvented as a quiet place. There is now a joke that “a modern textile mill employs only a man and a dog—the man to feed the dog, and the dog to keep the man away from the machines.”

The next steps in the artificial intelligence revolution, as manifested most publicly through systems like Deep Blue, Watson and Siri, will revolutionize production in one sector after another. Computing power solves more problems each year, including manufacturing problems.

Two MIT professors have written a book called Race Against the Machine about all this, and it appears to be sort of a response to Cowen's earlier book The Great Stagnation. (Here's an article of theirs in The Atlantic making their case).

One of the export-economy factors that it (and Cowen) bring up is that automation makes a country's wages (and labor costs in general) less of a factor in exports, once you get past the capital expenditure. And as the size of that expenditure comes down, it becomes easier to make that leap. One thing that means, of course, is that less-skilled workers find it harder to fit in. Here's another Atlantic article, from the print magazine, which looked at an auto-parts manufacturer with a factory in South Carolina (the whole thing is well worth reading):

Before the rise of computer-run machines, factories needed people at every step of production, from the most routine to the most complex. The Gildemeister (machine), for example, automatically performs a series of operations that previously would have required several machines—each with its own operator. It’s relatively easy to train a newcomer to run a simple, single-step machine. Newcomers with no training could start out working the simplest and then gradually learn others. Eventually, with that on-the-job training, some workers could become higher-paid supervisors, overseeing the entire operation. This kind of knowledge could be acquired only on the job; few people went to school to learn how to work in a factory.
Today, the Gildemeisters and their ilk eliminate the need for many of those machines and, therefore, the workers who ran them. Skilled workers now are required only to do what computers can’t do (at least not yet): use their human judgment.

But as that article shows, more than half the workers in that particular factory are, in fact, rather unskilled, and they make a lot more than their Chinese counterparts do. What keeps them employed? That calculation on what it would take to replace them with a machine. The article focuses on one of those workers in particular, named Maddie:

It feels cruel to point out all the Level-2 concepts Maddie doesn’t know, although Maddie is quite open about these shortcomings. She doesn’t know the computer-programming language that runs the machines she operates; in fact, she was surprised to learn they are run by a specialized computer language. She doesn’t know trigonometry or calculus, and she’s never studied the properties of cutting tools or metals. She doesn’t know how to maintain a tolerance of 0.25 microns, or what tolerance means in this context, or what a micron is.

Tony explains that Maddie has a job for two reasons. First, when it comes to making fuel injectors, the company saves money and minimizes product damage by having both the precision and non-precision work done in the same place. Even if Mexican or Chinese workers could do Maddie’s job more cheaply, shipping fragile, half-finished parts to another country for processing would make no sense. Second, Maddie is cheaper than a machine. It would be easy to buy a robotic arm that could take injector bodies and caps from a tray and place them precisely in a laser welder. Yet Standard would have to invest about $100,000 on the arm and a conveyance machine to bring parts to the welder and send them on to the next station. As is common in factories, Standard invests only in machinery that will earn back its cost within two years. For Tony, it’s simple: Maddie makes less in two years than the machine would cost, so her job is safe—for now. If the robotic machines become a little cheaper, or if demand for fuel injectors goes up and Standard starts running three shifts, then investing in those robots might make sense.

At this point, some similarities to the drug discovery business will be occurring to readers of this blog, along with some differences. The automation angle isn't as important, or not yet. While pharma most definitely has a manufacturing component (and how), the research end of the business doesn't resemble it very much, despite numerous attempts by earnest consultants and managers to make it so. From an auto-parts standpoint, there's little or no standardization at all in drug R&D. Every new drug is like a completely new part that no one's ever built before; we're not turning out fuel injectors or alternators. Everyone knows how a car works. Making a fundamental change in that plan is a monumental challenge, so the auto-parts business is mostly about making small variations on known components to the standards of a given customer. But in pharma - discovery pharma, not the generic companies - we're wrenching new stuff right out of thin air, or trying to.

So you'd think that we wouldn't be feeling the low-wage competitive pressure so much, but as the last ten years have shown, we certainly are. Outsourcing has come up many a time around here, and the very fact that it exists shows that not all of drug research is quite as bespoke as we might think. (Remember, the first wave of outsourcing, which is still very much a part of the business, was the move to send the routine methyl-ethyl-butyl-futile analoging out somewhere cheaper). And this takes us, eventually, to the Pfizer-style split between drug designers (high-wage folks over here) and the drug synthesizers (low-wage folks over there). Unfortunately, I think that you have to go the full reductio ad absurdum route to get that far, but Pfizer's going to find out for us if that's an accurate reading.

What these economists are also talking about is, I'd say, the next step beyond Moore's Law: once we have all this processing power, how do we use it? The first wave of computation-driven change happened because of the easy answers to that question: we had a lot of number-crunching that was being done by hand, or very slowly by some route, and we now had machines that could do what we wanted to do more quickly. This newer wave, if wave it is, will be driven more by software taking advantage of the hardware power that we've been able to produce.

The first wave didn't revolutionize drug discovery in the way that some people were hoping for. Sheer brute force computational ability is of limited use in drug discovery, unfortunately, but that's not always going to be the case, especially as we slowly learn how to apply it. If we really are starting to get better at computational pattern recognition and decision-making algorithms, where could that have an impact?

It's important to avoid what I've termed the "Andy Grove fallacy" in thinking about all this. I think that it is a result of applying first-computational-wave thinking too indiscriminately to drug discovery, which means treating it too much like a well-worked-out human-designed engineering process. Which it certainly isn't. But this second-wave stuff might be more useful.

I can think of a few areas: in early drug discovery, we could use help teasing patterns out of large piles of structure-activity relationship data. I know that there are (and have been) several attempts at doing this, but it's going to be interesting to see if we can do it better. I would love to be able to dump a big pile of structures and assay data points into a program and have it say the equivalent of "Hey, it looks like an electron-withdrawing group in the piperidine series might be really good, because of its conformational similarity to the initial lead series, but no one's ever gotten back around to making one of those because everyone got side-tracked by the potency of the chiral amides".
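No such oracle exists, but the crude first step toward one is easy to imagine. Just as an illustration (this sketch assumes RDKit and pandas, and a made-up file of SMILES strings and potency values - it isn't any particular vendor's tool), here's the kind of scaffold-by-scaffold bookkeeping a smarter pattern-recognition layer would have to sit on top of:

```python
# Sketch only: group a hypothetical SAR table by scaffold and look for simple patterns.
# Assumes RDKit and pandas; the file and column names are made up for illustration.
import pandas as pd
from rdkit import Chem
from rdkit.Chem import Descriptors
from rdkit.Chem.Scaffolds import MurckoScaffold

df = pd.read_csv("project_sar.csv")            # hypothetical: columns "smiles", "pIC50"

df["mol"] = df["smiles"].apply(Chem.MolFromSmiles)
df = df[df["mol"].notna()].copy()              # drop anything that won't parse
df["scaffold"] = df["mol"].apply(lambda m: MurckoScaffold.MurckoScaffoldSmiles(mol=m))
df["clogp"] = df["mol"].apply(Descriptors.MolLogP)

# Which scaffolds look best, and does potency just track lipophilicity within them?
summary = (df.groupby("scaffold")
             .agg(n=("pIC50", "size"),
                  mean_pIC50=("pIC50", "mean"),
                  potency_vs_clogp=("pIC50",
                                    lambda x: x.corr(df.loc[x.index, "clogp"])))
             .sort_values("mean_pIC50", ascending=False))
print(summary.head(10))
```

That's a very long way from the insight described above, of course - but it's the sort of raw material any such program would have to start from.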

Software that chews through stacks of PK and metabolic stability data would be worth having, too, because there sure is a lot of it. There are correlations in there that we really need to know about, that could have direct relevance to clinical trials, but I worry that we're still missing some of them. And clinical trial data itself is the most obvious place for software that can dig through huge piles of numbers, because those are the biggest we've got. From my perspective, though, it's almost too late for insights at that point; you've already been spending the big money just to get the numbers themselves. But insights into human toxicology from all that clinical data, that stuff could be gold. I worry that it's been like the concentration of gold in seawater, though: really there, but not practical to extract. Could we change that?

All this makes me actually a bit hopeful about experiments like this one that I described here recently. Our ignorance about medicine and human biochemistry is truly spectacular, and we need all the help we can get in understanding it. There have to be a lot of important things out there that we just don't understand, or haven't even realized the existence of. That lack of knowledge is what gives me hope, actually. If we'd already learned what there is to know about discovering drugs, and were already doing the best job that could be done, well, we'd be in a hell of a fix, wouldn't we? But we don't know much, we're not doing it as well as we could, and that provides us with a possible way out of the fix we're in.

So I want to see as much progress as possible in the current pattern-recognition and data-correlation driven artificial intelligence field. We discovery scientists are not going to automate ourselves out of business so quickly as factory workers, because our work is still so hypothesis-driven and hard to define. (For a dissenting view, with relevance to this whole discussion, see here). It's the expense of applying the scientific method to human health that's squeezing us all, instead, and if there's some help available in that department, then let's have it as soon as possible.

Comments (32) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History | In Silico | Pharmacokinetics | Toxicology

March 28, 2012

Winning Ugly and Failing Gracefully

Email This Entry

Posted by Derek

A recent discussion with colleagues turned on the question: "Would you rather succeed ugly or fail gracefully?" In drug discovery terms, that could be rephrased as "Would you rather get a compound through the clinic after wrestling with a marginal structure, worrying about tox, having to fix the formulation three times, and so on, or would you rather work on something that everyone agrees is a solid target, with good chemical matter, SAR that makes sense, leading to a potent, selective, clean compound that dies anyway in Phase II?"

I vote for option number one, if those are my choices. But here's the question at the heart of a lot of the debates about preclinical criteria: do more programs like that die, or do more programs like option number two die? I tend to think that way back early in the process, when you're still picking leads, that you're better off with non-ugly chemical matter. We're only going to make it bigger and greasier, so start with as pretty a molecule as you can. But as things go on, and as you get closer to the clinic, you have to face up to the fact that no matter how you got there, no one really knows what's going to happen once you're in humans. You don't really know if your mechanism is correct (Phase II), and you sure don't know if you're going to see some sort of funny tox or long-term effect (Phase III). The chances of those are still higher if your compound is exceptionally greasy, so I think that everyone can agree that (other things being equal) you're better off with a lower logP. But what else can you trust? Not much.

The important thing is getting into the clinic, because that's where all the big questions are answered. And it's also where the big money is spent, so you have to be careful, on the other side of the equation, and not just shove all kinds of things into humans. You're going to run out of time and cash, most likely, before something works. But if you kill everything off before it gets that far, you're going to run out of both of those, too, for sure. You're going to have to take some shots at some point, and those will probably be with compounds that are less than ideal. A drug is a biologically active chemical compound that has things wrong with it.

There's another component to that "fail gracefully" idea, though, and it's a less honorable one. In a large organization, it can be to a person's advantage to make sure that everything's being done in the approved way, even if that leads off the cliff eventually. At least that way you can't be blamed, right? So you might not think that an inhibitor of Target X is such a great idea, but the committee that proposes new targets does, so you keep your head down. And you may wonder about the way the SAR is being prosecuted, but the official criteria say that you have to have at least so much potency and at least so much selectivity, so you do what you have to to make the cutoffs. And on it goes. In the end, you deliver a putative clinical candidate that may not have much of a chance at all, but that's not your department, because all the boxes got checked. More to the point, all the boxes were widely seen to be checked. So if it fails, well, it's just one of those things. Everyone did everything right, everyone met the departmental goals: what else can you do?

This gets back to the post the other day on unlikely-looking drug structures. There are a lot of them; I'll put together a gallery soon. But I think it's important to look these things over, and to realize that every one of them is out there on the market. They're on the pharmacy shelves because someone had the nerve to take them into the clinic, because someone was willing to win with an ugly compound. Looking at them, I realize that I would have crossed off billions of dollars just because I didn't feel comfortable with these structures, which makes me wonder if I haven't been overvaluing my opinion in these matters. You can't get a drug on the market without offending someone, and it may be you.

Comments (36) + TrackBacks (0) | Category: Drug Development | Life in the Drug Labs

March 27, 2012

Virtual Biotech, Like It or Not

Email This Entry

Posted by Derek

We've all been hearing for a while about "virtual biotechs". The term usually refers to a company with only a handful of employees and no real laboratory space of its own. All the work is contracted out. That means that what's left back at the tiny headquarters (which in a couple of cases is as small as one person's spare bedroom) is the IP. What else could it be? There's hardly any physical property at all. It's as pure a split as you can get between intellectual property (ideas, skills, actual patents) and everything else. Here's a 2010 look at the field in San Diego, and here's a more recent look from Xconomy. (I last wrote about the topic here).

Obviously, this gets easier to do earlier in the whole drug development process, where less money is involved. That said, there are difficulties at both ends. A large number of these stories seem to involve people who were at a larger company when it ran out of money, but still had some projects worth looking at. The rest of the cases seem to come out of academia. In other words, the ideas themselves (the key part of the whole business) were generated somewhere with more infrastructure and funding. Trying to get one of these off the ground otherwise would be a real bootstrapping problem.

And at the other end of the process, getting something all the way through the clinic like this also seems unlikely. The usual end point is licensing out to someone with more resources, as this piece from Xconomy makes clear:

In the meantime, one biotech model gaining traction is the single asset, infrastructure-lite, development model, which deploys modest amounts of capital to develop a single compound to an early clinical data package which can be partnered with pharma. The asset resides within an LLC, and following the license transaction, the LLC is wound down and distributes the upfront, milestone and royalty payments to the LLC members on a pro rata basis. The key to success in this model is choosing the appropriate asset/indication – one where it is possible to get to a clinical data package on limited capital. This approach excludes many molecules and indications often favored by biotech, and tends to drive towards clinical studies using biomarkers – directly in line with one of pharma’s favored strategies.

This is a much different model, of course, than the "We're going to have an IPO and become our own drug company!" one. But the chances of that happening have been dwindling over the years, and the current funding environment makes it harder than ever, Verastem aside. It's even a rough environment to get acquired in. So licensing is the more common path, and (as this FierceBiotech story says), that's bound to have an effect on the composition of the industry. People aren't holding on to assets for as long as they used to, and they're trying to get by with as little of their own money as they can. Will we end up with a "field of fireflies" model, with dozens, hundreds of tiny companies flickering on and off? What will the business look like after another ten years of this - better, or worse?

Comments (26) + TrackBacks (0) | Category: Business and Markets | Chemical News | Drug Development | Drug Industry History

March 26, 2012

What's the Ugliest Drug? Or The Ugliest Drug Candidate?

Email This Entry

Posted by Derek

I was having one of those "drug-like properties" discussions with colleagues the other day. Admittedly, if you're not in drug discovery yourself, you probably don't have that one very often, but even for us, you'd think that a lot of the issues would be pretty settled by now. Not so.

While everyone broadly agrees that compounds shouldn't be too large or too greasy, where one draws the line is always up for debate. And the arguments get especially fraught in the earlier stages of a project, when you're still deciding on what chemical series to work on. One point of view (the one I subscribe to) says that almost every time, the medicinal chemistry process is going to make your compound larger and greasier, so you'd better start on the smaller and leaner side to give everyone room to work in. But sometimes, Potency Rules, at least for some people and in some organizations, and there's a lead which might be stretching some definitions but is just too active to ignore. (That way, in my experience, lies heartbreak, but there are people who've made successes out of it).
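For what it's worth, the "start smaller and leaner" argument is easy to put into practice at triage time. Here's a bare-bones sketch (RDKit assumed; the cutoffs are placeholders for illustration, not anyone's official criteria) of the sort of property check I have in mind:

```python
# Sketch: flag hits that leave little headroom for the usual med-chem "bloat".
# RDKit assumed; the thresholds are illustrative placeholders, not official cutoffs.
from rdkit import Chem
from rdkit.Chem import Descriptors

MAX_MW = 400      # placeholder: leave room under the usual ~500 Da guideline
MAX_CLOGP = 3.0   # placeholder: optimization almost always adds lipophilicity

def triage(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return "unparseable SMILES"
    mw = Descriptors.MolWt(mol)
    clogp = Descriptors.MolLogP(mol)
    flags = []
    if mw > MAX_MW:
        flags.append(f"MW {mw:.0f}")
    if clogp > MAX_CLOGP:
        flags.append(f"cLogP {clogp:.1f}")
    return "looks lean" if not flags else "watch: " + ", ".join(flags)

# Arbitrary example structures, just to show the output:
for smi in ["CC(=O)Oc1ccccc1C(=O)O",        # aspirin
            "CCCCCCCCCCCCCCCCc1ccccc1"]:    # a long greasy chain on benzene
    print(smi, "->", triage(smi))
```

And of course, every organization argues over exactly where those numbers should sit - which is rather the point of this post.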

We've argued these same questions here before, more than once. What I'm wondering today is, what's the least drug-like drug that's made it? It's dangerous to ask that question, in a way, because it gives some people what they see as a free pass to pursue ugly chemical matter - after all, Drug Z made it, so why not this one? (That, to my mind, ignores the ars longa, vita brevis aspect: since there's an extra one-in-a-thousand factor with some compounds, given the long odds already, why would you make them even longer?)

But I think it's still worth asking the question, if we can think of what extenuating circumstances made some of these drugs successful. "Sure, your molecular weight isn't as high as Drug Z, which is on the market, but do you have Drug Z's active transport/distribution profile/PK numbers in mice? If not, just why do you think you're going to be so lucky?"

Antibiotics are surely going to make up some of the top ten candidates - some of those structures are just bizarre. There's a fairly recent oncology drug that I think deserves a mention for its structure, too. Anyone have a weirder example of a marketed drug?

What's still making its way through the clinic can be even stranger-looking. Some of the odder candidates I've seen recently have been for the hepatitis C proteins NS5A and NS5B. Bristol-Myers Squibb has disclosed some eye-openers, such as BMS-790052. (To be fair, that target seems to really like chemical matter like this, and the compound, last I heard, was moving along through the clinic.)

And yesterday, as Carmen Drahl reported from the ACS meeting in San Diego, the company disclosed the structure of BMS-791325, a compound targeting NS5B. That's a pretty big one, too - the series it came from started out reasonably, then became not particularly small, and now seems to have really bulked up, and for the usual reasons - potency and selectivity. But overall, it's a clear example of the sort of "compound bloat" that overtakes projects as they move on.

So, nominations are open for three categories: Ugliest Marketed Drug, Ugliest Current Clinical Candidate, and Ugliest Failed Clinical Candidate. Let's see how bad it gets!

Comments (58) + TrackBacks (0) | Category: Drug Development | Drug Industry History

March 19, 2012

Dealing with the Data

Email This Entry

Posted by Derek

So how do we deal with the piles of data? A reader sent along this question, and it's worth thinking about. Drug research - even the preclinical kind - generates an awful lot of information. The other day, it was pointed out that one of our projects, if you expanded everything out, would be displayed on a spreadsheet with compounds running down the left, and over two hundred columns stretching across the page. Not all of those are populated for every compound, by any means, especially the newer ones. But compounds that stay in the screening collection tend to accumulate a lot of data with time, and there are hundreds of thousands (or millions) of compounds in a good-sized screening collection. How do we keep track of it all?

Most larger companies have some sort of proprietary software for the job (or jobs). The idea is that you can enter a structure (or substructure) of a compound and find out the project it was made for, every assay that's been run on it, all its spectral data and physical properties (experimental and calculated), every batch that's been made or bought (and from whom and from where, with notebook and catalog references), and the bar code of every vial or bottle of it that's running around the labs. You obviously don't want all of those every time, so you need to be able to define your queries over a wide range, setting a few common ones as defaults and customizing them for individual projects while they're running.
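The structure-search part of that, at least, is no longer exotic. As a rough illustration (RDKit and pandas assumed; the registry file and its columns are hypothetical stand-ins for a real corporate database), the core of such a query is just a substructure match joined to whatever assay and inventory tables you keep:

```python
# Sketch: the substructure-search half of a registry query.
# RDKit and pandas assumed; "registry.csv" and its columns are hypothetical
# stand-ins for a real corporate compound database.
import pandas as pd
from rdkit import Chem

registry = pd.read_csv("registry.csv")   # hypothetical: "compound_id", "smiles", "project"

query = Chem.MolFromSmarts("c1ccc2[nH]ccc2c1")   # example query: an indole core

def has_core(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return bool(mol is not None and mol.HasSubstructMatch(query))

hits = registry[registry["smiles"].apply(has_core)]
# In a real system you'd now join hits["compound_id"] against the assay results,
# batch records, and vial barcodes; here we just list what matched.
print(hits[["compound_id", "project"]])
```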

Displaying all this data isn't trivial, either. The good old fashioned spreadsheet is perfectly useful, but you're going to need the ability to plot and chart in all sorts of ways to actually see what's going on in a big project. How does human microsomal stability relate to the logP of the right-hand side chain in the pyrimidinyl-series compounds with molecular weight under 425? And how do those numbers compare to the dog microsomes? And how do either of those compare to the blood levels in the whole animal, keeping in mind that you've been using two different dosing vehicles along the way? To visualize these kinds of questions - perfectly reasonable ones, let me tell you - you'll need all the help you can get.
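To make one of those questions concrete, here's a minimal pandas/matplotlib sketch (the file and every column name are hypothetical) of the microsomal-stability-versus-side-chain-logP slice, human and dog side by side, restricted to the series and molecular weight cutoff mentioned above:

```python
# Sketch: one concrete slice of the kind of question posed above.
# pandas and matplotlib assumed; the file and every column name are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("project_data.csv")   # hypothetical export from the project database

subset = data[(data["series"] == "pyrimidinyl") & (data["mol_weight"] < 425)]

fig, axes = plt.subplots(1, 2, figsize=(9, 4), sharex=True, sharey=True)
for ax, col, title in [(axes[0], "human_mic_t_half", "Human microsomes"),
                       (axes[1], "dog_mic_t_half", "Dog microsomes")]:
    ax.scatter(subset["sidechain_logp"], subset[col])
    ax.set_xlabel("Right-hand side chain logP")
    ax.set_ylabel("Microsomal t1/2 (min)")
    ax.set_title(title)
fig.tight_layout()
plt.show()
```

Multiply that by two hundred columns and a few dosing vehicles, and you can see why the plotting and charting tools matter as much as the database itself.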

You run into the problem of any large, multifunctional program, though: if it can do everything, it may not do any one thing very well. Or there may be a way to do whatever you want, if only you can memorize the magic spell that will make it happen. If it's one of those programs that you have to use constantly or run the risk of totally forgetting how it goes, there will be trouble.

So what's been the experience out there? In-house home-built software? Adaptations of commercial packages? How does a smaller company afford to do what it needs to do? Comments welcome. . .

Comments (66) + TrackBacks (0) | Category: Drug Assays | Drug Development | Life in the Drug Labs

March 16, 2012

Merck's CALIBR Venture

Email This Entry

Posted by Derek

So the news is that Merck is now going to start its own nonprofit drug research institute in San Diego: CALIBR, the California Institute for Biomedical Research. It'll be run by Peter Schultz of Scripps, and they're planning to hire about 150 scientists (which is good news, anyway, since the biomedical employment picture out in the San Diego area has been grim).

Unlike the Centers for Therapeutic Innovation that Pfizer, a pharmaceutical company based in New York, has established in collaboration with specific academic medical centres around the country, Calibr will not be associated with any particular institution. (Schultz, however, will remain at Scripps.) Instead, academics from around the world can submit research proposals, which will then be reviewed by a scientific advisory board, says Kim. The institute itself will be overseen by a board of directors that includes venture capitalists. Calibr will not have a specific therapeutic focus.

Merck, meanwhile, will have the option of an exclusive licence on any proteins or small-molecule therapeutics to emerge. . .

They're putting up $90 million over the next 7 years, which isn't a huge amount. It's not clear if they have any other sources of funding - they say that they'll "access" such, but I have to wonder, since that would presumably complicate the IP for Merck. It's also not clear what they'll be working on out there; the press release is, well, a press release. The general thrust is translational research, a roomy category, and they'll be taking proposals from academic labs who would like to use their facilities and expertise.

So is this mainly a way for Merck to do more academic collaborations without the possible complications (for universities) of dealing directly with a drug company? Will it preferentially take on high-risk, high-reward projects? There's too little to go on yet. Worth watching with interest as it gets going - and if any readers find themselves interviewing there, please report back!

Comments (47) + TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | Drug Development

March 15, 2012

Not Quite So Accelerated, Says PhRMA

Email This Entry

Posted by Derek

We've spent a lot of time here talking about provisional approval of drugs, most specifically Avastin (when its approval for metastatic breast cancer was pulled). But the idea isn't to put drugs on the market that have to be taken back; it's to get them out more quickly in case they actually work.

There's legislation (the TREAT Act) that is attempting to extend the range of provisional approvals. But according to this column by Avik Roy, an earlier version of the bill went much further: it would have authorized new approval pathways for the first drugs to treat specific subpopulations of an existing disease, nonresponders to existing therapies, compounds with demonstrable improvements in safety or efficacy, or (in general) compounds that "otherwise satisfy an unmet medical need". As with the existing accelerated approval process, drugs under these categories could (after negotiation with the FDA) be provisionally marketed after Phase II results, if those were convincing enough, with possible revocation after Phase III results came in.

Unlike the various proposals to put compounds on the market after Phase I (which I fear would be an invitation to game the system), this one strikes me as aggressive but sensible. It would, ideally, encourage companies to run more robust Phase II trials in the hopes of going straight to the market, and it would allow really outstanding drugs a chance to start earning back their R&D costs much earlier. As long as everyone understood that Phase III trials are no slam dunk any more (if they ever were), and that some of these drugs would turn out not to be as good as they looked, I think that on balance, everyone would come out ahead.

According to Roy, this version of the bill had (as you'd expect) attracted strong backers and strong opponents. On the "pro" side was BIO, the biotech industry group, which is no surprise. On the "anti" side, the FDA itself wasn't ready for this big a change, which isn't much of a shock, either. (To be fair to them, this would increase their workload substantially - you'd really want to couple a reform like this with more people on their end). And there were advocacy groups that worried that this new regulatory regime would water down drug safety requirements too much. The article doesn't name any groups, but anyone who's observed the industry can fill in some likely names.

But there was another big group opposing the change: PhRMA. Yes, the trade organization for the large drug companies. Opinions vary as to the reason. The official explanations are that they, too, were concerned for patient safety, and they wanted the PDUFA legislation renewed as is, without these extra provisions (a "bird in the hand" argument). But Roy's piece advances a less charitable thesis:

Sen. Hagan’s proposal would have been devastating to the big pharma R&D oligopoly. If small biotech companies could get their drugs tentatively approved after inexpensive phase II studies, they would have far less need to partner those drugs with big pharma. They could keep the upside themselves and attract far more interest from investors. Big pharma, on the other hand, would be without its largest source for innovative new medicines: the small biotech farm team.

I'd like to be able to doubt this reasoning more than I do. . .

Comments (19) + TrackBacks (0) | Category: Drug Development | Regulatory Affairs

March 14, 2012

The Blackian Demon of Drug Discovery

Email This Entry

Posted by Derek

There's an on-line appendix to that Nature Reviews Drug Discovery article that I've been writing about, and I don't think that many people have read it yet. Jack Scannell, one of the authors, sent along a note about it, and he's interested to see what the readership here makes of it.

It gets to the point that came up in the comments to this post, about the order that you do your screening assays in (see #55 and #56). Do you run everything through a binding assay first, or do you run things through a phenotypic assay first and then try to figure out how they bind? More generally, with either sort of assay, is it better to do a large random screen first off, or is it better to do iterative rounds of SAR from a smaller data set? (I'm distinguishing those two because phenotypic assays provide very different sorts of data density than do focused binding assays).

Statistically, there's actually a pretty big difference there. I'll quote from the appendix:

Imagine that you know all of the 600,000 or so words in the English language and that you are asked to guess an English word written in a sealed envelope. You are offered two search strategies. The first is the familiar ‘20 questions’ game. You can ask a series of questions. You are provided with a "yes" or "no" answer to each, and you win if you guess the word in the envelope having asked 20 questions or fewer. The second strategy is a brute force method. You get 20,000 guesses, but you only get a "yes" or "no" once you have made all 20,000 guesses. So which is more likely to succeed, 20 questions or 20,000 guesses?

A skilled player should usually succeed with 20 questions (since 600,000 is less than 2^20) but would fail nearly 97% of the time with "only" 20,000 guesses.

Our view is that the old iterative method of drug discovery was more like 20 questions, while HTS of a static compound library is more like 20,000 guesses. With the iterative approach, the characteristics of each molecule could be measured on several dimensions (for example, potency, toxicity, ADME). This led to multidimensional structure–activity relationships, which in turn meant that each new generation of candidates tended to be better than the previous generation. In conventional HTS, on the other hand, search is focused on a small and pre-defined part of chemical space, with potency alone as the dominant factor for molecular selection.
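
The arithmetic behind those two numbers is worth making explicit, since it's the whole point:

```python
# 20 perfect yes/no questions can distinguish 2**20 = 1,048,576 possibilities,
# which covers the ~600,000 English words. 20,000 blind guesses against the
# same 600,000 words succeed only ~3% of the time.
words = 600_000
print(2**20 >= words)                     # True - binary search wins
p_fail = 1 - 20_000 / words
print(f"brute force fails {p_fail:.1%} of the time")   # ~96.7%
```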

Aha, you say, but the game of twenty questions is equivalent to running perfect experiments each time: "Is the word a noun? Does it have more than five letters?" and so on. Each question carves up the 600,000 word set flawlessly and iteratively, and you never have to backtrack. Good experimental design aspires to that, but it's a hard standard to reach. Too often, we get answers that would correspond to "Well, it can be used like a noun on Tuesdays, but if it's more than five letters, then that switches to Wednesday, unless it starts with a vowel".

The authors try to address this multi-dimensionality with a thought experiment. Imagine chemical SAR space - huge number of points, large number of parameters needed to describe each point.

Imagine we have two search strategies to find the single best molecule in this space. One is a brute force search, which assays a molecule and then simply steps to the next molecule, and so exhaustively searches the entire space. We call this "super-HTS". The other, which we call the “Blackian demon” (in reference to the “Darwinian demon”, which is used sometimes to reflect ideal performance in evolutionary thought experiments, and in tribute to James Black, often acknowledged as one of the most successful drug discoverers), is equivalent to an omniscient drug designer who can assay a molecule, and then make a single chemical modification to step it one position through chemical space, and who can then assay the new molecule, modify it again, and so on. The Blackian demon can make only one step at a time, to a nearest neighbour molecule, but it always steps in the right direction; towards the best molecule in the space. . .

The number of steps for the Blackian demon follows from simple geometry. If you have a d dimensional space with n nodes in the space, and – for simplicity – these are arranged in a neat line, square, cube, or hypercube, you can traverse the entire space, from corner to corner with d x (n^(1/d)-1) steps. This is because each vertex is n nodes in length, and there are d vertices. . .When the search space is high dimensional (as is chemical space) and there is a very large number of nodes (as is the case for drug-like molecules), the Blackian demon is many orders of magnitude more efficient than super-HTS. For example, in a 10 dimensional space with 10^40 molecules, the Blackian demon can search the entire space in 10^5 steps (or less), while the brute force method requires 10^40 steps.

These are idealized cases, needless to say. One problem is that none of us are exactly Blackian demons - what if you don't always make the right step to the next molecule? What if your iteration only gives one out of ten molecules that get better, or one out of a hundred? I'd be interested to see how that affects the mathematical argument.
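
Here's a crude way to play with both of those questions - the corner-to-corner step count from the appendix, plus a penalty for a demon that's only right some fraction of the time. The 1/p scaling for an imperfect demon is my own back-of-the-envelope assumption (it ignores the cost of actually wandering off course), not anything from the authors, but it gives a feel for the scale.

```python
# The appendix's formula: d dimensions, n nodes laid out on a hypercubic grid,
# so each edge holds n**(1/d) nodes and a corner-to-corner walk needs at most
# d * (n**(1/d) - 1) steps.
def demon_steps(n_nodes, dims):
    return dims * (n_nodes ** (1.0 / dims) - 1)

n, d = 1e40, 10
print(f"brute-force search: {n:.0e} assays")
print(f"perfect demon:      {demon_steps(n, d):.0e} steps")   # ~1e5, as quoted

# Crude extension (mine, not the authors'): if only a fraction p of proposed
# modifications are actually steps in the right direction, and you keep trying
# until one works, the expected cost scales roughly as 1/p.
for p in (1.0, 0.1, 0.01):
    print(f"demon right {p:4.0%} of the time: ~{demon_steps(n, d) / p:.0e} assays")
```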

And there's another conceptual problem: across much of chemical space, the activity landscape is even sparser than that. One assumption with this thought experiment (correct me if I'm wrong) is that there actually is a better node to move to each time. But for any drug target, there are huge regions of flat, dead, inactive, un-assayable chemical space. If you started off in one of those, you could iterate until your hair fell out and never get out of the hole. And that leads to another objection to the ground rules of this exercise: no one tries to optimize by random HTS. It's only used to get starting points for medicinal chemists to work on, to make sure that they're not starting in one of those "dead zones". Thoughts?

Comments (45) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

March 12, 2012

The Brute Force Bias

Email This Entry

Posted by Derek

I wanted to return to that Nature Reviews Drug Discovery article I blogged about the other day. There's one reason the authors advance for our problems that I thought was particularly well stated: what they call the "basic research/brute force" bias.

The ‘basic research–brute force’ bias is the tendency to overestimate the ability of advances in basic research (particularly in molecular biology) and brute force screening methods (embodied in the first few steps of the standard discovery and preclinical research process) to increase the probability that a molecule will be safe and effective in clinical trials. We suspect that this has been the intellectual basis for a move away from older and perhaps more productive methods for identifying drug candidates. . .

I think that this is definitely a problem, and it's a habit of thinking that almost everyone in the drug research business has, to some extent. The evidence that there's something lacking has been piling up. As the authors say, given all the advances over the past thirty years or so, we really should have seen more of an effect in the signal/noise of clinical trials: we should have had higher success rates in Phase II and Phase III as we understood more about what was going on. But that hasn't happened.

So how can some parts of a process improve dramatically, yet important measures of overall performance remain flat or decline? There are several possible explanations, but it seems reasonable to wonder whether companies industrialized the wrong set of activities. At first sight, R&D was more efficient several decades ago, when many research activities that are today regarded as critical (for example, the derivation of genomics-based drug targets and HTS) had not been invented, and when other activities (for example, clinical science, animal-based screens and iterative medicinal chemistry) dominated.

This gets us back to a topic that's come up around here several times: whether the entire target-based molecular-biology-driven style of drug discovery (which has been the norm since roughly the early 1980s) has been a dead end. Personally, I tend to think of it in terms of hubris and nemesis. We convinced ourselves that we were smarter than we really were.

The NRDD piece has several reasons for this development, which also ring true. Even in the 1980s, there were fears that the pace of drug discovery was slowing, and a new approach was welcome. A second reason is a really huge one: biology itself has been on a reductionist binge for a long time now. And why not? The entire idea of molecular biology has been incredibly fruitful. But we may be asking more of it than it can deliver.

. . .the ‘basic research–brute force’ bias matched the scientific zeitgeist, particularly as the older approaches for early-stage drug R&D seemed to be yielding less. What might be called 'molecular reductionism' has become the dominant stream in biology in general, and not just in the drug industry. "Since the 1970s, nearly all avenues of biomedical research have led to the gene". Genetics and molecular biology are seen as providing the 'best' and most fundamental ways of understanding biological systems, and subsequently intervening in them. The intellectual challenges of reductionism and its necessary synthesis (the '-omics') appear to be more attractive to many biomedical scientists than the messy empiricism of the older approaches.

And a final reason for this mode of research taking over - and it's another big one - is that it matched the worldview of many managers and investors. This all looked like putting R&D on a more scientific, more industrial, and more manageable footing. Why wouldn't managers be attracted to something that looked like it valued their skills? And why wouldn't investors be attracted to something that looked as if it could deliver more predictable success and more consistent earnings? R&D will give you gray hairs; anything that looks like taming it will find an audience.

And that's how we find ourselves here:

. . .much of the pharmaceutical industry's R&D is now based on the idea that high-affinity binding to a single biological target linked to a disease will lead to medical benefit in humans. However, if the causal link between single targets and disease states is weaker than commonly thought, or if drugs rarely act on a single target, one can understand why the molecules that have been delivered by this research strategy into clinical development may not necessarily be more likely to succeed than those in earlier periods.

That first sentence is a bit terrifying. You read it, and part of you thinks "Well, yeah, of course", because that is such a fundamental assumption of almost all our work. But what if it's wrong? Or just not right enough?

Comments (64) + TrackBacks (0) | Category: Drug Development | Drug Industry History

March 6, 2012

Drug Discovery for Physicists

Email This Entry

Posted by Derek

There's a good post over at the Curious Wavefunction on the differences between drug discovery and the more rigorous sciences. I particularly liked this line:

The goal of many physicists was, and still is, to find three laws that account for at least 99% of the universe. But the situation in drug discovery is more akin to the situation in finance described by the physicist-turned-financial modeler Emanuel Derman; we drug hunters would consider ourselves lucky to find 99 laws that describe 3% of the drug discovery universe.

That's one of the things that you get used to in this field, but when you step back, it's remarkable: so much of what we do remains relentlessly empirical. I don't just mean finding a hit in a screening assay. It goes all the way through the process, and the further you go, the more empirical it gets. Cell assays surprise you compared to enzyme preps, and animals are a totally different thing than cells. Human clinical trials are the ultimate in empirical data-gathering: there's no other way to see if a drug is truly safe (or effective) in humans other than giving it to a whole big group of humans. We do all sorts of assays to avoid getting to that stage, or to feel more confident when we're about to make it there, but there's no substitute for actually doing it.

There's a large point about reductionism to be made, too:

Part of the reason drug discovery can be challenging to physicists is because they are steeped in a culture of reductionism. Reductionism is the great legacy of twentieth-century physics, but while it worked spectacularly well for particle physics it doesn't quite work for drug design. A physicist may see the human body or even a protein-drug system as a complex machine whose workings we can completely understand once we break it down into its constituent parts. But the chemical and biological systems that drug discoverers deal with are classic examples of emergent phenomena. A network of proteins displays properties that are not obvious from the behavior of the individual proteins. . .Reductionism certainly doesn't work in drug discovery in practice since the systems are so horrendously complicated, but it may not even work in principle.

And there we have one of the big underlying issues that needs to be faced by the hardware engineers, software programmers, and others who come in asking why we can't be as productive as they are. There's not a lot of algorithmic compressibility in this business. Whether they know it or not, many other scientists and engineers are living in worlds where they're used to it being there when they need it. But you won't find much here.

Comments (22) + TrackBacks (0) | Category: Drug Assays | Drug Development

February 22, 2012

Scaling Up a Strange Dinitro Compound (And Others)

Email This Entry

Posted by Derek

I wrote here about a very unusual dinitro compound that's in the clinic in oncology. Now there's a synthetic chemistry follow-up, in the form of a paper in Organic Process R&D.
dinitro.png

It's safe to say that most process and scale-up chemists are never going to have to worry about making a gem-dinitroazetidine - or, for that matter, a gem-dinitroanything. But the issues involved are the same ones that come up over and over again. See if this rings any bells:

Gram quantities of (3) for initial anticancer screening were originally prepared by an unoptimized approach that was not suitable for scale-up and failed to address specific hazards of the reaction intermediates and coproducts. The success of (3) in preclinical studies prompted the need for a safe, reliable, and scalable synthesis to provide larger supplies of the active pharmaceutical ingredient (API) for further investigation and eventual clinical trials.

Yep, it's when you need large, reliable batches of something that the inadequacies of your chemistry really stand out. The kinds of chemistry that people like me do, back in the discovery labs, often have to be junked. It's fine for making 100 mg of something to put in the archives - and tell me, when was the last time you put as much as 100 milligrams of a new compound into the archives? But there are usually plenty of weak points as you try to go to gram, then hundreds of grams, then kilos and up. Among them are:

(1) Exothermic chemistry. Excess heat is easy to shed from a 25-mL round-bottom flask. Heat is not so easily lost from larger vessels, though, and the number of chemists who have had to discover this the hard way is beyond counting. The world is very different when everything in the flask is no longer just 1 cm away from a cold glass wall. (There's a quick back-of-the-envelope illustration of this after the list.)

(2) Stirring. This can be a pain even on the small scale, so imagine what a headache it is by the kilo. Gooey precipitates, thick milkshake-like reactions, lumps of crud - what's inconvenient when small can turn into a disaster later on, because poor stirring leads to localized heating (see above), incomplete reactions, side products, and more.

(3) Purification. Just run it down a column? Not so fast, chief. Where, exactly, do you find the columns to run kilos of material across? And the pumps to force the stuff through? And the wherewithal to dispose of all that solid-phase stuff once you've turned it all those colors and it can't be used again? And the time and money to evaporate all that solvent that you're using? No, the scale-up people will go a long way to avoid chromatography. Precipitations and crystallizations are the way to go, if at all possible.

(4) Reproducibility. All of these factors influence this part. One of the most important things about a good chemical process is that it works the same flippin' way every single time. As has been said before around here, a route that generates 97% yield most of the time, but with an occasional mysterious 20% flop, is useless. Worse than useless. Squeezing the mystery out of the synthesis is the whole point of process chemistry: you want to know what the side products are, why they form, and how to control every variable.

(5) Oh yeah: cost. Cost-of-goods is rarely a deal-breaker in drug research, but that's partly because people are paying attention to it. In the med-chem labs, we think nothing of using exotic reagents that the single commercial supplier marks up to the sky. That will not fly on scale. Cutting out three steps with a reagent that isn't obtainable in quantity doesn't help the scale-up people one bit. (The good news is that some of these things turn out to be available when someone really wants them - the free market in action).
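
To put some rough numbers on the heat problem in item (1): cooling happens through the vessel walls, and the wall area available per unit of reaction mixture shrinks as the vessel grows. Treating everything as a sphere is obviously a cartoon, but the trend is the point.

```python
# Surface-area-to-volume ratio for (roughly spherical) vessels. For a sphere
# A/V = 3/r, so every 1000-fold increase in volume costs you a factor of ten
# in cooling surface per unit of reaction mixture.
from math import pi

def wall_area_per_volume(volume_ml):
    r = (3 * volume_ml / (4 * pi)) ** (1.0 / 3)   # radius in cm (1 mL = 1 cm^3)
    return 3.0 / r                                 # cm^2 of wall per cm^3 of mix

for v in (25, 25_000, 25_000_000):                 # 25 mL flask, 25 L, 25,000 L
    print(f"{v:>12,} mL : {wall_area_per_volume(v):.3f} cm^2 per mL")
```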

There are other factors, but those are some of the main ones. It's a different world, and it involves thinking about things that a discovery chemist just never thinks about. (Does your product tend to create a fine dust on handling? The sort that might fill a room and explode with static electricity sparks? Can your reaction mixture be pumped through a pipe as a slurry, or not? And so on.) It looks as if the dinitro compound has made it through this gauntlet successfully, but every day, there's someone at some drug company worrying about the next candidate.

Comments (19) + TrackBacks (0) | Category: Drug Development | Life in the Drug Labs

February 10, 2012

The Terrifying Cost of a New Drug

Email This Entry

Posted by Derek

Matthew Herper at Forbes has a very interesting column, building on some data from Bernard Munos (whose work on drug development will be familiar to readers of this blog). What he and his colleague Scott DeCarlo have done is conceptually simple: they've gone back over the last 15 years of financial statements from a bunch of major drug companies, and they've looked at how many drugs each company has gotten approved.

Over that long a span, things should even out a bit. There will be some spending that won't show up in the count - money spent on drugs that got approved during the earlier part of that span - but (on the back end) there's also spending in there on drugs that haven't made it to market yet. What do the numbers look like? Hideous. Appalling. Unsustainable.

AstraZeneca, for example, got 5 drugs on the market during this time span, the worst performance on this list, and thus spent nearly $12 billion per drug. No wonder they're in the shape they're in. GSK, Sanofi, Roche, and Pfizer all spent in the range of $8 billion per approved drug. Amgen did things the cheapest by this measure: 9 drugs approved, at about $3.7 billion per drug.

Now, there are several things to keep in mind about these numbers. First - and I know that I'm going to hear about this from some people - you might assume that different companies are putting different things under the banner of R&D for accounting purposes. But there's a limit to how much of that you can do. Remember, there's a separate sales and marketing budget, too, of course, and people never get tired of pointing out that it's even larger than the R&D one. So how inflated can these figures be? Second, how can these numbers jibe with the $800 million-per-new-drug figure (recently revised to $1 billion), much less with the $43 million per new drug figure (from Light and Warburton) that was making the rounds a few months ago?

Well, I tried to dispose of that last figure at the time. It's nonsense, and if it were true, people would be lining up to start drug companies (and other people would be throwing money at them to help). Meanwhile, the drug companies that already exist wouldn't be frantically firing thousands of people and selling their lab equipment at auction. Which they are. But what about that other estimate, the Tufts/diMasi one? What's the difference?

As Herper rightly says, the biggest factor is failure. The Tufts estimate is for the costs racked up by one drug making it through. But looking at the whole R&D spend, you can see how money is being spent for all the stuff that doesn't get through. And as I and many of the other readers of this blog can testify, there's an awful lot of it. I'm now in my 23rd year of working in this industry, and nothing I've touched has ever made it to market yet. If someone wins $500 from a dollar slot machine, the proper way to figure the costs is to see how many dollars, total, they had to pump into the thing before they won - not just to figure that they spent $1 to win. (Unless, of course, they just sat down, and in this business we don't exactly have that option).
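
The calculation behind those headline figures really is that blunt, by the way - here it is in miniature, using rounded versions of the AstraZeneca numbers quoted above, purely for illustration:

```python
# Total R&D spend over the window divided by drugs approved in the same
# window - failures, dead ends, and all. Figures are rounded approximations
# of the AstraZeneca numbers mentioned above.
total_rd_spend = 59e9     # ~$59 billion over 15 years (approximate)
approvals = 5

print(f"~${total_rd_spend / approvals / 1e9:.0f} billion per approved drug")  # ~$12B
```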

No, these figures really show you why the drug business is in the shape it's in. Look at those numbers, and look at how much a successful drug brings in, and you can see that these things don't always do a very good job of adding up. That's with the expenses doing nothing but rising, and the success rate for drug discovery going in the other direction, too. No one should be surprised that drug prices are rising under these conditions. The surprise is that there are still people out there trying to discover drugs.

Comments (62) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices

January 26, 2012

Putting a Number on Chemical Beauty

Email This Entry

Posted by Derek

There's a new paper out in Nature Chemistry called "Quantifying the Chemical Beauty of Drugs". The authors are proposing a new "desirability score" for chemical structures in drug discovery, one that's an amalgam of physical and structural scores. To their credit, they didn't decide up front which of these things should be the most important. Rather, they took eight properties over 770 well-known oral drugs, and set about figuring how much to weight each of them. (This was done, for the info-geeks among the crowd, by calculating the Shannon entropy for each possibility to maximize the information contained in the final model). Interestingly, this approach tended to give zero weight to the number of hydrogen-bond acceptors and to the polar surface area, which suggests that those two measurements are already subsumed in the other factors.
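
For the curious, here's the general shape of such a score - a weighted geometric mean of per-property desirability functions. The bell-curve desirabilities and weights below are placeholders I've made up for illustration, not the paper's fitted parameters, and the real thing uses eight properties rather than the handful shown here.

```python
# Schematic of a QED-style desirability score: map each property onto a 0-1
# desirability, then combine with a weighted geometric mean. All curves and
# weights here are invented placeholders, not the published parametrization.
import math

def desirability(value, ideal, width):
    """Simple bell-shaped desirability: 1.0 at the ideal value, falling off."""
    return math.exp(-((value - ideal) / width) ** 2)

# property: (measured value, ideal, width, weight) - illustrative numbers only
properties = {
    "mol_weight": (320.0, 300.0, 150.0, 0.6),
    "clogp":      (2.1,   2.5,   2.0,   1.0),
    "hb_donors":  (1,     1.0,   2.0,   0.5),
    "rot_bonds":  (4,     3.0,   4.0,   0.5),
    "arom_rings": (2,     2.0,   1.5,   0.3),
}

def qed_like_score(props):
    log_sum, weight_sum = 0.0, 0.0
    for value, ideal, width, weight in props.values():
        d = max(desirability(value, ideal, width), 1e-6)   # avoid log(0)
        log_sum += weight * math.log(d)
        weight_sum += weight
    return math.exp(log_sum / weight_sum)    # weighted geometric mean

print(f"score = {qed_like_score(properties):.2f}")   # somewhere between 0 and 1
```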

And that's all fine, but what does the result give us? Or, more accurately, what does it give us that we haven't had before? After all, there have been a number of such compound-rating schemes proposed before (and the authors, again to their credit, compare their new proposal with the others head-to-head). But I don't see any great advantage. The Lipinski "Rule of 5" is a pretty simple metric - too simple for many tastes - and what this gives you is a Rule of 5 with both categories smeared out towards each other to give some continuous overlap. (See the figure below, which is taken from the paper). That's certainly more in line with the real world, but in that real world, will people be willing to make decisions based on this method, or not?
QED%20paper%20chart%20png.png
The authors go for a bigger splash with the title of the paper, which refers to an experiment they tried. They had chemists across AstraZeneca's organization assess some 17,000 compounds (200 or so for each) with a "Yes/No" answer to "Would you undertake chemistry on this compound if it were a hit?" Only about 30% of the list got a "Yes" vote, and the reasons for rejecting the others were mostly "Too complex", followed closely by "Too simple". (That last one really makes me wonder - doesn't AZ have a big fragment-based drug design effort?) Note also that this sort of experiment has been done before.

Applying their model, the mean score for the "Yes" compounds was 0.67 (s.d. 0.16), and the mean score for the "No" compounds was 0.49 (s.d. 0.23), a difference which they say was statistically significant, although that must have been a close call. Overall, I wouldn't say that this test has an especially strong correlation with medicinal chemists' ideas of structural attractiveness, but then, I'm not so sure of the usefulness of those ideas to start with. I think that the two ends of the scale are hard to argue with, but there's a great mass of compounds in the middle that people decide that they like or don't like, without being able to back up those statements with much data. (I'm as guilty as anyone here).

The last part of the paper tries to extend the model from hit compounds to the targets that they bind to - a druggability assessment. The authors looked through the ChEMBL database, and ranked the various targets by the scores of the ligands that are associated with them. They found that their mean ligand score for all the targets in there is 0.478. For the targets of approved drugs, it's 0.492, and for the orally active ones it's 0.539 - so there seems to be a trend, although whether those differences reach statistical significance isn't stated in the paper.

So overall, I find nothing really wrong with this paper, but nothing spectacularly right with it, either. I'd be interested in hearing other calls on it as it gets out into the community. . .

Comments (22) + TrackBacks (0) | Category: Drug Development | Drug Industry History | In Silico | Life in the Drug Labs

January 18, 2012

Fun With Epigenetics

Email This Entry

Posted by Derek

If you've been looking around the literature over the last couple of years, you'll have seen an awful lot of excitement about epigenetic mechanisms. (Here's a whole book on that very subject, for the hard core). Just do a Google search with "epigenetic" and "drug discovery" in it, any combination you like, and then stand back. Articles, reviews, conferences, vendors, journals, startups - it's all there.

Epigenetics refers to the various paths - and there are a bunch of them - to modify gene expression downstream of just the plain ol' DNA sequence. A lot of these are, as you'd imagine, involved in the way that the DNA itself is wound (and unwound) for expression. So you see enzymes that add and remove various switches on the outside of the histone proteins. You have histone acetyltransferases (HATs) and histone deacetylases (HDACs), methyltransferases and demethylases, and so on. Then there are bromodomains (the binding sites for those acetylated histones) and several other mechanisms, all of which add up to plenty o' drug targets.

Or do they? There are HDAC compounds out there in oncology, to be sure, and oncology is where a lot of these other mechanisms are being looked at most intensively. You've got a good chance of finding aberrant protein expression levels in cancer cells, you have a lot of unmet medical need, a lot of potential different patient populations, and a greater tolerance for side effects. All of that argues for cancer as a proving ground, although it's certainly not the last word. But in any therapeutic area, people are going to have to wrestle with a lot of other issues.

Just looking over the literature can make you both enthusiastic and wary. There's an awful lot of regulatory machinery in this area, and it's for sure that it isn't there for jollies. (You'd imagine that selection pressure would operate pretty ruthlessly at the level of gene expression). And there are, of course, an awful lot of different genes whose expression has to be regulated, at different levels, in different cell types, at different phases of their development, and in response to different environmental signals. We don't understand a whole heck of a lot of the details.

So I think that there will be epigenetic drugs coming out of this burst of effort, but I don't think that they're going to exactly be the most rationally designed things we've ever seen. That's fine - we'll take drug candidates where we can get them. But as for when we're actually going to understand all these gene regulation pathways, well. . .

Comments (15) + TrackBacks (0) | Category: Biological News | Cancer | Drug Development

January 16, 2012

Biogen: A "Decimated" Pipeline?

Email This Entry

Posted by Derek

You don't want coverage like this: "Biogen CEO Tries to Refill Early-Stage Pipeline He Decimated". That would be George Scangos:

. . .Scangos and his research chief eliminated about 17 early-stage drug projects in 2010 and last year to hone the company's focus, leaving it with only about four early-stage compounds. Biogen exited oncology and cardiovascular research and is now targeting drugs to treat neurological and autoimmune conditions. . .

"We didn't want to fund projects that were unlikely to generate value," Scangos said in an interview on the sidelines of the J.P. Morgan health-care conference in San Francisco this week. . .But even if Biogen's late-stage pipeline delivers successful new drugs soon, the company needs more compounds in early-stage testing to sustain long-term growth. So it is licensing drugs from other companies. . .

The article itself (from Peter Loftus, originally in the Wall Street Journal) isn't quite as harsh as the headline. As that excerpt shows, part of the problem is that Scangos thought that the company was in some therapeutic areas that they shouldn't have been in at all, so that pipeline he's refilling isn't exactly the same one he cleared out. (And a note to the WSJ headline writers: "decimated" isn't a synonym for "got rid of a lot", although that horse, I fear, left the barn a long time ago. The mental image of decimating a pipeline isn't the sharpest vision ever conjured up by a headline, either, but I understand that these things are done on deadline.)

No, if I had to pick the biggest, most expensive reversal done under Biogen's new management, I'd pick the construction site a few blocks from here where they're putting up the company's new Cambridge headquarters. Those are the offices that used to be in. . .well, Cambridge, until former CEO Jim Mullen moved them out to Weston just a couple of years ago. I don't know how long it's going to take them to finish those buildings (right now, they're just past the bare-ground stage), but maybe eventually they can all work there for a few months before someone else decides to move them to Northampton, Nashua, or Novosibirsk.

Comments (20) + TrackBacks (0) | Category: Drug Development

January 12, 2012

Welcome To the Jungle! Here's Your Panther.

Email This Entry

Posted by Derek

English has no word of its own for schadenfreude, so we've had to appropriate the German one, and we're in the process of making it our own - just as we did with "kindergarten", not to mention "ketchup" and "pyjamas", among fifty zillion more. That's because the emotion is not peculiar to German culture, oh no. We can feel shameful joy at others' discomfort with the best of them - like, for example, when people start to discover from experience just how hard drug discovery really is.

John LaMattina has an example over at Drug Truths. Noting the end of a research partnership between Eli Lilly and the Indian company Zydus Cadila, he picked up on this language:

“Developing a new drug from scratch is getting more expensive due to increased regulatory scrutiny and high costs of clinical trials. Lowering costs through a partnership with an Indian drug firm was one way of speeding up the process, but the success rate has not been very high.”

And that, as he correctly notes, is no slam on the Indian companies involved, just as it won't be one on the Chinese companies when they run into the same less-than-expected returns. No, the success rate has not been very high anywhere. Going to India and China might cut your costs a bit (although that window is slowly closing as we watch), but for early-stage research, the costs are not the important factor.

Everything we do in preclinical is a roundoff error compared to a big Phase III trial, as far as direct costs go. What we early-stage types specialize in, God help us, are opportunity costs, and those don't get reported on the quarterly earnings statements. There's no GAAP way to handle the cost of going for the wrong series of lead compounds on the way to the clinic, starting a program on the wrong target entirely, or not starting one instead on something that would have actually panned out. These are the big decisions in early stage research, and they're all judgment calls based on knowledge that is always incomplete. You will not find the answers to the questions just by going to Shanghai or Bangalore. The absolute best you can hope for is to spend a bit less money while searching for them, and thus shave some dollars off what is the smallest part of your R&D budget to start with. Sound like a good deal?

Relative to the other deals on offer, it might just be worthwhile. Such is the state of things, and such are the savings that people are willing to reach for. But when you're in the part of drug discovery that depends on feeling your way into unknown territory - the crucial part - you shouldn't expect any bargains.

Comments (18) + TrackBacks (0) | Category: Business and Markets | Drug Development

January 6, 2012

Do We Believe These Things, Or Not?

Email This Entry

Posted by Derek

Some of the discussions that come up here around clinical attrition rates and compound properties prompt me to see how much we can agree on. So, are these propositions controversial, or not?

1. Too many drugs fail in clinical trials. We are having a great deal of trouble carrying on at these failure rates, given the expense involved.

2. A significant number of these failures are due to lack of efficacy - either none at all, or not enough.

2a. Fixing efficacy failures is hard, since it seems to require deeper knowledge, case-by-case, of disease mechanisms. As it stands, we get a significant amount of this knowledge from our drug failures themselves.

2b. Better target selection without such detailed knowledge is hard to come by. Good phenotypic assays are perhaps the only shortcut, but good phenotypic assays are not easy to develop and validate.

3. Outside of efficacy, a significant number of clinical failures are also due to side effects/toxicity. These two factors (efficacy and tox) account for the great majority of compounds that drop out of the clinic.

3a. Fixing tox/side effect failures through detailed knowledge is perhaps hardest of all, since there are a huge number of possible mechanisms. There are far more ways for things to go wrong than there are for them to work correctly.

3b. But there are broad correlations between molecular structures and properties and the likelihood of toxicity. While not infallible, these correlations are strong enough to be useful, and we should be grateful for anything we can get that might diminish the possibility of later failure.

Examples of such structural features are redox-active groups like nitros and quinones, which really are associated with trouble - not invariably, but enough to make you very cautious. More broadly, high logP values are also associated with trouble in development - not as strongly, but strongly enough to be worth considering.

So, is everyone pretty much in agreement with these things? What I'm saying is that if you take a hundred aryl nitro compounds into development, versus a hundred that don't have such a group, the latter cohort of compounds will surely have a higher success rate. And if you take a hundred compounds with logP values of 1 to 3 into development, these will have a higher success rate than a hundred compounds, against the same targets, with logP of 4 to 6. Do we believe this, or not?

Comments (34) + TrackBacks (0) | Category: Drug Assays | Drug Development | Toxicology

January 5, 2012

Lead-Oriented Synthesis - What Might That Be?

Email This Entry

Posted by Derek

A new paper in Angewandte Chemie tries to open another front in relations between academic and drug industry chemists. It's from several authors at GSK-Stevenage, and it proposes something they're calling "Lead-Oriented Synthesis". So what's that?

Well, the paper itself starts out as a quick tutorial on the state and practice of medicinal chemistry. That's a good plan, since Angewandte Chemie is not primarily a med-chem journal (he said with a straight face). Actually, it has the opposite reputation, a forum where high-end academic chemistry gets showcased. So the authors start off by reminding the readership of what drug discovery entails. And although we've had plenty of discussions around here about these topics, I think that most people can agree on the main points laid out:

1. Physical properties influence a drug's behavior.
2. Among those properties, logP may well be the most important single descriptor.
3. Most successful drugs have logP values between 1 and perhaps 4 or 5. Pushing the lipophilicity end of things is, generally speaking, asking for trouble.
4. Since optimization of lead compounds almost always adds molecular weight, and very frequently adds lipophilicity, lead compounds are better found in (and past) the low ends of these property ranges, to reduce the risk of making an unwieldy final compound.

As the authors take pains to say, though, there are many successful drugs that fall outside these ranges. But many of those turn out to have some special features - antibacterial compounds (for example) tend to be more polar outliers, for reasons that are still being debated. There is, though, no similar class of successful drugs that are less polar than usual, to my knowledge. If you're starting a program against a target that you have no reason to think is an outlier, and assuming you want an oral drug for it, then your chances for success do seem to be higher within the known property ranges.

So, overall, the GSK folks maintain that lead compounds for drug discovery are most desirable with logP values between -1 and 3, molecular weights from around 200 to 350, and no problematic functional groups (redox-active and so on). And I have to agree; given the choice, that's where I'd like to start, too. So why are they telling all this to the readers of Angewandte Chemie? Because these aren't the sorts of compounds that academic chemists are interested in making.
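
As a rough illustration of what a filter along those lines looks like in code, here's a sketch using RDKit. The cutoffs come from the ranges above; the structural alerts are just two or three obvious examples, nowhere near a real in-house filter set like GSK's.

```python
# A crude "lead-like" filter: MW 200-350, cLogP between -1 and 3, and none of
# a (very short, illustrative) list of problematic groups.
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen

ALERTS = [
    Chem.MolFromSmarts("[N+](=O)[O-]"),        # nitro group
    Chem.MolFromSmarts("O=C1C=CC(=O)C=C1"),    # quinone
    Chem.MolFromSmarts("[CX3]=[CX3][CX3]=O"),  # simple Michael acceptor (enone)
]

def is_lead_like(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    if not (200 <= Descriptors.MolWt(mol) <= 350):
        return False
    if not (-1 <= Crippen.MolLogP(mol) <= 3):
        return False
    return not any(mol.HasSubstructMatch(alert) for alert in ALERTS)

for smi in ("O=C(Nc1ccc(F)cc1)C1CCNCC1",    # small polar amide - should pass
            "O=[N+]([O-])c1ccc(Br)cc1"):    # nitroarene - fails the nitro alert
    print(smi, "->", is_lead_like(smi))
```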

For example, a survey of the 2009 issues of the Journal of Organic Chemistry found about 32,700 compounds indexed with the word "preparation" in Chemical Abstracts, after organometallics, isotopically labeled compounds, and commercially available ones were stripped out. 60% of those are outside the molecular weight criteria for lead-like compounds. Over half the remainder fail cLogP, and most of the remaining ones fail the internal GSK structural filters for problematic functional groups. Overall, only about 2% of the JOC compounds from that year would be called "lead-like". A similar analysis across seven other synthetic organic journals led to almost the same results.

Looking at array/library synthesis, as reported in the Journal of Combinatorial Chemistry and from inside GSK's own labs, the authors quantify something else that most chemists suspected: the more polar structures tend to drop out as the work goes on. This "cLogP drift" seems to be due to incompatible chemistries or difficulties in isolation and purification, and this could also illustrate why many new synthetic methods aren't applied in lead-like chemical space: they don't work as well there.

So that's what underlies the call for "lead-oriented synthesis". This paper is asking for the development of robust reactions which will work across a variety of structural types, will be tolerant of polar functionalities, and will generate compounds without such potentially problematic groups as Michael acceptors, nitros, and the like. That's not so easy, when you actually try to do it, and the hope is that it's enough of a challenge to attract people who are trying to develop new chemistry.

Just getting a high-profile paper of this sort out into the literature could help, because it's something to reference in (say) grant applications, to show that the proposed research is really filling a need. Academic chemists tend, broadly, to work on what will advance or maintain their positions and careers, and if coming up with new reactions of this kind can be seen as doing that, then people will step up and try it. And the converse applies, too, and how: if there's no perceived need for it, no one will bother. That's especially true when you're talking about making molecules that are smaller than the usual big-and-complex synthetic targets, and made via harder-than-it-looks chemistry.

Thoughts from the industrial end of things? I'd be happy to see more work like this being done, although I think it's going to take more than one paper like this to get it going. That said, the intersection with popular fragment-based drug design ideas, which are already having an effect in the purely academic world of diversity-oriented synthesis, might give an extra impetus to all this.

Comments (34) + TrackBacks (0) | Category: Chemical News | Drug Assays | Drug Development | The Scientific Literature

December 22, 2011

More From Hua - A Change of Business Plans?

Email This Entry

Posted by Derek

You may remember the mention of Hua Pharmaceuticals here back in August, and the follow-up with details from the company. They're trying to in-license drugs from other companies and get them approved as quickly as possible in China. The original C&E News article made them sound wildly ambitious, while the company's own information just made them sound very ambitious.

Now we have some more information: Roche has licensed their glucokinase activator program (for diabetes) to Hua (that's a development effort I wrote about here). And that's an interesting development, because the Hua folks told me that:

"Hua Medicine intends to in-license patented drugs from the US and EU, and get them on the market and commercialized in the 4 year timeframe in China. This is about the average time it takes imported drugs (drugs that are approved and marketed in the US or EU but are coming newly into the Chinese market) to get approved by the SFDA in China."

And that's fine, but Roche's glucokinase activators haven't been approved or marketed anywhere yet. In fact, I'm not at all sure that the lead compound ever even made it to Phase III, so there's a lot of expensive work to be done yet, and on a groundbreaking mechanism, too. The only thing I can say is that approval in the US for diabetes drugs has gotten a lot harder over the years - the market is pretty well-served, for one thing, and the safety requirements (particularly cardiovascular) have gotten much more stringent. Perhaps these concerns are not so pressing in China, leading to an easier development path?

Easier or not, these compounds have a lot of time and money left to be put into them, which is not the sort of program that Hua seemed to be targeting before. One wonders if there just weren't any safer bets available. At any rate, good luck to them, and to their financial backers. Some will be needed; it always is.

Comments (8) + TrackBacks (0) | Category: Business and Markets | Diabetes and Obesity | Drug Development

December 13, 2011

The Sirtuin Saga

Email This Entry

Posted by Derek

Science has a long article detailing the problems that have developed over the last few years in the whole sirtuin story. That's a process that I've been following here as well (scrolling through this category archive will give you the tale), but this is a different, more personality-driven take. The mess is big enough to warrant a long look, that's for sure:

". . .The result is mass confusion over who's right and who's wrong, and a high-stakes effort to protect reputations, research money, and one of the premier theories in the biology of aging. It's also a story of science gone sour: Several principals have dug in their heels, declined to communicate, and bitterly derided one another. . ."

As the article shows, one of the problems is that many of the players in this drama came out of the same lab (Leonard Guarente's at MIT), so there are issues even beyond the usual ones. Mentioned near the end of the article is the part of the story that I've spent more time on here, the founding of Sirtris and its acquisition by GlaxoSmithKline. It's safe to say that the jury is still out on that one - from all that anyone can tell from outside, it could still work out as a big diabetes/metabolism/oncology success story, or it could turn out to have been a costly (and arguably preventable) mistake. There are a lot of very strongly held opinions on both sides.

Overall, since I've been following this field from the beginning, I find the whole thing a good example of how tough it is to make real progress in fundamental biology. Here you have something that is (or at the very least has appeared to be) very interesting and important, studied by some very hard-working and intelligent people all over the world for years now, with expenditure of huge amounts of time, effort, and money. And just look at it. The questions of what sirtuins do, how they do it, and whether they can be the basis of therapies for human disease - and which diseases - are all still the subject of heated argument. Layers upon layers of difficulty and complexity get peeled back, but the onion looks to be as big as it ever was.

I'm going to relate this to my post the other day about the engineer's approach to biology. This sort of tangle, which differs only in degree and not in kind from many others in the field, illustrates better than anything else how far away we are from formalism. Find some people who are eager to apply modern engineering techniques to medical research, and ask them to take a crack at the sirtuins. Or the nuclear receptors. Or autoimmune disease, or schizophrenia therapies. Turn 'em loose on one of those problems, come back in a year, and see what color their remaining hair is.

Comments (9) + TrackBacks (0) | Category: Aging and Lifespan | Drug Development | Drug Industry History

December 9, 2011

Drugs, Airplanes, and Radios

Email This Entry

Posted by Derek

Wavefunction has a good post in response to this article, which speculates "If we designed airplanes the way we design drugs. . ." I think the original article is worth reading, but some - perhaps many - of its points are arguable. For example:

Every drug that fails in a clinical trial or after it reaches the market due to some adverse effect was “bad” from the day it was first drawn by the chemist. State-of-the-art in silico structure–property prediction tools are not yet able to predict every possible toxicity for new molecular structures, but they are able to predict many of them with good enough accuracy to eliminate many poor molecules prior to synthesis. This process can be done on large chemical libraries in very little time. Why would anyone design, synthesize, and test molecules that are clearly problematic, when so many others are available that can also hit the target? It would be like aerospace companies making and testing every possible rocket motor design rather than running the simulations that would have told them ahead of time that disaster or failure to meet performance specifications was inevitable for most of them.

This particular argument mixes up several important points which should remain separate. Would these simulations have predicted those adverse-effect failures the author mentions? Can they do so now, ex post facto? That would be a very useful piece of information, but in its absence I can't help but wonder if the tools he's talking about would have cheerfully passed Vioxx, or torcetrapib, or the other big failures of recent years. Another question to ask is how many currently successful drugs these tox simulations would have killed off - any numbers there?

The whole essay recalls Lazebnik's famous paper "Can A Biologist Fix A Radio?" (PDF). This is an excellent place to start if you want to explore what I've called the Andy Grove Fallacy. Lazebnik's not having any of the reasons I give for it being a fallacy - for example:

A related argument is that engineering approaches are not applicable to cells because these little wonders are fundamentally different from objects studied by engineers. What is so special about cells is not usually specified, but it is implied that real biologists feel the difference. I consider this argument as a sign of what I call the urea syndrome because of the shock that the scientific community had two hundred years ago after learning that urea can be synthesized by a chemist from inorganic materials. It was assumed that organic chemicals could only be produced by a vital force present in living organisms. Perhaps, when we describe signal transduction pathways properly, we would realize that their similarity to the radio is not superficial. . .

That paper goes on to call for biology to come up with some sort of formal language and notation to describe biochemical systems, something that would facilitate learning and discovery in the same way as circuit diagrams and the like. And that's a really interesting proposal on several levels: would that help? Is it even possible? If so, where to even start? Engineers, like the two authors of the papers I've quoted from, tend to answer "Yes", "Certainly", and "Start anywhere, because it's got to be more useful than what you people have to work with now". But I'm still not convinced.

I've talked about my reasons for this before, but let me add another one: algorithmic complexity. Fields more closely based on physics can take advantage of what's been called "the unreasonable effectiveness" of mathematics. And mathematics, and the principles of physics that can be stated in that form, give an amazingly compact and efficient description of the physical world. Maxwell's equations are a perfect example: there's classical electromagnetism for you, wrapped up into a beautiful little sculpture.
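
For anyone who hasn't looked at them since physics class, here's the whole of classical electromagnetism, differential form, four lines:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}, &
\nabla \cdot \mathbf{B} &= 0, \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, &
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\end{aligned}
```

Now try writing the equivalent four lines for even one signaling pathway.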

But biological systems are harder to reduce - much harder. There are so many nonlinear effects, so many crazy little things that can add up to so much more than you'd ever think. Here's an example - I've been writing about this problem for years now. It's very hard to imagine compressing these things into a formalism, at least not one that would be useful enough to save anyone time or effort.

That doesn't mean it isn't worth trying. Just the fact that I have trouble picturing something doesn't mean it can't exist, that's for sure. And I'd definitely like to be wrong about this one. But where to begin?

Comments (36) + TrackBacks (0) | Category: Drug Development | Drug Industry History

December 6, 2011

Riding to the Rescue of Rhodanines

Email This Entry

Posted by Derek

There's a new paper coming to the defense of rhodanines, a class of compound that has been described as "polluting the scientific literature". Industrial drug discovery people tend to look down on them, but they show up a lot, for sure.

This new paper starts off sounding like a call to arms for rhodanine fans, but when you actually read it, I don't think that there's much ground for disagreement. (That's a phenomenon that's worth writing about sometime by itself - the disconnects between title/abstract and actual body text that occur in the scientific literature). As I see it, the people with a low opinion of rhodanines are saying "Look out! These things hit in a lot of assays, and they're very hard to develop into drugs!". And this paper, when you read the whole thing, is saying something like "Don't throw away all the rhodanines yet! They hit a lot of things, but once in a while one of them can be developed into a drug!" The argument is between people who say that elephants are big and people who say that they have trunks.

The authors prepared a good-sized assortment of rhodanines and similar heterocycles (thiohydantoins, hydantoins, thiazolidinediones) and assayed them across several enzymes. Only the ones with double-bonded sulfur (rhodanines and thiohydantoins) showed a lot of cross-enzyme potency - that group has rather unusual electronic properties, which could be a lot of the story. Here's the conclusion, which is what makes me think that we're all talking about the same thing:

We therefore think that rhodanines and related scaffolds should not be regarded as problematic or promiscuous binders per se. However, it is important to note that the intermolecular interaction profile of these scaffolds makes them prone to bind to a large number of targets with weak or moderate affinity. It may be that the observed moderate affinities of rhodanines and related compounds, e.g. in screening campaigns, has been overinterpreted in the past, and that these compounds have too easily been put forward as lead compounds for further development. We suggest that particularly strong requirements, i.e. affinity in the lower nanomolar range and proven selectivity for the target, are applied in the further assessment of rhodanines and related compounds. A generalized "condemnation" of these chemotypes, however, appears inadequate and would deprive medicinal chemists from attractive building blocks that possess a remarkably high density of intermolecular interaction points.

That's it, right there: the tendency to bind off-target, as noted by these authors, is one of the main reasons that these compounds are regarded with suspicion in the drug industry. We know that we can't test for everything, so when you have one of these structures, you're always fearful of what else it can do once it gets into an animal (or a human). Those downstream factors - stability, pharmacokinetics, toxicity - aren't even addressed in this paper, which is all about screening hits. And that's another source of the bad reputation, for industry people: too many times, people who aren't so worried about those qualities have screened commercial compound collections, come up with rhodanines, and published them as potential drug leads, when (as this paper illustrates) you have to be careful even using them as tool compounds. Given a choice, we'd just rather work on something else. . .

Comments (7) + TrackBacks (0) | Category: Drug Assays | Drug Development | The Scientific Literature

November 18, 2011

Pushing Onwards with CETP: The Big Money and the Big Risks

Posted by Derek

Remember torcetrapib? Pfizer always will. The late Phase III failure of that CETP inhibitor wiped out their chances for an even bigger HDL-raising follow-up to LDL-lowering Lipitor, the world's biggest drug, and changed the future of the company in ways that are still being played out.

But CETP inhibition still makes sense, biochemically. And the market for increasing HDL levels is just as huge as it ever was, since there's still no good way to do it. Merck is pressing ahead with anacetrapib, Roche with dalcetrapib, and Lilly is out with recent data on evacetrapib. All three companies have tried to learn as much as they could from Pfizer's disaster, and are keeping a close eye on the best guesses for why it happened (a small rise in blood pressure and changes in aldosterone levels). So far, so good - but that only takes you so far. Those toxicological changes are reasonable, but they're only hypotheses for why torcetrapib showed a higher death rate in the drug treatment group than it did in the controls. And even that only takes you up to the big questions.

Which are: will raising HDL really make a difference in cardiovascular morbidity and mortality? And if so, is inhibiting CETP the right way to do it? Human lipidology is not nearly as well worked out as some people might think it is, and these are both still very open questions. But such drugs, and such trials, are the only way that we're going to find out the answers. All three companies are risking hundreds of millions of dollars (in an area that's already had one catastrophe) in an effort to find out, and (to be sure) in the hope of making billions of dollars if they're correct.

Will anyone make it through? Will they fail for tox like Pfizer did, telling us that we don't understand CETP inhibitors? Or will they make it past that problem, but not help patients as much as expected, telling us that we don't understand CETP itself, or HDL? Or will all three work as hoped, and arrive in time to split up the market ferociously, making none of them as profitable as the companies might have wanted? If you want to see what big-time drug development is like, I can't think of a better field to illustrate it.

Comments (17) + TrackBacks (0) | Category: Cardiovascular Disease | Drug Development | Toxicology

October 31, 2011

"You Guys Don’t Do Innovation. The iPad. That’s Innovative"

Posted by Derek

Thoughts from Matthew Herper at Forbes about Steve Jobs, modern medicine, what innovation means, and why it can be so hard in some fields. This is relevant to this post and its precursors.

Comments (41) + TrackBacks (0) | Category: Drug Development | Who Discovers and Why

October 26, 2011

Francis Collins Speaks

Posted by Derek

With all the recent talk about the NIH's translational research efforts, and the controversy about their drug screening efforts, this seems like a good time to note this interview with Francis Collins over at BioCentury TV. (It's currently the lead video, but you'll be able to find it in their "Show Guide" afterwards as well).

Collins says that they're not trying to compete with the private sector, but taking a look at the drug development process "the way an engineer would", which takes me back to this morning's post re: Andy Grove. One thing he emphasizes is that he believes that the failure rate is too high because the wrong targets are being picked, and that target validation would be a good thing to improve.

He's also beating the drum for new targets to come out of more sequencing of human genomes, but that's something I'll reserve judgment on. The second clip has some discussion of the DARPA-backed toxicology chip and some questions on repurposing existing drugs. The third clip talks about the FDA's role in all this, and tries to clarify what NIH's role would be in outlicensing any discoveries. (Collins also admits along the way that the whole NCATS proposal has needed some clarifying as well, and doesn't sound happy with some of the press coverage).

Part 5 (part 4 is just a short wrap-up) discusses the current funding environment, and then moves into ethics and conflicts of interest - other people's conflicts, I should note. Worth a lunchtime look!

Comments (16) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | Drug Development

A Note to Andy Grove

Posted by Derek

Readers will recall my occasional pieces on Intel legend Andy Grove's idea for drug discovery. (The first one wasn't too complimentary; the second was a bit more neutral). You always wonder, when you have a blog, if the people you're writing about have a chance to see what you've said - well, in this case, that question's been answered. Here's a recent article by Lisa Krieger in the San Jose Mercury News, detailing Grove's thoughts on medical innovation. Near the end, there's this:

Some biotech insiders are angered by Grove's dismissal of their dedication to the cause.

"It would be daft to suggest that if biopharma simply followed the lead of the semiconductor industry, all would be well," wrote Kevin Davies in the online journal Bio-IT World.com. "The semiconductor industry doesn't have the complex physiology of the human body -- or the FDA, for that matter, to contend with."

In his blog "In The Pipeline," biochemist Derek Lowe called Grove "rich, famous, smart and wrong." Grove's recent editorial, Lowe said, "is not a crazy idea, but I think it still needs some work. ... The details of it, which slide by very quickly in Grove's article, are the real problems. Aren't they always?"

Grove sighed.

"Sticks and stones. ... There were brutal comments but I don't care. The typical comment is 'Chips are not people, go (expletive) yourself.' But to not look over to the other side to see what other people in other professions have done -- that is a lazy intellectual activity."

My purpose in these posts, of course, has not been to insult Andy Grove. That doesn't get any of us anywhere. What I'd like to do, though, since he's clearly sincere about trying to speed up the pace of drug discovery (and with good reason), is to help get him up to speed on what it's like to actually discover drugs. It's not his field; it is mine. But I should note here that being an "expert" in drug discovery doesn't exactly give you a lot of great tools to ensure success, unfortunately. What it does give you is the rough location of a lot of sinkholes that you might want to try to avoid. ("So you can go plunge into new, unexplored sinkholes", says a voice from the back.)

Grove's certainly a man worth taking seriously, and I hope that he, in turn, takes seriously those of us over here in the drug industry. This really is a strange business, and it's worth getting to know it. People like me - and there are still a lot of us, although it seems from all the layoffs that there are fewer every month - are the equivalents of the chip designers and production engineers at Intel. We have one foot in the labs, trying to troubleshoot this or that process, and figure out what the latest results mean. And we have one foot in the offices, where we try to see where the whole effort is going, and where it should go next. I think that perspectives from this level of drug research would be useful for someone like Andy Grove to experience: not so far down in the details that you can't see the sky, but not so far up in the air that all you see are the big, sweeping vistas.

And conversely, I think that we should take him up on his offer to look at what people in the chip industry (and others) have done. It can't hurt; we definitely need all the help we can get over here. I can't, off the top of my head, see many things that we could pick up on, for the reasons given in those earlier posts, but then again, I haven't worked over there, in the same way that Andy Grove hasn't worked over here. It's worth a try - and if anyone out there in the readership (journalist, engineer, what have you) would like to forward that on to Grove himself, please do. I'm always surprised at just how many people around the industry read this site, and to start a big discussion among people who actually do drug discovery, you could do worse.

Comments (46) + TrackBacks (0) | Category: Drug Development

October 17, 2011

Harvard to the Rescue

Posted by Derek

Harvard is announcing a big initiative in systems biology, which is an interdisciplinary opportunity if there ever was one.

The Initiative in Systems Pharmacology is a signature component of the HMS Program in Translational Science and Therapeutics. There are two broad goals: first, to increase significantly our knowledge of human disease mechanisms, the nature of heterogeneity of disease expression in different individuals, and how therapeutics act in the human system; and second — based on this knowledge — to provide more effective translation of ideas to our patients, by improving the quality of drug candidates as they enter the clinical testing and regulatory approval process, thereby aiming to increase the number of efficacious diagnostics and therapies reaching patients.

All worthy stuff, of course. But there are a few questions that come up. These drug candidates that Harvard is going to be improving the quality of. . .whose are those, exactly? Harvard doesn't develop drugs, you know, although you might not realize that if you just read the press releases. And the e-mail announcement sent out to the Harvard Medical School list is rather less modest about the whole effort:

With this Initiative in Systems Pharmacology, Harvard Medical School is reframing classical pharmacology and marshaling its unparalleled intellectual resources to take a novel approach to an urgent problem: The alarming slowdown in development of new and lifesaving drugs.

A better understanding of the whole system of biological molecules that controls medically important biological behavior, and the effects of drugs on that system, will help to identify the best drug targets and biomarkers. This will help to select earlier the most promising drug candidates, ultimately making drug discovery and development faster, cheaper and more effective. A deeper understanding will also help clinicians personalize drug therapies, making better use of medicine we already have.

Again with all those drug candidates - and again, whose candidates are they going to be selecting? Don't get me wrong; I actually wish everyone well in this effort. There really are a lot of excellent scientists at Harvard, even if they tell you so, and this is the sort of problem that can take (and has taken) everything that people can throw at it. But it's also worth remembering Harvard's approach to licensing and industrial collaboration. It's. . .well, let's just say that they didn't get that endowment up to its present size by letting much slip through their fingers. Many are those who've negotiated with the university and come away wanting to add ". . .et Pecunia" to that Latin motto.

So we'll see what comes out of this. But Harvard Medical School is indeed on the case.

Comments (41) + TrackBacks (0) | Category: Drug Development

October 11, 2011

Too Many Cancer Drugs? Too Few? About Right?

Posted by Derek

According to Bruce Booth (@LifeSciVC on Twitter), Ernst & Young have estimated the proportion of drugs in the clinic in the US that are targeting cancer. Anyone want to pause for a moment to make a mental estimate of their own?

Well, I can tell you that I was a bit low. The E&Y number is 44%. The first thought I have is that I'd like to see that in some historical perspective, because I'd guess that it's been climbing for at least ten years now. My second thought is to wonder if that number is too high - no, not whether the estimate is too high. Assuming that the estimate is correct, is that too high a proportion of drug research being spent in oncology, or not?

Several factors led to the rise in the first place - lots of potential targets, ability to charge a lot for anything effective, an overall shorter and more definitive clinical pathway, no need for huge expensive ad campaigns to reach the specialists. Have these caused us to overshoot?

Comments (22) + TrackBacks (0) | Category: Cancer | Clinical Trials | Drug Development | Drug Industry History

October 7, 2011

Different Drug Companies Make Rather Different Compounds

Posted by Derek

Now here's a paper, packed to the edges with data, on what kinds of drug candidate compounds different companies produce. The authors assembled their list via the best method available to outsiders: they looked at what compounds are exemplified in patent filings.

What they find is that, over the 2000-2010 period, not much change has taken place, on average, in the properties of the molecules that are showing up. Note that we're assuming, for purposes of discussion, that these properties - things like molecular weight, logP, polar surface area, amount of aromaticity - are relevant. I'd have to say that they are. They're not the end of the discussion, because there are plenty of drugs that violate one or more of these criteria. But there are even more that don't, and given the finite amount of time and money we have to work with, you're probably better off approaching a new target with five hundred thousand compounds that are well within the drug-like properties boxes rather than five hundred thousand that aren't. And at the other end of things, you're probably better off with ten clinical candidates that mostly fit versus ten that mostly don't.
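
If you want to poke at these properties yourself, here's a minimal sketch using the open-source RDKit toolkit. The cutoff values are generic rules of thumb I've plugged in for illustration - they are not the thresholds used in the paper.

```python
# Minimal property-profiling sketch, assuming RDKit is installed.
# The cutoffs are illustrative rule-of-thumb values, not the paper's thresholds.
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

def property_profile(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return {
        "mw": Descriptors.MolWt(mol),                        # molecular weight
        "clogp": Descriptors.MolLogP(mol),                   # calculated logP
        "tpsa": Descriptors.TPSA(mol),                       # polar surface area
        "rot_bonds": Descriptors.NumRotatableBonds(mol),
        "arom_rings": rdMolDescriptors.CalcNumAromaticRings(mol),
    }

def in_the_box(p, mw_max=500, clogp_max=5, tpsa_max=140, rot_max=10, arom_max=4):
    # a crude "drug-like properties box" check
    return (p["mw"] <= mw_max and p["clogp"] <= clogp_max and p["tpsa"] <= tpsa_max
            and p["rot_bonds"] <= rot_max and p["arom_rings"] <= arom_max)

for smi in ["CC(=O)Oc1ccccc1C(=O)O",      # aspirin
            "CCCCCCCCCCCCCCCCCC(=O)O"]:   # stearic acid, for contrast
    p = property_profile(smi)
    print(smi, p, in_the_box(p))
```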

But even if overall properties don't seem to be changing much, that doesn't mean that there aren't differences between companies. That's actually the main thrust of the paper: the authors compare Abbott, Amgen, AstraZeneca, Bayer-Schering, Boehringer, Bristol-Myers Squibb, GlaxoSmithKline, J&J, Lilly, Merck, Novartis, Pfizer, Roche, Sanofi, Schering-Plough, Takeda, Wyeth, and Vertex. Of course, these organizations filed different numbers of patents, on different targets, with different numbers of compounds. For the record, Merck and GSK filed the most patents during those ten years (over 1500), while Amgen and Takeda filed the fewest (under 300). Merck and BMS had the largest number of unique compounds (over 70,000), and Takeda and Bayer-Schering had the fewest (in the low 20,000s). I should note that AstraZeneca just missed the top two in both patents and compounds.
[Radar plot from the paper: target-unbiased physicochemical properties, normalized and compared across companies]
If you just look at the raw numbers, ignoring targeting and therapeutic areas, Wyeth, Bayer-Schering, and Novartis come out looking the worst for properties, while Vertex and Pfizer look the best. But what's interesting is that even after you correct for targets and the like, organizations still differ quite a bit in the sorts of compounds that they turn out. Takeda, Lilly, and Wyeth, for example, were at the top of the cLogP rankings (numerically, "top" meaning the greasiest). Meanwhile, Vertex, Pfizer, and AstraZeneca were at the other end of the scale in cLogP. In molecular weight, Novartis, Boehringer, and Schering-Plough were at the high end (up around 475), while Vertex was at the low end (around 425). I'm showing a radar-style plot from the paper where they cover several different target-unbiased properties (which have been normalized for scale), and you can see that different companies do cover very different sorts of space. (The numbers next to the company names are the total number of shared targets found and the total number of shared-target observations used - see the paper if you need more details on how they compiled the numbers).

Now, it's fair to ask how relevant the whole sweep of patented compounds might be, since only a few ever make it deep into the clinic. And some companies just have different IP approaches, patenting more broadly or narrowly. But there's an interesting comparison near the end of the paper, where the authors take a look at the set of patents that cover only single compounds. Now, those are things that someone has truly found interesting and worth extra layers of IP protection, and they average to significantly lower molecular weights, cLogP values, and number of rotatable bonds than the general run of patented compounds. Which just gets back to the points I was making in the first paragraph - other things being equal, that's where you'd want to spend more of your time and money.

What's odd is that the trends over the last ten years haven't been more pronounced. As the paper puts it:

Over the past decade, the mean overall physico-chemical space used by many pharmaceutical companies has not changed substantially, and the overall output remains worryingly at the periphery of historical oral drug chemical space. This is despite the fact that potential candidate drugs, identified in patents protecting single compounds, seem to reflect physiological and developmental pressures, as they have improved drug-like properties relative to the full industry patent portfolio. Given these facts, and the established influence of molecular properties on ADMET risks and pipeline progression, it remains surprising that many organizations are not adjusting their strategies.

The big question that this paper leaves unanswered, because there's no way for them to answer it, is how these inter-organizational differences get going and how they continue. I'll add my speculations in another post - but speculations they will be.

Comments (30) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

September 2, 2011

How Many New Drug Targets Aren't Even Real?

Posted by Derek

So, are half the interesting new results in the medical/biology/med-chem literature impossible to reproduce? I linked earlier this year to an informal estimate from venture capitalist Bruce Booth, who said that this was his (and others') experience in the business. Now comes a new study from Bayer Pharmaceuticals that helps put some backing behind those numbers.

To mitigate some of the risks of such investments ultimately being wasted, most pharmaceutical companies run in-house target validation programmes. However, validation projects that were started in our company based on exciting published data have often resulted in disillusionment when key data could not be reproduced. Talking to scientists, both in academia and in industry, there seems to be a general impression that many results that are published are hard to reproduce. However, there is an imbalance between this apparently widespread impression and its public recognition. . .

Yes, indeed. The authors looked back at the last four years worth of oncology, women's health, and cardiovascular target validation efforts inside Bayer (this would put it right after they combined with Schering AG of Berlin). They surveyed all the scientists involved in early drug discovery in those areas, and had them tally up the literature results they'd acted on and whether they'd panned out or not. I should note that this is the perfect place to generate such numbers, since the industry scientists are not in it for publication glory, grant applications, or tenure reviews: they're interested in finding drug targets that look like they can be prosecuted, in order to find drugs that could make them money. You may or may not find those to be pure or admirable motives (I have no problem at all with them, personally!), but I think we can all agree that they're direct and understandable ones. And they may be a bit orthogonal to the motives that led to the initial publications. . .so, are they? The results:

"We received input from 23 scientists (heads of laboratories) and collected data from 67 projects, most of them (47) from the field of oncology. This analysis revealed that only in ~20–25% of the projects were the relevant published data completely in line with our in-house findings. In almost two-thirds of the projects, there were inconsistencies between published data and in-house data that either considerably prolonged the duration of the target validation process or, in most cases, resulted in termination of the projects. . ."

So Booth's estimate may actually have been too generous. How does this gap get so wide? The authors suggest a number of plausible reasons: small sample sizes in the original papers, leading to statistical problems, for one. The pressure to publish in academia has to be a huge part of the problem - you get something good, something hot, and you write that stuff up for the best journal you can get it into - right? And it's really only the positive results that you hear about in the literature in general, which can extend so far as (consciously or unconsciously) publishing just on the parts that worked. Or looked like they worked.
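
The small-sample-size point is easy to see with a toy simulation (my own illustration, not the Bayer analysis): assume most candidate targets have no real effect, run small experiments, "publish" only the significant results, and then try to repeat them. All the parameters below are assumptions.

```python
# Toy reproducibility model (an illustration, not the Bayer analysis).
# Most candidate targets are assumed to have no real effect; small experiments
# are run, only "significant" results get published, then each one is re-tested.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_candidates, n_per_group = 10_000, 4
true_effect_fraction, effect_size = 0.1, 1.0   # assumptions, for illustration only

published, reproduced = 0, 0
for _ in range(n_candidates):
    real = rng.random() < true_effect_fraction
    mu = effect_size if real else 0.0
    control, treated = rng.normal(0, 1, n_per_group), rng.normal(mu, 1, n_per_group)
    if stats.ttest_ind(control, treated).pvalue < 0.05:      # "exciting" -> published
        published += 1
        c2, t2 = rng.normal(0, 1, n_per_group), rng.normal(mu, 1, n_per_group)
        if stats.ttest_ind(c2, t2).pvalue < 0.05:            # the in-house repeat
            reproduced += 1

print(f"published: {published}, reproduced: {reproduced} "
      f"({100 * reproduced / max(published, 1):.0f}%)")
```

With settings like these, only a small minority of the "published" hits come back positive on the repeat - and note that nobody in the simulation is cheating.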

But the Bayer team is not alleging fraud - just irreproducibility. And it seems clear that irreproducibility is a bigger problem than a lot of people realize. But that's the way that science works, or is supposed to. When you see some neat new result, your first thought should be "I wonder if that's true?" You may have no particular reason to doubt it, but in an area with as many potential problems as discovery of new drug targets, you don't need any particular reasons. Not all this stuff is real. You have to make every new idea perform the same tricks in front of your own audience, on your own stage under bright lights, before you get too excited.

Comments (51) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | Drug Development

September 1, 2011

GlaxoSmithKline Reviews the Troops

Posted by Derek

Several readers sent along this article from the Times of London (via the Ottawa Citizen) on GlaxoSmithKline's current research setup. You can tell that the company is trying to get press for this effort, because otherwise these are the sorts of internal arrangements that would never be in the newspapers. (The direct quotes from the various people in the article are also a clear sign that GSK wants the publicity).

The piece details the three-year cycle of the company's Drug Performance Units (DPUs), which have to come and justify their existence at those intervals. We're just now hitting the first three-year review, and as the article says, not all the DPUs are expected to make it through:

In 2008, the company organized its scientists into small teams, some with just a handful of staff, and set them to work on different diseases. At the time, every one of these drug performance units (DPUs) had to plead its case for a slice of Glaxo’s four-billion-pound research and development budget. Three years on and each of the 38 DPUs is having to plead its case for another dollop of funding to 2014. . .

. . .Such a far-reaching overhaul of a fundamental part of the business has proved painful to achieve. Witty said: “If you look across research and development at Glaxo, I would say we are night-and-day different from where we were three, four, five years ago. It has been a tough period of change and challenge for people in the company. When you go through that period, of course there are moments when morale is challenged and people are worried about what will happen.”

But he said it has been worth the upheaval: “The research and development organization has never been healthier in terms of its performance and in terms of its potential.”

I'm not in a position to say whether he's right or not. One problem (mentioned by an executive in the story) is that three years isn't really long enough to say whether things are working out or not. That might give you a read on the number of preclinical projects, whether that seems to be increasing or not. But that number is notoriously easy to jigger around - just lower the bar a bit, and your productivity problem is solved, on paper. The big question is the quality of those compounds and projects, and that takes a lot more time to evaluate. And then there's the problem that whatever improvement you can actually make in that quality may still not be enough to really affect your clinical failure rates much, anyway, depending on the therapeutic area.

Is this a sound idea, though? It could be - asking projects and therapeutic areas to justify their existence every so often could keep them from going off the rails and motivate them to produce results. Or, on the other hand, it could motivate them to tell management exactly what they want to hear, whether that corresponds to reality or not. All of these tools can cut in both directions, and I've no idea which way the blades are moving at GSK.

There's another consideration that applies to any new management scheme. How long will GSK give this system? How many three-year cycles will be needed to really say if it's effective, and how many will actually be run? Has any big drug company kept its R&D arrangements stable for as long as nine years, say, in recent history?

Comments (35) + TrackBacks (0) | Category: Drug Development | Drug Industry History

August 29, 2011

Chinese Pharma: No Shortage of Ambition, Anyway

Posted by Derek

When does China take the next step in drug research? They already have a huge contract research industry, and they have branches of many of the major pharma companies. But when does a Chinese startup, doing its own research with its own people in China, develop its own international-level drug pipeline? (We'll leave aside the problem that not even all the traditional drug companies seem to be able to do that these days). It still seems clear that we're eventually going to have a Chinese Merck, or a Chinese Novartis or what have you - a company to join North America, Western Europe, and Japan in the big leagues. The Chinese government, especially, would seem to find this idea very appealing.

Opinions differ, to put it mildly, about how far away this prospect is. But Chemical and Engineering News is out with an article on homegrown Chinese research that explores just this sort of question. Then you run into passages like this:

In a meeting room in a building resembling a residential home in Shanghai’s Zhangjiang Hi-Tech Park, Li Chen and John Choi describe the business plan of their new company. Called Hua Medicine, the firm will launch breakthrough drugs within four years, they predict. Hua will manufacture the compounds and sell them with its own sales force. It will also license its internally developed drugs to multinational companies.

Yet right now, Hua is a modest operation that employs eight people. Hua doesn’t have an R&D lab yet, let alone a manufacturing facility. It operates in a loaned building formerly used by the administrators of the industrial park...

It can be easy to dismiss such ambitious business plans as simply talk aimed at gullible investors or government officials handing out subsidies. Except several start-ups are led by people who have long track records of success. Moreover, the money financing these start-ups comes not from relatives and friends, but from savvy investors knowledgeable about the drug industry.

Well. . .yeah. Let me join those who dismiss business plans that are as ambitious as that one. The way I understand the drug industry, if you're planning on launching a breakthrough drug within four years, you must have that drug in your hand right now, and it has to have had a lot of preclinical work done on it already (and in most therapeutic areas, it needs to have already hit the clinic). And note, these guys aren't talking about their one pet compound, they're talking about launching drugs, plural. Drugs that they discover, develop, manufacture and sell. And they have 8 people and no labs.

No, something is off here. I get the same feeling from this that I get from a lot of leapfrog-the-world plans, the feeling that something just isn't quite right and that the world doesn't allow itself to be hopped over on such a deliberate schedule. Thoughts?

Comments (47) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

August 5, 2011

Bernard Munos Rides Again

Posted by Derek

I've been meaning to link to Matthew Herper's piece on Bernard Munos and his ideas on what's wrong with the drug business. Readers will recall several long discussions here about Munos and his published thoughts (Parts one, two, three and four). A take-home message:

So how can companies avoid tossing away billions on medicines that won’t work? By picking better targets. Munos says the companies that have done best made very big bets in untrammeled areas of pharmacology. . .Munos also showed that mergers—endemic in the industry—don’t fix productivity and may actually hurt it. . . What correlated most with the number of new drugs approved was the total number of companies in the industry. More companies, more successful drugs.

I should note that the last time I saw Munos, he was emphasizing that these big bets need to be in areas where you can get a solid answer in the clinic in the shortest amount of time possible - otherwise, you're really setting yourself up with too much risk. Alzheimer's, for example, is a disease that he was advising that drug developers basically stay away from: tricky unanswered medical questions, tough drug development problems, followed up by big huge long expensive clinical trials. If you're going to jump into a wild, untamed medical area (as he says you should), then pick one where you don't have to spend years in the clinic. (And yes, this would seem to mean a focus on an awful lot of orphan diseases, the way I look at it).

But, as the article goes on to say, the next thought after all this is: why do your researchers need to be in the same building? Or the same site? Or in the same company? Why not spin out the various areas and programs as much as possible, so that as many new ideas get tried out as can be tried? One way to interpret that is "Outsource everything!" which is where a lot of people jump off the bus. But he's not thinking in terms of "Keep lots of central control and make other people do all your grunt work". His take is more radical:

(Munos) points to the Pentagon’s Defense Advanced Research Projects Agency, the innovation engine of the military, which developed GPS, night vision and biosensors with a staff of only 140 people—and vast imagination. What if drug companies acted that way? What areas of medicine might be revolutionized?

DARPA is a very interesting case, which a lot of people have sought to emulate. From what I know of them, their success has indeed been through funding - lightly funding - an awful lot of ideas, and basically giving them just enough money to try to prove their worth before doling out any more. They have not been afraid of going after a lot of things that might be considered "out there", which is to their credit. But neither have they been charged with making money, much less reporting earnings quarterly. I don't really know what the intersection of DARPA and a publicly traded company might look like (the old Bell Labs?), or if that's possible today. If it isn't, so much the worse for us, most likely.

Comments (114) + TrackBacks (0) | Category: Alzheimer's Disease | Business and Markets | Clinical Trials | Drug Development | Drug Industry History | Who Discovers and Why

July 29, 2011

2011 Drug Approvals Are Up: We Rule, Right?

Posted by Derek

I've been meaning to comment on this article from the Wall Street Journal - the authors take a look at the drug approval numbers so far this year, and speculate that the industry is turning around.

Well, put me in the "not so fast" category. And I have plenty of company there. Bruce Booth (from the venture capital end), John LaMattina (ex-Pfizer R&D head), and Matthew Herper at Forbes aren't buying it either.

One of the biggest problems with the WSJ thesis is that most of these drugs have been in development for longer than the authors seem to think. Bruce Booth's post goes over this in detail, and he's surely correct that these drugs were basically all born in the 1990s. Nothing that's changed in the research labs in the last 5 to 10 years is likely to have significantly affected their course; we're going to have to wait several more years to see any effects. (And even then it's unlikely that we're going to get any unambiguous signals; there are too many variables in play). That, as many people have pointed out over the years, is one of the trickiest parts about drug R&D: the timelines are so long and complex that it's very hard to assign cause and effect to any big changes that you make. If your car only responds to the brake pedal and steering wheel a half hour after you touch them, how can you tell if that fancy new GPS you bought is doing you any good?

Comments (8) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Press Coverage | Regulatory Affairs

July 20, 2011

Will Macrocycles Get It Done?

Posted by Derek

Here's an article from Xconomy on Ensemble Therapeutics, a company that spun off from work in David Liu's lab at Harvard. Their focus these days is on a huge library of macrocyclic compounds (prepared by using DNA tags to bring the reactants together, which is a topic for a whole different post). They're screening against several targets, and with several partners. Why macrocycles?

Well, there's been a persistent belief, with some evidence behind it, that medium- and large-ring compounds are somehow different. Cyclic peptides certainly can be distinguished from their linear counterparts - some of that can be explained by their being unnatural (and poor) substrates for some of the proteases that would normally clear them out, but there can be differences in distribution and cell penetration as well. The great majority of non-peptidic macrocycles that have been studied in biological systems are natural products - plenty of classic antibiotics and the like are large rings. I worked on one for my PhD, although I never quite closed the ring on the sucker.

You can look at that natural product distribution in two ways: one view might be that we have an exaggerated idea of the hit rate of macrocycles, because we've been looking at a bunch of evolutionarily optimized compounds. But the other argument is that macrocycles aren't all that easy to make, therefore evolutionary pressures must have led to so many of them for some good reasons, and we should try to take advantage of the evidence that's in front of us.

What's for sure is that macrocyclic compounds are under-represented in drug industry screening collections, so there's an argument to be made just on that basis. (You do see them once in a while). And the chemical space that they cover is probably not something that other compounds can easily pick up. Large rings are a bit peculiar - they have some conformational flexibility, in most cases, but only within a limited range. So if you're broadly in the right space for hitting a drug target, you probably won't pay as big an entropic penalty when a macrocycle binds. It already had its wings clipped to start with. And as mentioned above, there's evidence that these compounds can do a better job of crossing membranes than you'd guess from their size and functionality. One hope is that these properties will allow molecular weight ranges to be safely pushed up a bit, allowing a better chance for hitting nontraditional targets such as protein-protein interactions.

All this has led to a revival of med-chem interest in the field, so Ensemble is selling their wares at just the right time. One reason that there haven't been so many macrocycles in the screening decks is that they haven't been all that easy to make. But besides Liu's DNA templating, some other interesting synthetic methods have been coming along - the Nobel-worthy olefin metathesis reaction has been recognized for some time as a good entry into the area, and Keith James out at Scripps has been publishing on macrocyclic triazoles via the copper-catalyzed click reaction. Here's a recent review in J. Med. Chem., and here's another. It's going to be interesting to see how this all works out - and it's also a safe bet that this won't be the only neglected and tricky area that we're going to find ourselves paying more attention to. . .

Comments (31) + TrackBacks (0) | Category: Chemical News | Drug Development

July 7, 2011

Phenotypic Screening For the Win

Posted by Derek

Here's another new article in Nature Reviews Drug Discovery that (for once) isn't titled something like "The Productivity Crisis in Drug Research: Hire Us And We'll Consult Your Problems Away". This one is a look back at where drugs have come from.

Looking over drug approvals (259 of them) between 1999 and 2008, the authors find that phenotypic screens account for a surprising number of the winners. (For those not in the business, a phenotypic screen is one where you give compounds to some cell- or animal-based assay and look for effects. That's in contrast to the target-based approach, where you identify some sort of target as being likely important in a given disease state and set out to find a molecule to affect it. Phenotypic screens were the only kinds around in the old days (before, say, the mid-1970s or thereabouts), but they've been making a comeback - see below!)

Out of the 259 approvals, there were 75 first-in-class drugs and 164 followers (the rest were imaging agents and the like). 100 of the total were discovered using target-based approaches, 58 through phenotypic approaches, and 18 through modifying natural substances. There were also 56 biologics, which were all assigned to the target-based category. But out of the first-in-class small molecules, 28 of them could be assigned to phenotypic assays and only 17 to target-based approaches. Considering how strongly tilted the industry has been toward target-based drug discovery, that's really disproportionate. CNS and infectious disease were the therapeutic areas that benefited the most from phenotypic screening, which makes sense. We really don't understand the targets and mechanisms in the former, and the latter provide what are probably the most straightforward and meaningful phenotypic assays in the whole business. The authors' conclusion:

(this) leads us to propose that a focus on target-based drug discovery, without accounting sufficiently for the MMOA (molecular mechanism of action) of small-molecule first-in-class medicines, could be a technical reason contributing to high attrition rates. Our reasoning for this proposal is that the MMOA is a key factor for the success of all approaches, but is addressed in different ways and at different points in the various approaches. . .

. . .The increased reliance on hypothesis-driven target-based approaches in drug discovery has coincided with the sequencing of the human genome and an apparent belief by some that every target can provide the basis for a drug. As such, research across the pharmaceutical industry as well as academic institutions has increasingly focused on targets, arguably at the expense of the development of preclinical assays that translate more effectively into clinical effects in patients with a specific disease.

I have to say, I agree (and have said so here on the blog before). It's good to see some numbers put to that belief, though. This, in fact, was the reason why I thought that the NIH funding for translational research might be partly spent on new phenotypic approaches. Will we look back on the late 20th century/early 21st as a target-based detour in drug discovery?

Comments (36) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

July 2, 2011

Innovation and Return (Europe vs. the US)

Posted by Derek

Here's another look at the productivity problems in drug R&D. The authors are looking at attrition rates, development timelines, targets and therapeutic areas, and trying to find some trends to explain (or at least illuminate) what's been going on.

Their take? Attrition rates have been rising at all phases of drug development, and most steeply in Phase III. (This sounds right to me). Here are their charts:
[Charts from the paper: attrition rates by phase of development over time]
And when they look at where the drug R&D efforts have been going, they find that comparatively more time and money has been spent on targets with lower probability of success. That means (among other things) more oncology, Alzheimer's, arthritis, Parkinson's et al. and less cardiovascular and anti-HIV.

That makes sense, too, in a paradoxical way. If we were to get drugs in those areas, the expected returns would be higher than if we found them in the well-established ones. The regulatory barriers would be smaller, the competition thinner, the potential markets more enthusiastic about new therapies - everything's lined up. If you can find a drug, that is. The problem is the higher failure rates. We knew that going in, of course, but the expectation was that the greater rewards would cancel that out. But what if they don't? What if, for a protracted period, there are no rewards at all?
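
Here's the back-of-the-envelope version of that bet. Every number below is one I've made up to illustrate the shape of the problem - none of them come from the paper.

```python
# Back-of-the-envelope program economics. All figures (in $M) are invented
# to illustrate the shape of the bet, not taken from the paper.
def expected_value(p_success, payoff_npv, rd_cost):
    # one-shot model: ignores time value, phase-by-phase spending, and competition
    return p_success * payoff_npv - rd_cost

print(expected_value(0.15, 1000, 100))   # established target class:       50.0
print(expected_value(0.06, 3000, 100))   # novel target, premium pays off:  80.0
print(expected_value(0.06, 1500, 100))   # novel target, premium doesn't:  -10.0
```

The industry's shift toward riskier targets is a wager on that middle line; the bottom line is what you're left with when the premium never materializes.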

The paper also has a very interesting analysis of European firms versus US ones. Instead of looking at where companies might be headquartered, the authors used the addresses of the inventors on patent filings as a better location indicator. Over 18,000 projects started by companies or public research organizations between 1990 and 2007 were examined, and they found:

Although at a first glance, European organizations seem to have higher success rates compared with US organizations, after controlling for the larger share of biotechnology companies and PROs in the United States and for differences in the composition of R&D portfolios, there is no significant gap between European and US organizations in this respect. Unconditional differences (that is, differences arising when no controls are taken into account) are driven by the higher propensity of US organizations to focus on novel R&D methodologies and riskier therapeutic endeavours. . .as an average US organization takes more risk, when successful, they attain higher price premiums than the European organizations.

The other take-home has to do with "me-too" compounds versus first-in-class ones, and is worth considering:

". . .both private and public payers discourage incremental innovation and investments in follow-on drugs in already established therapeutic classes, mostly by the use of reference pricing schemes and bids designed to maximize the intensity of price competition among different molecules. Indeed, in established markets, innovative patented drugs are often reimbursed at the same level as older drugs. As a consequence, R&D investments tend to focus on new therapeutic targets, which are characterized by high uncertainty and difficulty, but lower expected post-launch competition. Our empirical investigation indicates that this reorienting of investments accounts for most of the recent decline in productivity in pharmaceutical R&D, as measured in terms of attrition rates, development times and the number of NMEs launched."

So, rather than being in trouble for not trying to be innovative enough, according to these guys, we're in trouble for innovating too much. . .

Comments (26) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

June 28, 2011

Drug R&D Spending Now Down (But Look at the History)

Posted by Derek

I hate to be such a shining beacon of happiness today, but this news can't very well be ignored, can it? For the first time ever, total drug R&D spending seems to have declined:

The global drug industry cut its research spending for the first time ever in 2010, after decades of relentless increases, and the pace of decline looks set to quicken this year.

Overall expenditure on discovering and developing new medicines amounted to an estimated $68 billion last year, down nearly 3 percent on the $70 billion spent in both 2008 and 2009, according to Thomson Reuters data released on Monday.

The fall reflects a growing disillusionment with poor returns on pharmaceutical R&D. Disappointing research productivity is arguably the biggest single factor behind the declining valuations of the sector over the past decade.

This is not good - although, to be sure, we've had plenty of warning that this day would be coming. But looking at it from another perspective, you might wonder what's taken so long. Matthew Herper has a piece up highlighting the chart below, from the Boston Consulting Group. It plots new drugs versus R&D spending in constant dollars, and if you're wondering what the Good Old Days looked like, here they are. Or were:
[Chart from the Boston Consulting Group: new drug approvals versus R&D spending in constant dollars, by year]
What's most intriguing to me about this graph is the way it seems to validate the "low-hanging fruit" argument. This looks like the course of an industry that has, from the very beginning of its modern era, been finding it steadily, relentlessly harder to mine the ore that it runs on. But that analogy leaves out another key factor that makes that line go down: good drugs don't go away. They just go generic, and get cheaper than ever. You can also interpret this graph as showing the gradual buildup of cheap, effective generics for a number of major conditions (cardiovascular, in particular).

There's one other factor that ties in with those thoughts - the therapeutic areas that we've been able to address. Look at that spike in the 1990s, labeled PDUFA and HIV. Part of that jump is, as a colleague theorized with me just this morning, the fact that a completely new disease appeared. And it was one that, in the end, we could do something about - as opposed to, say, Alzheimer's. So if you want to be completely evil about it, then the Huey Lewis model of fixing pharma has it wrong: we don't need a new drug. We need a new disease. Or several.

Well, that's clearly not the way to look at it. I don't actually think that we need to add to the list of human ailments; it's long enough already. But given all the factors listed (and the ever-tightening regulatory/safety environment, on top of them), another colleague of mine looked at this chart and asked if we ever could have expected it to look any different. Could that line go anywhere else but down? The promise of things like the genomics frenzy was, I think, that it would turn things around (and that hope still lives on in the heart of Francis Collins), even though some people argue that it did the reverse.

Comments (60) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

June 16, 2011

What Translational Research Should Academia Do?

Posted by Derek

We've talked quite a bit around here about academic (and nonindustrial) drug discovery, but those posts have mostly divided into two parts. There's the early-stage discovery work that really gets done in some places, and then there's the proposal for the big push into translational research by the NIH. That, broadly defined, is (a) the process of turning an interesting idea into a real drug target, or (b) turning an interesting compound into a real drug. One of the things that the recent survey of academic centers made clear, I'd say, is that the latter kind of work is hardly being done at all outside of industry. The former is a bit more common, but still suffers from the general academic bias: walking away too soon in order to move on to the next interesting thing. Both these translational processes involve a lot of laborious detail work, of the kind that does not mint fresh PhDs nor energize the post-docs.

But if there's funding to do it, it'll get done in some fashion, and we can expect to see a lot of people trying their hand at these things. Many universities are all for it, too, since they imagine that there will be some lucrative technology transfers waiting at the end of the process. (One of the remarkable things about the drug industry is how many people outside it see it as the place to get rich).

I had an e-mail from Jonathan Gitlin on this subject, who asks the question: if academia is going to do these things, what should they be doing to keep the money from being wasted? It's definitely worth thinking about, since there are so many drains for the money to go spiraling down. Mind you, most money spent on these things is (in the most immediate sense) wasted, since most ideas for drug targets turn out to be mistaken, and most compounds turn out not to be drugs. No matter what, we're going to have to be braced for that - even strong improvements in both those percentages would still leave us with what (to people with fresh eyes) would seem horrific failure rates.
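
A quick way to see why the failure rates stay horrific: overall success is the product of the stage-by-stage probabilities, so even a big improvement at one step barely moves the bottom line. The stage probabilities below are round numbers I've assumed purely for illustration.

```python
# Compounding attrition: overall success is the product of per-stage probabilities.
# The figures here are illustrative assumptions, not measured industry rates.
stages = {"target validation": 0.5, "hit-to-lead": 0.5, "preclinical": 0.6,
          "phase I": 0.6, "phase II": 0.3, "phase III": 0.6, "approval": 0.9}

def overall(probs):
    total = 1.0
    for p in probs.values():
        total *= p
    return total

baseline = overall(stages)
better_tv = overall({**stages, "target validation": 0.8})    # much better validation
print(f"baseline overall success: {baseline:.1%}")           # ~1.5%
print(f"with better validation:   {better_tv:.1%}")          # ~2.3%
```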

And what I'd really like is for people to avoid the "translational research fallacy", as I've called it. That's the (seemingly pervasive) idea that there are just all sorts of great ideas for new drugs and new targets just gathering dust on university shelves, waiting for some big drug company to get around to noticing them. That, unfortunately, does not seem to be true, but it's a tempting idea, and I worry that people are going to be unable to resist chasing after it.

But that said, where would be the best place for the academic money to go? I have a few nominees. If we're breaking things down by therapeutic area, one of the most intractable and underserved is central nervous system disease. I note that there's already talk of a funding crisis in this area (although that article is more focused on Europe). It may come as a surprise to people outside medical research, but we still have very little concrete knowledge of what goes on in the brain during depression, schizophrenia, and other illnesses. That, unfortunately, is not for lack of trying. Looked at from the other end, we know vastly more than we used to, but it's still nowhere near enough.

If we're looking at general translational platforms and ideas, then I would suggest trying to come up with solid small-organism models for phenotypic screening. A good phenotypic screen, where you run compounds past a living system to see which ones give you the effects you want, can be a wonderful thing, since it doesn't depend on you having to unravel all the biochemistry behind a disease process. (It can, in fact, reveal biochemistry that you never knew existed). But good screens of this type are rare, outside of the infectious disease area, and are tricky to validate. Everyone would love to have more of them - and if an academic lab can come up with one, then those folks can naturally have first crack at screening a compound collection past them.

More suggestions welcome in the comments - it looks like this is going to happen, so perhaps we can at least seed this newly plowed field with something that we'd like to see when it sprouts.

Comments (26) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development

May 31, 2011

Extreme Outsourcing

Posted by Derek

My local NPR station had this report on this morning, on one-person drug companies. Can't outsource much more than that!

Here are the two companies profiled: LipimetiX and Deuteria. The former is using helical peptides to affect lipoprotein clearance, and the latter is (as you'd guess) in the deuterated-drug game, which I've most recently blogged on here. (That one's run by Sheila DeWitt, who used to work down the hall from me in grad school 25 years ago). And there are several other outfits that they could have mentioned - some of them are not quite down to one person, but you can count the employees on your fingers. In all of these cases, everything is being contracted out.

There are downsides, of course. For one thing, these are, almost by necessity, single-drug companies. It's enough of a strain just getting one project through under those conditions, let alone running a whole portfolio. So the risk is higher, given the typical failure rates in this line of work. And you have to trust your contractors, naturally. That's a bit easier to do in the Boston area (and a few other places), since you can get a lot of work sourced locally. That doesn't make it as much of a Bargain, Bargain, Bargain as it might be overseas, but at least you can drop in and see how things are going.

Another thing the NPR piece didn't address was where these projects come from. Many of them, I'd guess, are abandoned efforts from other companies that still have some possibilities. Those and the up-from-academia ideas probably take care of the whole list, wouldn't you think? Has anyone heard of one of these virtual-company ideas where the lead compound came from some sort of outsourced screen? And is an outsourced screen even possible? Now there's a business idea. . .

Comments (24) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

May 26, 2011

Pfizer's Brave New Med-Chem World

Posted by Derek

OK, here's how I understand the way that medicinal chemistry now works at Pfizer. This system has been coming on for quite a while now, and I don't know if it's been fully rolled out in every therapeutic area yet, but this seems to be The Future According to Groton:

Most compounds, and most of the actual bench chemistry, are apparently going to be done at WuXi (or perhaps other contract houses?). Back here in the US, there will be a small group of experienced medicinal chemists at the bench, who will presumably be doing the stuff that can't be easily shipped out (time-critical, difficult chemistry, perhaps even IP-critical stuff, one wonders?). But these people are not, as far as I can tell, supposed to have ideas of their own.

No, ideas are for the Drug Designers, which is where the rest of Pfizer's remaining medicinal chemistry head count is to be found. These are the people who keep track of the SAR, decide what needs to be made next, and tell the folks in China to make it. It's presumably their call, what to send away for and what to do in-house, but one gets the sense that they're strongly encouraged to ship as much stuff out as possible. Cheaper that way, right? And it's not like there's a whole lot of stateside capacity, anyway, at this point.

What if someone working in the lab has (against all odds) their own thoughts about where the chemistry should go next? I presume that they're going to have to go and consult a Drug Designer, thereby to get the official laying-on of hands. That process will probably work smoothly in some cases, but not so smoothly in others, depending on the personalities involved.

So we have one group of chemists that are supposed to be all hands and no head, and one group that's supposed to be all head and no hands. And although that seems to me to be carrying specialization one crucial step too far, well, it apparently doesn't seem that way to Pfizer's management, and they're putting a lot of money down on their convictions.

And what about the whole WuXi/China angle? The bench chemists there are certainly used to keeping their heads down and taking orders, for better or worse, so that won't be any different. But running entire projects outsourced can be a tricky business. You can end up in a situation where you feel as if you're in a car that only allows you to move the steering wheel every twenty minutes or so. Ah, a package has arrived, a big bunch of analogs that aren't so relevant any more, but what the heck. And that last order has to be modified, and fast, because we just got the assay numbers back, and the PK of the para-substituted series now looks like it's not reproducing. And we're not sure if that nitrogen at the other end really needs to be modified any more at this point, but that's the chemistry that works, and we need to keep people busy over there, so another series of reductive aminations it is. . .

That's how I'm picturing it, anyway. It doesn't seem like a particularly attractive (or particularly efficient) picture to me, but it will at least appear to spend less money. What comes out the other end, though, we won't know for a few years. And who knows, someone may have changed their mind by then, anyway. . .

Comments (114) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Life in the Drug Labs

May 24, 2011

Maybe It Really Is That Hard?

Email This Entry

Posted by Derek

Here's an interesting note from the Wall Street Journal's Health Blog. I can't summarize it any better than they have:

"When former NIH head Elias Zerhouni ran the $30 billion federal research institute, he pushed for so-called translational research in which findings from basic lab research would be used to develop medicines and other applications that would help patients directly.

Now the head of R&D at French drug maker Sanofi, Zerhouni says that such “bench to bedside” research is more difficult than he thought."

And all across the industry, people are muttering "Do tell!" In fairness to Zerhouni, he was, in all likelihood, living in sort of a bubble at NIH. There probably weren't many people around him who'd ever actually done this sort of work, and unless you have, it's hard to picture just how tricky it is.

Zerhouni is now pushing what he calls an "open innovation" model for Sanofi-Aventis. The details of this are a bit hazy, but it involves:

". . .looking for new research and ideas both internally and externally — for example, at universities and hospitals. In addition, the company is focusing on first understanding a disease and then figuring out what tools might be effective in treating it, rather than identifying a potential tool first and then looking for a disease area in which it could be helpful."

Well, I don't expect to see Sanofi's whole strategy laid out in the press, but that one doesn't sound as impressive as it's presumably meant to. The "first understanding a disease" part sounds like what Novartis has been saying for some time now - and honestly, it really is one of the things that we need, but that understanding is painfully slow to dawn. Look at, oh, Alzheimer's, to pick one of those huge unmet medical needs that we'd really like to address in this business.

With a lot of these things, if you're going to first really understand them, you could have a couple of decades' wait on your hands, and that's if things go well. More likely, you'll end up doing what we've been doing: taking your best shot with what's known at the moment and hoping that you got something right. Which leads us to the success rates we have now.

On the other hand, maybe Zerhouni should just call up Marcia Angell or Donald Light, so that they can set him straight on the real costs of drug R&D. Why should we listen to a former head of the NIH who's now running a major industrial research department, when we can go to the folks who really know what they're talking about, right? And I'd also like to know what he thinks of Francis Collins' plan for a new NIH translational research institute, too, but we may not get to hear about that. . .

Comments (34) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development | Drug Industry History

May 17, 2011

Imperfect Pitch

Email This Entry

Posted by Derek

Venture capitalist Bruce Booth has moved his blog over to the Forbes network, and in his latest post he has some solid advice for people who are preparing to pitch him (and people like him) some ideas for a new company. It's very sensible stuff, including the need to bring as much solid data as you can possibly bring, not to spend too much time talking about how great everyone on your team is, and not to set off the hype detectors. (Believe it, everyone who's dealt with early-stage biotech and pharma has a very sensitive, broad-spectrum hype detector, and the "off" switch stopped working a long time ago).

He also has some advice that might surprise people who haven't been watching the startup industry over the last few years: "Unless you are really convinced you have a special story that Wall Street will love, please don’t use that three-letter word synonymous with so much value destruction: I-P-O." That's the state of things these days, for better or worse - the preferred exit strategy is to do a good-sized deal with a larger company, and most likely to be bought outright.

And this is advice that I wish that more seminar speakers would follow, not just folks pitching a company proposal:

It's annoying when an entrepreneur touting a discovery-stage cancer program has multiple slides on how big the market is for cancer drugs, what the sales of Avastin were last year, what the annual incidence of the big four cancers are, etc… These slides give me a huge urge to reach for my Blackberry. We know cancer is huge. Unless you’ve got a particular angle on a disease or market that’s unique or unappreciated, don’t bother wasting time on the macro metrics of these diseases, especially when you’re in drug discovery.

Yes indeed, and that goes for anyone who's talking outside the range of their expertise. If you're giving a talk, it should be on something that you know a lot about - more than your audience, right? So why do we have to sit through so many chemists talking about molecular biology, molecular biologists talking about market size, and so on? My rule on that stuff is to hold it down to one slide if possible, and to skip through it lightly even then. I've even seen candidates come in for an interview and spend precious time, time that could be spent showing what they can do and why they should be hired, on telling everyone things that they already know and don't care to hear again.

Comments (24) + TrackBacks (0) | Category: Business and Markets | Drug Development | How To Get a Pharma Job

May 13, 2011

Process Chemistry Makes the Headlines

Email This Entry

Posted by Derek

Not a common occurrence, that. But this Wall Street Journal article goes into details on some efforts to improve the synthetic route to Viread (tenofovir) (or, to be more specific, TDF, the prodrug form of it, which is how it's dosed). This is being funded by former president Bill Clinton's health care foundation:

The chasm between the need for the drugs and the available funding has spurred wide-ranging efforts to bring down the cost of antiretrovirals, from persuading drug makers to share patents of antiretrovirals to conducting trials using lower doses of existing drugs.

Beginning in 2005, the Clinton team saw a possible path in the laboratory to lowering the price of the drugs. Mr. Clinton's foundation had brokered discounts on first-line AIDS drugs, many of which were older and used relatively simple chemistry. Newer drugs, with advantages such as fewer side effects, were more complex and costly to make. . .A particularly difficult step in the manufacture of the antiretroviral drug tenofovir comes near the end. The mixture at that point is "like oatmeal, making it very difficult to stir," explained Prof. Fortunak. That slows the next reaction, a problem because the substance that will become the drug is highly unstable and decomposing, sharply lowering the yield.

Fortunak himself is a former Abbott researcher, now at Howard University. One of his students does seem to have improved that step, thinning out the reaction mixture (which was gunking up with triethylammonium salts) and improving the stability of the compound in it. (Here's the publication on this work, which highlights that step, the formation of a phosphate ester, greatly enhanced by the addition of tetrabutylammonium bromide). This review has more on production of TDF and other antiretrovirals.

This is a pure, 100% real-world process chemistry problem, as the readers here who do it for a living will confirm, and it's very nice to see this kind of work get the publicity that it deserves. People who've never synthesized or (especially) manufactured a drug generally don't realize what a tricky business it can be. The chemistry has to work on large scale (above all!), and do so reproducibly, hitting the mark every time using the least hazardous reagents possible, which have to be reliably sourced at reasonable prices. And physically, the route has to avoid extremes of temperature or pressure, with mixtures that can be stirred, pumped from reactor to reactor, filtered, and purified without recourse to the expensive techniques that those of us in the discovery labs use routinely. Oh, and the whole process has to produce the least objectionable waste stream that you can come up with, too, in case you've got all those other factors worked out already. Not an easy problem, in most cases, and I wish that some of those people who think that drug companies don't do any research of their own would come down and see how it's done.

To give you an example of these problems, the paper on this tenofovir work mentions that the phosphate alkylation seems to work best with magnesium t-butoxide, but that the yield varies from batch to batch, depending on the supplier. And in the workup to that reaction, you can lose product in the cake of magnesium salts that have to be filtered out, a problem that needs attention on scale.

According to the article, an Indian generic company is using the Howard route for tenofovir that's being sold in South Africa. (Tenofovir is not under patent protection in India). Interestingly, two of the big generic outfits (Mylan and Cipla) say that they'd already made their own improvements to the process, but the question of why that didn't bring down the price already is not explored. Did the Clinton foundation improve a published Gilead route that someone else had already fixed? Cipla apparently does the same phosphate alkylation (PDF), but the only patent filing of theirs that I can find that addresses tenofovir production is this one, on its crystalline form. Trade secret?

Comments (21) + TrackBacks (0) | Category: Chemical News | Drug Development | Drug Prices | Infectious Diseases

May 9, 2011

What Medicinal Chemists Really Make

Email This Entry

Posted by Derek

Chemists who don't (or don't yet) work in drug discovery often wonder just what sort of chemistry we do over here. There are a lot of jokes about methyl-ethyl-butyl-futile, which have a bit of an edge to them for people just coming out of a big-deal total synthesis group in academia. They wonder if they're really setting themselves up for a yawn-inducing lab career of Suzuki couplings and amide formation, gradually becoming leery of anything that takes more than three steps to make.

Well, now there's some hard data on that topic. The authors took the combined publication output from their company, Pfizer, and GSK, as published in the Journal of Medicinal Chemistry, Bioorganic & Medicinal Chemistry Letters, and Bioorganic & Medicinal Chemistry, starting in 2008. And they analyzed this set for what kinds of reactions were used, how long the synthetic routes were, and what kinds of compounds were produced. Their motivation?

. . .discussions with other chemists have revealed that many of our drug discovery colleagues outside the synthetic community perceive our syntheses to consist of typically six steps, predominantly composed of amine deprotections to facilitate amide formation reactions and Suzuki couplings to produce biaryl derivatives. These “typical” syntheses invariably result in large, flat, achiral derivatives destined for screening cascades. We believed these statements to be misconceptions, or at the very least exaggerations, but noted there was little if any hard evidence in the literature to support our case.

Six steps? You must really want those compounds, eh? At any rate, their data set ended up with about 7300 reactions and about 3600 compounds. And some clear trends showed up. For example, nearly half the reactions involved forming carbon-heteroatom bonds, with half of those (22% of the total) being acylations, mostly amide formation. But only about one tenth of the reactions were C-C bond-forming steps (40% of those were Suzuki-style couplings and 18% were Sonogashira reactions). One-fifth were protecting-group manipulations (almost entirely on COOH and amine groups), eight percent were heterocycle formations, and everything else was well down in the single digits.

There are some interesting trends in those other reactions, though. Reduction reactions are much more common than oxidations - the frequency of nitro-to-amine reductions is one factor behind that, followed by reductions of other groups down to amines (few of these are typically run in the other direction). Among the oxidations, alcohol-to-aldehyde is the favorite. Outside of changes in oxidation state, alcohol-to-halide is the most common functional-group transformation, followed by acid to acid chloride, both of which make sense given their reactivity in later steps.

Overall, the single biggest reaction is. . .N-acylation to an amide. So that part of the stereotype is true. At the bottom of the list, with only one reaction apiece, were N-alkylation of an aniline, benzylic/allylic oxidation, and alkene oxidation. Sulfonation, nitration, and the Heck reaction were just barely represented as well.

Analyzing the compounds instead of the reactions, they found that 99% of the compounds contained at least one aromatic ring (with almost 40% showing an aryl-aryl linkage) and over half have an amide - totals that aren't going to do much to dispel the stereotypes, either. The most popular heteroaromatic ring is pyridine, followed by pyrimidine and then the most popular of the five-membered ones, pyrazole. 43% have an aliphatic amine, which I can well believe (in fact, I'm surprised that it's not even higher). Most of those are tertiary amines, and the most-represented of those are pyrrolidines, followed closely by piperazines.

In other functionality, about a third of the compounds have at least one fluorine atom in them, and 30% have an aryl chloride. In contrast to the amides, only about 10% of the compounds have sulfonamides. 35% have an aryl ether (mostly methoxy), 10% have an aliphatic alcohol (versus only 5% with a phenol). The least-represented functional groups (of the ones that show up at all!) are carbonate, sulfoxide, alkyl chloride, and aryl nitro, followed by amidines and thiols. There's not a single alkyl bromide or aliphatic nitro in the bunch.

The last part of the paper looks at synthetic complexity. About 3000 of the compounds were part of traceable synthetic schemes, and most of these were three or four steps long. (The distribution has a pretty long tail, though, going out past 10 steps). Molecular weights tend to peak between 350 and 550, and clogP peaks at around 3.5 to 5. These all sound pretty plausible to me.
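For the curious, the bookkeeping behind numbers like these is nothing exotic: classify each reaction, count the classes, and count the steps per route. A minimal Python sketch of that sort of tally might look like the following (the records and field names are invented placeholders for illustration, not the authors' actual dataset or code):

from collections import Counter

# Invented example records; a real analysis would have thousands of these.
reactions = [
    {"route": "A1", "step": 1, "rxn_class": "N-acylation (amide)"},
    {"route": "A1", "step": 2, "rxn_class": "Suzuki coupling"},
    {"route": "B7", "step": 1, "rxn_class": "Boc deprotection"},
    {"route": "B7", "step": 2, "rxn_class": "Reductive amination"},
    {"route": "B7", "step": 3, "rxn_class": "N-acylation (amide)"},
]

# Frequency of each reaction class, as a percentage of all reactions.
class_counts = Counter(r["rxn_class"] for r in reactions)
total = sum(class_counts.values())
for cls, n in class_counts.most_common():
    print(f"{cls}: {100 * n / total:.0f}%")

# Route length = number of recorded steps per route (the three-to-four-step peak above).
route_lengths = sorted(Counter(r["route"] for r in reactions).values())
print("median route length:", route_lengths[len(route_lengths) // 2])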

Now that we've got a reasonable med-chem snapshot, though, what does it tell us? I'm going to use a whole different post to go into that, but I think that my take-away was that, for the most part, we have a pretty accurate mental picture of the sorts of compounds we make. But is that a good picture, or not?

Comments (24) + TrackBacks (0) | Category: Chemical News | Drug Development | Life in the Drug Labs | The Scientific Literature

May 5, 2011

Translation Needed

Email This Entry

Posted by Derek

The "Opinionator" blog at the New York Times is trying here, but there's something not quite right. David Bornstein, in fact, gets off on the wrong foot entirely with this opening:

Consider two numbers: 800,000 and 21.

The first is the number of medical research papers that were published in 2008. The second is the number of new drugs that were approved by the Food and Drug Administration last year.

That’s an ocean of research producing treatments by the drop. Indeed, in recent decades, one of the most sobering realities in the field of biomedical research has been the fact that, despite significant increases in funding — as well as extraordinary advances in things like genomics, computerized molecular modeling, and drug screening and synthesization — the number of new treatments for illnesses that make it to market each year has flatlined at historically low levels.

Now, "synthesization" appears to be a new word, and it's not one that we've been waiting for, either. "Synthesis" is what we call it in the labs; I've never heard of synthesization in my life, and hope never to again. That's a minor point, perhaps, but it's an immediate giveaway that this piece is being written by someone who knows nothing about their chosen topic. How far would you keep reading an article that talked about mental health and psychosization? A sermon on the Book of Genesization? Right.

The point about drug approvals being flat is correct, of course, although not exactly news by now. But comparing it to the total number of medical papers published that same year is bizarre. Many of these papers have no bearing on the discovery of drugs, not even potentially. Even if you wanted to make such a comparison, you'd want to run the clock back at least twelve years to find the papers that might have influenced the current crop of drug approvals. All in all, it's a lurching start.

Things pick up a bit when Bornstein starts focusing on the Myelin Repair Foundation as an example of current ways to change drug discovery. (Perhaps that's just because he starts directly relaying information that he's been given?) The MRF is an interesting organization that's obviously working on a very tough problem - having tried to make neurons grow and repair themselves more than once in my career, I can testify that it's most definitely nontrivial. And the article tries to make a big distinction between the way that they're funding research and the "traditional NIH way".

The primary mechanism for getting funding for biomedical research is to write a grant proposal and submit it to the N.I.H. or a large foundation. Proposals are reviewed by scientists, who decide which ones are most likely to produce novel discoveries. Only a fraction get funded and there is little encouragement for investigators to coordinate research with other laboratories. Discoveries are kept quiet until they are published in peer-reviewed journals, so other scientists learn about them only after a delay of years. In theory, once findings are published, they will be picked up by pharmaceutical companies. In practice, that doesn’t happen nearly as often as it should.

Now we're back to what I'm starting to think of as the "translational research fallacy". I wrote about that here; it's the belief that there are all kinds of great ideas and leads in drug discovery that are sitting on the shelf, because no one in the industry has bothered to take a look. And while it's true that some things do slip past, I'm really not sure that I can buy into this whole worldview. My belief is that many of these things are not as immediately actionable as their academic discoverers believe them to be, for one thing. (And as for the ones that clearly are, those are worth starting a company around, right?) There's also the problem that not all of these discoveries can even be reproduced.

Bornstein's article does get it right about this topic, though:

What’s missing? For a discovery to reach the threshold where a pharmaceutical company will move it forward what’s needed is called “translational” research — research that validates targets and reduces the risk. This involves things like replicating and standardizing studies, testing chemicals (potentially millions) against targets, and if something produces a desired reaction, modifying compounds or varying concentration levels to balance efficacy and safety (usually in rats). It is repetitive, time consuming work — often described as “grunt work.” It’s vital for developing cures, but it’s not the kind of research that will advance the career of a young scientist in a university setting.

“Pure science is what you’re rewarded for,” notes Dr. Barres. “That’s what you get promoted for. That’s what they give the Nobel Prizes for. And yet developing a drug is a hundred times harder than getting a Nobel Prize. . .

That kind of research is what a lot of us spend all our days doing, and there's plenty of work to fill them. As for developing a drug being harder than getting a Nobel Prize, well, apples and oranges, but there's something to it, still. The drug will cost you a lot more money along the way, but with the potential of making a lot more at the end. Bornstein's article goes off the rails again, though, when he says that companies are reluctant to go into this kind of work when someone else owns the IP rights. That's technically true, but overall, the Bayh-Dole Act on commercialization of academic research (despite complications) has brought many more discoveries to light than it's hindered, I'd say. And he's also off base about how this is the reason that drug companies make "me too" compounds. No, it's not because we don't have enough ideas to work on, unfortunately. It's because most of them (and more over the years) don't go anywhere.

Bornstein's going to do a follow-up piece focusing more on the Myelin Repair people, so I'll revisit the topic then. What I'm seeing so far is an earnest, well-meaning attempt to figure out what's going on with drug discovery - but it's not a topic that admits of many easy answers. That's a problem for journalists, and a problem for those of us who do it, too.

Comments (26) + TrackBacks (0) | Category: "Me Too" Drugs | Academia (vs. Industry) | Drug Development | Who Discovers and Why

April 27, 2011

Off the Beaten Track. Way, Way, Off.

Email This Entry

Posted by Derek

Now here's a structure that you don't see every day. A company called RadioRx is developing compounds as radiotherapy sensitizers for oncology, designed to release reactive free radicals and intensify the cell-killing effects of ionizing radiation. And these compounds are not from the usual sources. As they put it:

In collaboration with a major defense contractor, RadioRx is developing its first lead candidate, RRx-001, a best-in-class small molecule, adapted from an energetic solid rocket propellant. The development candidate is scheduled to enter first-in-man phase 1 clinical studies by Q1 2011.

I've been forwarded a report that this is the structure of their compound, which would make their defense-contractor partner Thiokol (the assignee where that compound appears in the patent literature). (Here's one of RadioRx's own patents in this area). And I truly have to salute these guys for going forward with such an out-there structure. Can anyone doubt that this is the first gem-dinitroazetidine to reach the clinic? And with a bromoamide on the other end of it, yet?
[Image: dinitro.png - the reported compound structure]
It's easy to look at something like this and mutter "Only in oncology", but at the same time, it takes some nerve and imagination to go forward with compounds this odd. I hope that they work - and I hope that everyone else looks at their own chemical matter and decides that hey, maybe there's more to life than Suzuki couplings and benzo-fused heterocycles.

Comments (27) + TrackBacks (0) | Category: Cancer | Drug Development

April 11, 2011

R&D Is For Losers?

Email This Entry

Posted by Derek

Now here's a piece that I'm looking for good reasons to dismiss. And I think its author, Jim Edwards, wouldn't mind some, too. You've probably heard that Valeant Pharmaceuticals is making a hostile offer for Cephalon, a company that's dealing with some pipeline/patent problems (and, not insignificantly, the recent death of their founder and CEO).

Valeant's CEO, very much alive, is making no secret of his business plan for Cephalon should he prevail: ditch R&D as quickly as possible:

“His approach isn’t one that most executives in the drug business take,” (analyst Timothy) Chiang said in telephone interview last week. “He’s even said in past presentations: ‘We’re not into high science R&D; we’re into making money.’ I think that’s why Valeant sort of trades in a league of its own.”

. . .Pearson’s strategy and viewpoint on research costs have been consistent. When he combined Valeant with drugmaker Biovail Corp. in September, he cut about 25 percent of the workforce, sliced research spending and established a performance-based pay model tied to Valeant’s market value.

“I recognize that many of you did not sign up for either this strategy or operating philosophy,” Pearson wrote in a letter to staff at the time. “Many of you may choose not to continue to work for the new Valeant.”

Valeant does, in fact, make plenty of money. But my first thought (and the first thought of many of you, no doubt) is that it's making money because other people are willing to do the R&D that they themselves are taking a pass on. In other words, there's room for a few Valeants in the industry, but you couldn't run the whole thing that way, because pretty soon there'd be nothing for those whip-cracking revenue-maximizing managers to sell. Would there?

But we don't have to go quite that far. Edwards, for his part, goes on to wonder (as many have) whether the drug industry should settle out into two groups: the people that do the R&D and the people that sell the drugs. This idea has been proposed as a matter of explicit government policy (a nonstarter), but short of that, has been kicked around many times. Most of the time, this scheme involves smaller companies doing the research, with the big ones turning into the regulatory/sales engines, but maybe not:

If you agree that there ought to be a division of labor in the pharma business — that some companies should develop drugs and then sell those products to the companies that have the salesforces to market them — then this says some interesting things about recent corporate strategy moves among the largest companies. Pfizer (PFE) is downsizing its R&D operations and Johnson & Johnson (JNJ) is said to be on the prowl for a ~$10 billion acquisition.

Merck, on the other hand, is doubling down on its own research and stopped giving Wall Street guidance in hopes of lessening the scrutiny paid to its R&D expense base.


The heralds of this restructuring of the industry haven't quite called it this way, but instead of splitting off from each other, perhaps the big companies will divide into two camps (Merck vs. Pfizer), and the smaller ones, too (Valeant vs. your typical small pharma). Prophecy's not an exact science - Marx thought that Germany and England would be the first countries to go Communist, you know.

For my part, I think that there are game-theory reasons why a big company won't explicitly renounce R&D. As it is, a big company can signal that "Yes, we'd like to do a deal for your drug (or your whole company), but you know, there are other things for us to do with the money if this doesn't work out." But if you're only inlicensing, then no, there aren't so many other things for you to do with the money. Everyone else can look around the industry and see what's available for you to buy, and thus the price of your deals goes up. You have no hidden cards from your internal R&D to play (or to at least pretend like you're holding). This signaling, by the way, is directed to the current and potential shareholders as well: "Buy our stock, because you never know what our brilliant people are going to come up with next". That's a more interesting come-on line than "Buy our stock. You never know who we're going to buy next." Isn't it?

And that's a separate question from the even bigger one of whether there are enough compounds out there to inlicense in the first place. No, I think that big companies will hold onto their own R&D in one form or another. But we'll see who's right.

Comments (47) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

March 31, 2011

Your Comments on the NIH's CNS Drug Program?

Email This Entry

Posted by Derek

After my post the other day on the NIH neurological disease effort, I heard from Rebecca Farkas there, who's leading the medicinal chemistry effort on the program. She's glad to get feedback from people in the industry, and in fact is inviting questions and comments on the whole program. Contact her at farkasr-at-ninds-dot-nih-dot-gov (perhaps putting the address in that form will give the spam filters at NIH a bit less to do than otherwise).

She also sends word that they'll be advertising soon for a Project Manager position for this effort, and is looking for suggestions on how to reach the right audience for a good selection of candidates. This post might help a bit, but she's interested in suggestions on where to advertise and who to contact for good leads.

Comments (2) + TrackBacks (0) | Category: Drug Development | The Central Nervous System

March 29, 2011

The NIH Goes For the Gusto

Email This Entry

Posted by Derek

Here's an interesting funding opportunity from NIH:

Recent advances in neuroscience offer unprecedented opportunities to discover new treatments for nervous system disorders. However, most promising compounds identified through basic research are not sufficiently drug-like for human testing. Before a new chemical entity can be tested in a clinical setting, it must undergo a process of chemical optimization to improve potency, selectivity, and drug-likeness, followed by pre-clinical safety testing to meet the standards set by the Food and Drug Administration (FDA) for clinical testing. These activities are largely the domain of the pharmaceutical industry and contract research organizations, and the necessary expertise and resources are not commonly available to academic researchers.

To enable drug development by the neuroscience community, the NIH Blueprint for Neuroscience Research is establishing a ‘virtual pharma’ network of contract service providers and consultants with extensive industry experience. This Funding Opportunity Announcement (FOA) is soliciting applications for U01 cooperative agreement awards from investigators with small molecule compounds that could be developed into clinical candidates within this network. This program intends to develop drugs from medicinal chemistry optimization through Phase I clinical testing and facilitate industry partnerships for their subsequent development. By initiating development of up to 20 new small-molecule compounds over two years (seven projects were launched in 2011), we anticipate that approximately four compounds will enter Phase 1 clinical trials within this program.

My first thought is that I'd like to e-mail that first paragraph to Marcia Angell and to all the people who keep telling me that NIH discovers most of the drugs on the market. (And as crazy as that sounds, I still keep running into people who are convinced that that's one of those established facts that Everyone Knows). My second thought is that this is worth doing, especially for targeting small or unusual diseases. There could well be interesting chemical matter or assay ideas floating around out there, looking for the proper environment to have something made of them.

My third thought, though, is that this could well end up being a real education for some of the participants. Four Phase I compounds out of twenty development candidates - it's hard to say if that's optimistic or not, because the criteria for something to be considered a development candidate can be slippery. And that goes for the drug industry too, I hasten to add. Different organizations have different ideas about what kinds of compounds are worth taking to the clinic, and those criteria vary by disease area, too. (Sad to say, they can also vary by time of the year and the degree to which bonuses are tied to hitting number-of-clinical-candidate goals, and anyone who's been around the business a while will have seen that happen, to their regret).

It'll be interesting to see how many people apply for this; the criteria look pretty steep to me:

Applicants must have available small-molecule compounds with strong evidence of disease-related activity and the potential for optimization through iterative medicinal chemistry. Applicants must also be able to conduct bioactivity and efficacy testing to assess compounds synthesized in the development process and provide all pre-clinical validation for the desired disease indication. . .This initiative is not intended to support development of new bioactivity assays, thus the applicant must have in hand well-characterized assays and models.

Hey, there are small companies out there that don't come up to that standard. To clarify, though, the document does say that "Evaluation of the approach should focus primarily on the rationale and strengths/weaknesses of proposed bioactivity studies and compound "druggability," since all other drug development work (e.g., medicinal chemistry, PK/tox, phase I clinical testing) will be designed and implemented by NIH-provided consultants and contractors after award", which must come as something of a relief.

What's interesting to me, though, is that the earlier version of this RFA (from last year) had the following language:

The ultimate goals of this Neurotherapeutics Grand Challenge are to produce at least one novel and effective drug for a nervous system disorder that is currently poorly treated and to catalyze industry interest in novel disease targets by demonstrating early-stage success.

That's missing this time around, which is a good thing. If they're really hoping for a drug to come out of four Phase I candidates in poorly-treated CNS disorders, then I'd advise them to keep that thought well hidden. The overall attrition rate in the clinic in CNS is somewhere around (and maybe north of) 90%, and if you're going to go after the tough end of that field it's going to be even steeper.

Comments (7) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development | The Central Nervous System

March 28, 2011

Value in Structure?

Email This Entry

Posted by Derek

A friend on the computational/structural side of the business sent along this article from Nature Reviews Drug Discovery. The authors are looking through the Thomson database at drug targets that are the subject of active research in the industry, and comparing the ones that have structural information available to the ones that don't: enzyme targets (with high-resolution structures) and GPCRs (without them). They're trying to see if structural data is worth enough to show up in the success rates (and thus the valuations) of the resulting projects.

Overall, the Thomson database has over a thousand projects in it from these two groups, a bit over 600 from the structure-enabled enzymes and just under 500 GPCR projects. What they found was that 70% of the projects in the GPCR category were listed as "suspended" or "discontinued", but only 44% of the enzyme projects were so listed. In order to correct for probability of success across different targets, the authors picked ten targets from each group that have led, overall, to similar numbers of launched drugs. Looking at the progress of the two groups, the structure-enabled projects are again lower in the "stopped" categories, with corresponding increases in discovery and the various clinical phases.
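The headline comparison itself is just a stopped-fraction calculation per target class. A quick sketch, with made-up project records standing in for the Thomson data, would be something like this:

from collections import defaultdict

# Invented example records, not the actual Thomson Reuters dataset.
projects = [
    {"target_class": "enzyme (structure available)", "status": "discontinued"},
    {"target_class": "enzyme (structure available)", "status": "phase II"},
    {"target_class": "GPCR (no structure)", "status": "suspended"},
    {"target_class": "GPCR (no structure)", "status": "discontinued"},
    {"target_class": "GPCR (no structure)", "status": "discovery"},
]

stopped = {"suspended", "discontinued"}
tally = defaultdict(lambda: [0, 0])   # class -> [stopped count, total count]
for p in projects:
    tally[p["target_class"]][0] += p["status"] in stopped
    tally[p["target_class"]][1] += 1

for cls, (n_stopped, n_total) in tally.items():
    print(f"{cls}: {100 * n_stopped / n_total:.0f}% suspended or discontinued")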

You have to go to the supplementary info for the targets themselves, but here they are: for the enzymes, it's DPP-IV, BCR-ABL, HER2 kinase, renin, Factor Xa, HDAC, HIV integrase, JAK2, Hep C protease, and cathepsin K. For the receptor projects, the list is endothelin A receptor, P2Y12, CXCR4, angiotensin II receptor, sphingosine-1-phosphate receptor, NK1, muscarinic M1, vasopressin V2, melatonin receptor, and adenosine A2A.

Looking over these, though, I think that the situation is more complicated than the authors have presented. For example, DPP-IV has good structural information now, but that's not how people got into the area. The cyanopyrrolidine class of inhibitors, which really jump-started the field, were made by analogy to a reported class of prolyl endopeptidase inhibitors (BOMCL 1996, p. 1163). Three years later, the most well-characterized Novartis compound in the series was being studied by classic enzymology techniques, because it still wasn't possible to say just how it was binding. But even more to the point, this is a well-trodden area now. Any DPP-IV project that's going on now is piggybacking not only on structural information, but on an awful lot of known SAR and toxicology.

And look at renin. That's been a target forever, structure or not. And it's safe to say that it wasn't lack of structural information that was holding the area back, nor was it the presence of it that finally got a compound through the clinic. You can say the same things about Factor Xa. The target was validated by naturally occurring peptides, and developed in various series by classical SAR. The X-ray structure of one of the first solid drug candidates in the area (rivaroxaban) bound to its target came after the compound had been identified and the SAR had been optimized. Factor Xa efforts going on now are also standing on the shoulders of an awful lot of work.

In the case of histone deacetylase, the first launched drug in that category (SAHA, vorinostat) had already been identified before any sort of X-ray structure was available. Overall, that target is an interesting addition to the list, since there are actually a whole series of them, some of which have structural information and some of which don't. The big difficulty in that area is that we don't really know what the various roles of the different isoforms are, and thus how the profiles of different compounds might translate to the clinic, so I wouldn't say that structural data is helping with the rate-determining steps in the field.

On the receptor side, I also wouldn't say that it's lack of structural information that's necessarily holding things back in all of those cases, either. Take muscarinic M1 - muscarinic ligands have been known for a zillion years. That encompasses fairly selective antagonists, and hardly-selective-at-all agonists, so I'm not sure which class the authors intended. If they're talking about antagonists, then there are plenty already known. And if they're talking about agonists, I doubt that even detailed structural information would help, given the size of the native ligand (acetylcholine).

And the vasopressin V2 case is similar to some of the enzyme ones, in that there's already an approved drug in this category (tolvaptan), with several others in the same structural class chasing it. Then you have the adenosine A2A field, where long lists of agonists and antagonists have been found over the years, structure or not. The problem there has been finding a clinical use for them; all sorts of indications have been chased over the years, a problem that structural information would not have helped with in the least.

Now, it's true that there are projects in these categories where structure has helped out quite a bit, and it's also true that detailed GPCR structures would be welcome (and are slowly coming along, for that matter). I'm not denying either of those. But what does strike me is that there are so many confounding variables in this particular comparison, especially among the specific targets that are the subject of the article's featured graphic, that I just don't think that its conclusions follow.

Comments (32) + TrackBacks (0) | Category: Drug Development | Drug Industry History | In Silico

March 24, 2011

More on KV and Makena's Pricing

Email This Entry

Posted by Derek

I wanted to do some follow-up on the Makena story - the longtime progesterone ester drug that has now been newly FDA-approved and newly made two orders of magnitude more expensive. (That earlier post has the details, for those who might not have been following).

Steve Usdin at BioCentury has, in the newsletter's March 21st issue, gone into some more detail about the whole process where KV Pharmaceuticals stepped in under the Orphan Drug Act to pick up exclusive marketing rights to the drug. The company, he says, "arguably has played a marginal role" in getting the drug back onto the market.

Here's the timeline, from that article and some digging around of my own: in 1956, Squibb got FDA approval for the exact compound (hydroxyprogesterone caproate) for the exact indication (preventing preterm labor), under the brand name Delalutin. But at that time, the FDA didn't require proof of efficacy, just safety. There were several small, inconclusive academic studies during the 1960s. In 1971, the FDA noted that the drug was effective for abnormal uterine bleeding and other indications, and was "probably effective" for preventing preterm delivery. In 1973, though, based on further data from the company, the agency went back on that statement, said that there was now evidence of birth defects from the use of Delalutin in pregnant women, and removed these as approved uses. In the late 1970s, further warning language was added. In 1989, the agency said that its earlier concerns (heart and limb defects) were unfounded, but warned of others. By 1999, the FDA had concluded that progesterone drugs were too varied in their effects to be covered under a single set of warnings, and took the warning labels off.

In 1998, the National Institute of Child Health and Human Development launched a larger, controlled study, but this was an example of bad coordination all the way. By this time, Bristol-Myers Squibb had requested that Delalutin's NDAs be revoked, saying that they hadn't even sold the compound for several years. This seems to have also been a move, though, in response to FDA complaints about earlier violations of manufacturing guidelines and a request to recall the outstanding stocks of the drug. So the NICHD study was terminated after a year, with no results, and the drug's NDA was revoked as of September, 2000.

The NICHD had started another study by then, however, although I'm not sure how they solved their supply problems. This is the one that reported data in 2003, and showed a real statistical benefit for preterm labor. More physicians began to prescribe the drug, and in 2008, the American College of Obstetricians and Gynecologists recommended its use.

So much for the medical efficacy side of the story. Now we get back to the regulatory and marketing end of things. In March of 2006, a company called CUSTOpharm asked the FDA to determine if the drug had been withdrawn for reasons of safety or efficacy - basically, was it something that could be resubmitted as an ANDA? The agency determined that the compound was so eligible.

Meanwhile, another company called Adeza Biomedical was moving in the same direction (as far as I can tell, they and CUSTOpharm had nothing to do with each other, but I don't have all the details). Adeza submitted an NDA in July 2006, under the FDA's provision for using data that the applicant itself had not generated - in fact, they used the NICHD study results. They called the compound Gestiva, and asked for accelerated approval, since preterm delivery was accepted as a surrogate for infant mortality. An advisory committee recommended this in August of 2006, by a 12 to 9 vote. (Scroll down to the bottom of this page for the details).

The agency sent Adeza an "approvable" letter in October 2006 which asked for more animal studies. The next year, Adeza was bought by Cytyc, which was bought by Hologic, which sold the Gestiva rights to KV Pharmaceuticals in January 2008. So that's how KV enters the story: they bought the drug program from someone who bought it from someone who just used a government agency's clinical data.

The NDA was approved by the FDA in February 2011, along with a name change to Makena. By this time, KV and Hologic had modified their agreement - KV had already paid up nearly $80 million, with another $12.5 million due with the approval, and has further payments to make to Hologic which would take the total purchase price up to nearly $200 million. That's been their main expense for the drug, by far. The FDA has asked them to continue two ongoing studies of Makena - one placebo-controlled trial to look at neonatal mortality and morbidity, and one observational study to see if there are any later developmental effects. Those studies will report in late 2016, and KV has said that their costs will be in the "tens of millions". So they paid more for the rights to Makena than it's costing them to get it studied in the clinic.

That only makes sense if they can charge a lot more than the drug's old generic price, of course, and that's what takes us up to today, with the uproar over the company's proposed price tag of $1500 per treatment. But the St. Louis Post-Dispatch (thanks to FiercePharma for the link) says that the company has now filed its latest 10-Q with the SEC, and is notifying investors that its pricing plans are in doubt:

The success of the Company’s commercialization of Makena™ is dependent upon a number of factors, including: (i) the Company’s ability to maintain certain net pricing levels for Makena™; (ii) successfully obtaining agreements for coverage and reimbursement rates on behalf of patients and medical practitioners prescribing Makena™ with third-party payors, including government authorities, private health insurers and other organizations, such as HMOs, insurance companies, and Medicaid programs and administrators, and (iii) the extent to which pharmaceutical compounders continue to produce non-FDA approved purported substitute product. The Company has been criticized regarding the list pricing of Makena™ in a number of news articles and internet postings. In addition, the Company has received, and expects to continue to receive, letters criticizing the Company’s list pricing of Makena™ from several medical practitioners and several advocacy groups, including the March of Dimes, American College of Obstetricians and Gynecologists, American Academy of Pediatrics and the Society for Maternal Fetal Medicine. Further, the Company has received one letter from a United States Senator and expect to receive another letter from a number of members of the United States Congress asking the Company to reduce its indicated pricing of Makena™, and the same Senator, together with a second Senator, has sent a letter to the Federal Trade Commission asking the agency to initiate an investigation of our pricing of Makena™.

The Company is responding to these criticisms and events in a number of respects. . .The success of the Company is largely dependent upon these efforts and appropriately responding to both the media and governmental concerns regarding the pricing of Makena™.

Personally, I'm torn a bit by the whole situation. I think that people and companies have the right to charge what the market will bear for their goods and services. But at the same time, I find myself also very irritated by KV in this case, because I truly think that they are taking advantage of the regulatory framework. As I said in the last post, it's not like they took on much risk here - they didn't discover this drug, didn't do the key clinical work on it, and don't even manufacture it themselves. Their business plan involves sitting back and collecting the rent, but that's what the law allows them to do.

In the end, if political pressure forces them to back down on their pricing, this will come down to a poor business decision. Companies should, in fact, charge what the market will bear - but KV may have neglected some other factors when they calculated what that price should be. Before setting a price, you should ask "Will the insurance companies pay?" and "Will Medicare pay?" and "Will people pay out of their own pocket?", but you should also ask "Will this price bring down so much controversy that we won't be able to make it stick?"

Comments (17) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Prices | Regulatory Affairs | Why Everyone Loves Us

March 11, 2011

Makena's Price: What to Do?

Email This Entry

Posted by Derek

The situation with KV Pharmaceuticals and the premature birth therapy Makena has been all over the news in the last couple of days. Briefly, Makena is an injectable progesterone formulation, given to women at risk of delivering prematurely. It went off the market in the early 1990s, because of side effect concerns and worries about overall efficacy, but since 2003 it's made an off-label comeback, thanks largely to a study at Wake Forest. This seemed to tip the risk/benefit ratio over to the favorable side.

Comes now the FDA and the provisions for orphan drugs. There is an official program offering market exclusivity to companies that are willing to take up such non-approved therapies and give them the full clinical and regulatory treatment. The idea, which is well-intentioned, as so many ideas are, was to bring these things in from the cold and give them more medical, scientific, and legal standing as things that had been through the whole review process. And that's what KV did. But this system says nothing about what the price of the drug will be during the years of exclusivity, in the same way that the approval process for new drugs says nothing about what their price will be when they come to market.

KV has decided that the price will now be about $1500 per patient, as opposed to about $15 before under the off-label regime. The reaction has been exactly what one would expect, and why not? Here, then are some thoughts:

Unfortunately, this should not have come as a surprise. It seems to have, though. The news stories are full of quotes from patients, doctors, and insurance companies saying that they never saw this coming. Look, though, at what happened recently with colchicine. Same situation. Same price jump. Same outrage, understandably. As long as these same incentives exist, any no-name generic company that comes along to adopt an old therapy and bring it into the modern regulatory regime can be assumed to be planning to run the price up to what they think the market will bear. That's why they're going to the trouble.

KV seems to have guessed correctly about the price. You wouldn't think so, with a hundred-fold increase. And the news stories, as I say, are full of (understandably) angry quotes from people at the insurance companies who will now be asked to pay. But (as that NPR link in the first paragraph says), Aetna, outraged or not, is going to pony up. It's going to cost them $20 to $30 million per year, most of which is going to go directly to KV's bottom line, but they're going to pay. And the other big health insurance providers seem to be doing the same. Meanwhile, the company has announced a program to provide low-cost treatment to people without insurance. From what I can see, it looks like basically everyone who had access to the drug before will have it now, the main difference being that the payers with deeper pockets will now be getting hammered on by KV. This is not a nice way t