About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe

In the Pipeline

Category Archives

April 7, 2014

Outsourcing Everything

Posted by Derek

Here's an article in Drug Discovery Today on "virtual pharmaceutical companies", and people who've been around the industry for some years must be stifling yawns already. That idea has been around a long time. The authors here defined a "VPC" as one that has a small managerial core, and outsources almost everything else:

The goal of a VPC is to reach fast proof of concept (PoC) at modest cost, which is enabled by the lack of expensive corporate infrastructure to be used for the project and by foregoing activities, such as synthesis optimization, which are unnecessary for the demonstration of PoC. . .The term ‘virtual’ refers to the business model of such a company based on the managerial core, which coordinates all activities with external providers, and on the lack of internal production or development facilities, rather than to the usage of the internet or electronic communication. Any service provider available on the market can be chosen for a project, because almost no internal investments in fixed assets are made.

And by necessity, such a company lives only to make deals with a bigger (non-virtual) company, one that can actually do the clinical trials, manufacturing, regulatory, sales and so on. There's another necessity - such a company has to get pretty nice chemical matter pretty quickly, it seems to me, in order to have something to develop. The longer you go digging through different chemical series and funny-looking SAR, all while doing it with outsourced chemistry and biology, the worse off you're going to be. If things are straightforward, it could work - but when things are straightforward, a lot of stuff can work. The point of having your own scientists (well, one big point) is for them to be able to react in real time to data and make their own decisions on where to go next. The better outsourcing people can do some of that, too, but their costs are not that big a savings, for that very reason. And it's never going to be as nimble as having your own researchers in-house. (If your own people aren't any more nimble than lower-priced contract workers, you have a different problem).

The people actually doing the managing have to be rather competent, too:

All these points suggest that the know-how and abilities of the members of the core management team are central to the success of a VPC, because they are the only ones with the full in-depth knowledge concerning the project. The managers must have strong industrial and academic networks, be decisive and unafraid to pull the plug on unpromising projects. They further need extensive expertise in drug development and clinical trial conduction, proven leadership and project management skills, entrepreneurial spirit and proficiency in handling suppliers. Of course, the crucial dependency on the skills of every single team member leaves little room for mistakes or incompetency, and the survival of a VPC might be endangered if one of its core members resigns unexpectedly

I think that the authors wanted to say "incompetence" rather than "incompetency" up there, but I believe that they're all native German speakers, so no problem. If that had come from some US-based consultants, I would have put it down to the same mental habit that makes people say "utilized" instead of "used". But the point is a good one: the smaller the organization, the less room there is to hide. A really large company can hold (and indeed, tends to accumulate) plenty of people who need the cover.

The paper goes on to detail several different ways that a VPC can work with a larger company. One of the ones I'm most curious about is the example furnished by Chorus and Eli Lilly. Chorus was founded from within Lilly as a do-everything-by-outsourcing team, and over the years, Lilly's made a number of glowing statements about how well they've worked out. I have, of course, no inside knowledge on the subject, but at the same time, many other large companies seem to have passed on the opportunity to do the same thing.

I continue to see the "VPC" model as a real option, but only in special situations. When there's a leg up on the chemistry and/or biology (a program abandoned by a larger company for business reasons, an older compound repurposed), then I think it can work. Trying it completely from the ground up, though, seems problematic to me - although that could be because I've always worked in companies with in-house research. And it's true that even the stuff that's going on right down the hall doesn't work out all that often. One response to that is to say "Well, then, why not do the same thing more cheaply?" But another response is "If the odds are bad with your own people under your own roof, what are they when you contract everything out?"

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Development

March 25, 2014

A New Way to Study Hepatotoxicity

Posted by Derek

Every medicinal chemist fears and respects the liver. That's where our drugs go to die, or at least to be severely tested by that organ's array of powerful metabolizing enzymes. Getting a read on a drug candidate's hepatic stability is a crucial part of drug development, but there's an even bigger prize out there: predicting outright liver toxicity. That, when it happens, is very bad news indeed, and can torpedo a clinical compound that seemed to be doing just fine - up until then.

Unfortunately, getting a handle on liver tox has been difficult, even with such strong motivation. It's a tough problem. And given that most drugs are not hepatotoxic, most of the time, any new assay that overpredicts liver tox might be even worse than no assay at all. There's a paper in the latest Nature Biotechnology, though, that looks promising.

What the authors (from Stanford and Toronto) are doing is trying to step back to the early mechanism of liver damage. One hypothesis has been that the production of reactive oxygen species (ROS) inside hepatic cells is the initial signal of trouble. ROS are known to damage biomolecules, of course. But more subtly, they're also known to be involved in a number of pathways used to sense that cellular damage (and in that capacity, seem to be key players in inducing the beneficial effects of exercise, among other things). Aerobic cells have had to deal with the downsides of oxygen for so long that they've learned to make the most of it.
[Image: isoniazid]
This work (building on some previous studies from the same group) uses polymeric nanoparticles. They're semiconductors, and hooked up to be part of a fluorescence or chemiluminescence readout. (They use FRET for peroxynitrite and hypochlorite detection, more indicative of mitochondrial toxicity, and CRET for hydrogen peroxide, more indicative of Phase I metabolic toxicity). The particles are galactosylated to send them towards the liver cells in vivo, confirmed by necropsy and by confocal imaging. The assay system seemed to work well by itself, and in mouse serum, so they dosed it into mice and looked for what happened when the animals were given toxic doses of either acetaminophen or isoniazid (both well-known hepatotox compounds at high levels). And it seems to work pretty well - they could image both the fluorescence and the chemiluminescence across a time course, and the dose/responses make sense. It looks like they're picking up nanomolar to micromolar levels of reactive species. They could also show the expected rescue of the acetaminophen toxicity with some known agents (like GSH), but could also see differences between them, both in the magnitude of the effects and in their time courses.

The chemiluminescent detection has been done before, as has the FRET one, but this one seems to be more convenient to dose, and having both ROS detection systems going at once is nice, too. One hopes that this sort of thing really can provide a way to get a solid in vivo read on hepatotoxicity, because we sure need one. Toxicologists tend to be a conservative bunch, with good reason, so don't look for this to revolutionize the field by the end of the year or anything. But there's a lot of promise here.

There are some things to look out for, though. For one, since these are necessarily being done in rodents, there will be differences in metabolism that will have to be taken into account, and some of those can be rather large. Not everything that injures a mouse liver will do so in humans, and vice versa. It's worth remembering that hepatotoxicity is also a major problem with marketed drugs. That's going to be a much tougher problem to deal with, because some of these cases are due to overdose, some to drug-drug interactions, some to drug-alcohol interactions, and some to factors that no one's been able to pin down. One hopes, though, that if more drugs come through showing a clean liver profile, these problems might ameliorate a bit.

Comments (13) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics | Toxicology

March 20, 2014

Small Molecule Chemistry's "Limited Utility"?

Posted by Derek

Over at LifeSciVC, guest blogger Jonathan Montagu talks about small molecules in drug discovery, and how we might move beyond them. Many of the themes he hits have come up around here, understandably - figuring out why (and how) some huge molecules manage to have good PK properties, exploiting "natural-product-like" chemical space (again, if we can figure out a good way to do that), working with unusual mechanisms (allosteric sites, covalent inhibitors and probes), and so on. Well worth a read, even if he's more sanguine about structure-based drug discovery than I am. Most people are, come to think of it.

His take is very similar to what I've been telling people in my "state of drug discovery" presentations (at Illinois, most recently) - that we medicinal chemists need to stretch our definitions and move into biomolecule/small molecule hybrids and the like. These things need the techniques of organic chemistry, and we should be the people supplying them. Montagu goes even further than I do, saying that ". . .I believe that small molecule chemistry, as traditionally defined and practiced, has limited utility in today’s world." That may or may not be correct at the moment, but I'm willing to bet that it's going to become more and more correct in the future. We should plan accordingly.

Comments (31) + TrackBacks (0) | Category: Chemical Biology | Chemical News | Drug Development | Drug Industry History

March 17, 2014

Predicting What Group to Put On Next

Posted by Derek

Here's a new paper in J. Med. Chem. on software that tries to implement matched-molecular-pair type analysis. The goal is a recommendation - what R group should I put on next?

Now, any such approach is going to have to deal with this paper from Abbott in 2008. In that one, an analysis of 84,000 compounds across 30 targets strongly suggested that most R-group replacements had, on average, very little effect on potency. That's not to say that they don't or can't affect binding, far from it - just that over a large series, those effects are pretty much a normal distribution centered on zero. There are also analyses that claim the same thing for adding methyl groups - to be sure, there are many dramatic "magic methyl" enhancement examples, but are they balanced out, on the whole, by a similar number of dramatic drop-offs, along with a larger cohort of examples where not much happened at all?

To their credit, the authors of this new paper reference these others right up front. The answer to these earlier papers, most likely, is that when you average across all sorts of binding sites, you're going to see all sorts of effects. For this sort of prediction to work, you've got a far better chance of getting something useful if you're working within the same target or assay. Here we get to the nuts and bolts:

The predictive method proposed, Matsy, relies on the hypothesis that a particular matched series tends to have a preferred activity order, for example, that not all six possible orders of [Br, Cl, F] are equally frequent. . .Although a rather straightforward idea, we have been unable to find any quantitative analysis of this question in the literature.

So they go on to provide one, with halogen substituents. There's not much to be found comparing pairs of halogen compounds head to head, but when you go to the longer series, you find that the order Br > Cl > F > H is by far the most common (and that appears to be just a good old grease effect). The next most common order just swaps the bromine and chlorine, but the third most common is the original order, in reverse. The other end of the distribution is interesting, too - for example, the least common order is Br > H > F > Cl, which is believable, since it doesn't make much sense along any property axis.
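
To make the idea concrete, here's a minimal sketch (Python, with made-up potency numbers; the paper itself doesn't publish code) of what counting preferred activity orders across matched series looks like:

```python
from collections import Counter

# Invented pIC50 values for the same scaffold carrying different halogens.
# Each dict is one matched series; the numbers are purely illustrative.
series_data = [
    {"Br": 7.9, "Cl": 7.5, "F": 6.8, "H": 6.1},
    {"Br": 6.2, "Cl": 6.6, "F": 5.9, "H": 5.5},
    {"Br": 8.1, "Cl": 7.7, "F": 7.0, "H": 6.4},
]

def activity_order(series):
    """Return the substituents sorted from most to least potent."""
    return tuple(sorted(series, key=series.get, reverse=True))

order_counts = Counter(activity_order(s) for s in series_data)
for order, count in order_counts.most_common():
    print(" > ".join(order), count)

# Over a real database, the claim is that Br > Cl > F > H dominates, and those
# frequency tables are what drive the suggestions for what to make next.
```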

They go on to do the same sorts of analyses for other matched series, and the question then becomes, if you have such a matched series in your own SAR, what does that order tell you about what to make next? The idea of "SAR transfer" has been explored, and older readers will remember the Topliss tree for picking aromatic substituents (do younger ones?)

The Matsy algorithm may be considered a formalism of aspects of how a medicinal chemist works in practice. Observing a particular trend, a chemist considers what to make next on the basis of chemical intuition, experience with related compounds or targets, and ease of synthesis. The structures suggested by Matsy preserve the core features of molecules while recommending small modifications, a process very much in line with the type of functional group replacement that is common in lead optimization projects. This is in contrast to recommendations from fingerprint-based similarity comparisons where the structural similarity is not always straightforward to rationalize and near-neighbors may look unnatural to a medicinal chemist.

And there's a key point: prediction and recommendation programs walk a fine line, between "There's no way I'm going out of my way to make that" and "I didn't need this program to tell me this". Sometimes there's hardly any space between those two territories at all. Where do this program's recommendations fall? As companies try this out in-house, some people will be finding out. . .

Comments (13) + TrackBacks (0) | Category: Drug Development | In Silico

March 11, 2014

Compassionate Use: An Especially Tough Case

Posted by Derek

Update: Chimerix says this evening that they will make their drug available to the boy in question as part of a new 20-patient open-label trial, after discussions with the FDA. This might have been the best way out of this, if it gives the company a better regulatory path forward at the same time. My guess, though, is that the company's position was becoming impossible to maintain no matter what.

Many of you will have seen the stories of a dying 7-year-old whose parents are seeking compassionate use access to a drug being developed by Chimerix. It's hard reading for a parent, or for anyone.

But I can do no better than echo John Carroll's editorial here. What it comes down to, as far as I can see, is that a company this size will go bankrupt if it tries to deal with all these requests. So under the current system, we have a choice: let small companies try to discover drugs like this, without granting access, or wipe them out by making them grant it. Even for large companies, it's rough, as I wrote about here. I don't have a good solution.

Comments (58) + TrackBacks (0) | Category: Drug Development

February 19, 2014

Ligand Efficiency: A Response to Shultz

Posted by Derek

I'd like to throw a few more logs on the ligand efficiency fire. Chuck Reynolds of J&J (author of several papers on the subject, as aficionados know) left a comment on an earlier post that I think needs some wider exposure. I've added links to the references:

An article by Shultz was highlighted earlier in this blog and is mentioned again in this post on a recent review of Ligand Efficiency. Shultz’s criticism of LE, and indeed drug discovery “metrics” in general hinges on: (1) a discussion about the psychology of various metrics on scientists' thinking, (2) an assertion that the original definition of ligand efficiency, DeltaG/HA, is somehow flawed mathematically, and (3) counter examples where large ligands have been successfully brought to the clinic.

I will abstain from addressing the first point. With regard to the second, the argument that there is some mathematical rule that precludes dividing a logarithmic quantity by an integer is wrong. LE is simply a ratio of potency per atom. The fact that a log is involved in computing DeltaG, pKi, etc. is immaterial. He makes a more credible point that LE itself is on average non-linear with respect to large differences in HA count. But this is hardly a new observation, since exactly this trend has been discussed in detail by previous published studies (here, here, here, and here). It is, of course, true that if one goes to very low numbers of heavy atoms the classical definition of LE gets large, but as a practical matter medicinal chemists have little interest in extremely small fragments, and the mathematical catastrophe he warns us against only occurs when the number of heavy atoms goes to zero (with a zero in the denominator it makes no difference if there is a log in the numerator). Why would HA=0 ever be relevant to a med. chem. program? In any case a figure essentially equivalent to the prominently featured Figure 1a in the Shultz manuscript appears in all of the four papers listed above. You just need to know they exist.

With regard to the third argument, yes of course there are examples of drugs that defy one or more of the common guidelines (e.g MW). This seems to be a general problem of the community taking metrics and somehow turning them into “rules.” They are just helpful, hopefully, guideposts to be used as the situation and an organization’s appetite for risk dictate. One can only throw the concept of ligand efficiency out the window completely if you disagree with the general principle that it is better to design ligands where the atoms all, as much as possible, contribute to that molecule being a drug (e.g. potency, solubility, transport, tox, etc.). The fact that there are multiple LE schemes in the literature is just a natural consequence of ongoing efforts to refine, improve, and better apply a concept that most would agree is fundamental to successful drug discovery.

Well, as far as the math goes, dividing a log by an integer is not any sort of invalid operation. I believe that [log(x)]/y is the same as saying log(x to the one over y). That is, log(16) divided by 2 is the same as the log of 16 to the one-half power, or log(4). They both come out to about 0.602. Taking a BEI calculation as a real chemistry example, a one-micromolar compound that weighs 250 would, by the usual definition, -log(Ki)/(MW/1000), have a BEI of 6/0.25, or 24. By the rule above, if you want to keep everything inside the log function, then instead of dividing -log(0.000001) by 0.25, you raise that one-micromolar figure to the fourth power (since 1/0.25 is 4), take the log of the result, and flip the sign. One-millionth to the fourth power is one times ten to the minus twenty-fourth, so that gives you. . .24. No problem.
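
For anyone who wants to check that arithmetic, a few lines of Python (using only the numbers from this example, nothing else assumed) bear it out:

```python
import math

# Dividing a log by an integer: log(x)/y equals log(x**(1/y)).
x, y = 16.0, 2.0
print(math.log10(x) / y)           # ~0.602
print(math.log10(x ** (1 / y)))    # ~0.602, the same value

# The BEI example: a one-micromolar compound (Ki = 1e-6 M) with MW 250.
ki, mw = 1e-6, 250.0
print(-math.log10(ki) / (mw / 1000.0))   # 6 / 0.25 = 24
print(-math.log10(ki ** (1 / 0.25)))     # (1e-6)**4 = 1e-24, so also 24
```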

Shultz's objection that LE is not linear per heavy atom, though, is certainly valid, as Reynolds notes above as well. You have to realize this and bear it in mind while you're thinking about the topic. I think that one of the biggest problems with these metrics - and here's a point that both Reynolds and Shultz can agree on, I'll bet - is that they're tossed around too freely by people who would like to use them as a substitute for thought in the first place.

Comments (19) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico

February 11, 2014

Drug Discovery in India

Posted by Derek

Molecular biologist Swapnika Ramu, a reader from India, sends along a worthwhile (and tough) question. She says that after her PhD (done in the US), her return to India has made her "less than optimistic" about the current state of drug discovery there. (Links in the quote below have been added by me, not her.)

Firstly, there isn't much by way of new drug development in India. Secondly, as you have discussed many times on your blog. . .drug pricing in India remains highly contentious, especially with the recent patent disputes. Much of the public discourse descends into anti-big pharma rhetoric, and there is little to no reasoned debate about how such issues should be resolved. . .

I would like to hear your opinion on what model of drug discovery you think a developing nation like India should adopt, given the constraints of finance and a limited talent pool. Target-based drug discovery was the approach that my previous company adopted, and not surprisingly this turned out to be a very expensive strategy that ultimately offered very limited success. Clearly, India cannot keep depending upon Western pharma companies to do all the heavy lifting when it comes to developing new drugs, simply to produce generic versions for the Indian public. The fact that several patents are being challenged in Indian courts would make pharma skittish about the Indian market, which is even more of a concern if we do not have a strong drug discovery ecosystem of our own. Since there isn't a robust VC-based funding mechanism, what do you think would be a good approach to spurring innovative drug discovery in the Indian context?

Well, that is a hard one. My own opinion is that India has a limited talent pool only as compared to Western Europe or the US - the country still has a lot more trained chemists and biologists than most other places. It's true, though, that the numbers don't tell the story very well. The best people from India are very, very good, but there are (from what I can see) a lot of poorly trained ones with degrees that seem (at least to me) worth very little. Still, you've got a really substantial number of real scientists, and I've no doubt that India could have several discovery-driven drug companies if the financing were easier to come by (and the IP situation a bit less murky - those two factors are surely related). Whether it would have those, or even should, is another question.

As has been clear for a while, the Big Pharma model has its problems. Several players are in danger of falling out of the ranks (Lilly, AstraZeneca), and I don't really see anyone rising up to replace them. The companies that have grown to that size in the last thirty years mostly seem to be biotech-driven (Amgen, Biogen, Genentech as was, etc.)

So is that the answer? Should Indian companies try to work more in that direction than in small molecule drugs? Problem is, the barriers to entry in biotech-derived drugs are higher, and that strategy perhaps plays less to the country's traditional strengths in chemistry. But in the same way that even less-developed countries are trying to skip over the landline era of telephones and go straight to wireless, maybe India should try skipping over small molecules. I do hate to write that, but it's not a completely crazy suggestion.

But biomolecule or small organic, to get a lot of small companies going in India (and you would need a lot, given the odds) you would need a VC culture, which isn't there yet. The alternative (and it's doubtless a real temptation for some officials) would be for the government to get involved to try to start something, but I would have very low hopes for that, especially given the well-known inefficiencies of the Indian bureaucracy.

Overall, I'm not sure if there's a way for most countries not to rely on foreign companies for most (or all) of the new drugs that come along. Honestly, the US is the only country in the world that might be able to get along with only its own home-discovered pharmacopeia, and it would still be a terrible strain to lose the European (and Japanese) discoveries. Even the likes of Japan, Switzerland, and Germany use, for the most part, drugs that were discovered outside their own countries.

And in the bigger picture, we might be looking at a good old David Ricardo-style case of comparative advantage. It sure isn't cheap to discover a new drug in Boston, San Francisco, Basel, etc., but compared to the expense of getting pharma research in Hyderabad up to speed, maybe it's not quite as bad as it looks. In the longer term, I think that India, China, and a few other countries will end up with more totally R&D-driven biomedical research companies of their own, because the opportunities are still coming along, discoveries are still being made, and there are entrepreneurial types who may well feel like taking their chances on them. But it could take a lot longer than some people would like, particularly researchers (like Swapnika Ramu) who are there right now. The best hope I can offer is that Indian entrepreneurs should keep their eyes out for technologies and markets that are new enough (and unexplored enough) so that they're competing on a more level playing field. Trying to build your own Pfizer is a bad idea - heck, the people who built Pfizer seem to be experiencing buyer's remorse themselves.

Comments (30) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

February 5, 2014

More Ligand Efficiency

Posted by Derek

Here's a new review on ligand efficiency metrics in drug discovery. It references the papers that Michael Shultz has written on this topic, but (as far as I can tell) doesn't directly address his criticisms.
[Chart: mean LE and LLE values across target classes]
There's a lot of data in this paper, and it's worth reading for its discussion of ligand binding thermodynamics, even if you're not sure what you think about ligand efficiency. But if you are thinking about it (and I'd especially recommend thinking about LipE/LLE), then here's a chart to give you an idea of where you stand. It shows the mean LE and LLE values for a large range of compounds against 329 targets, and may give you something to shoot for. The LLE of carbonic anhydrase inhibitors is hard to beat (that sulfonamide binding to zinc does it), but, then, you're probably not targeting carbonic anhydrase, anyway.
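
For anyone who wants to see where their own compounds would land on a chart like this, here's a quick sketch using the standard textbook definitions (my own shorthand, not anything specific to this review):

```python
# LE: ~1.37 * pIC50 per heavy atom (kcal/mol per heavy atom;
# 2.303*RT is about 1.37 kcal/mol at 298 K). LLE (a.k.a. LipE): pIC50 minus logP.
def ligand_efficiency(pic50, heavy_atoms):
    return 1.37 * pic50 / heavy_atoms

def lipophilic_ligand_efficiency(pic50, clogp):
    return pic50 - clogp

# A 10 nM compound (pIC50 = 8) with 30 heavy atoms and a cLogP of 3:
print(ligand_efficiency(8.0, 30))              # ~0.37
print(lipophilic_ligand_efficiency(8.0, 3.0))  # 5.0
```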

Comments (9) + TrackBacks (0) | Category: Drug Development

January 22, 2014

A New Book on Scaffold Hopping

Posted by Derek

I've been sent a copy of Scaffold Hopping in Medicinal Chemistry, a new volume from Wiley, edited by Nathan Brown of the Institute of Cancer Research in London. There are eighteen chapters - five on identifying and characterizing scaffolds to start with, ten on various computational approaches to scaffold-hopping, and three case histories.

One of the things you realize quickly when you start thinking about (or reading about) that topic is that scaffolds are in the eye of the beholder, and that's what those first chapters are trying to come to grips with. Figuring out the "maximum common substructure" of a large group of analogs, for example, is not an easy problem at all, certainly not by eyeballing, and not through computational means, either (it's not solvable in polynomial time, if we want to get formal about it). One chemist will look at a pile of compounds and say "Oh yeah, the isoxazoles from Project XYZ", while someone who hasn't seen them before might say "Hmm, a bunch of amide heterocycles" or "A bunch of heterobiaryls" or what have you.
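
As an illustration of how non-obvious this gets, even a toolkit will hand you an answer that may not match anyone's mental picture of the scaffold. Here's a sketch using RDKit (my choice for the example; the book doesn't prescribe any particular software) on a few invented analogs:

```python
from rdkit import Chem
from rdkit.Chem import rdFMCS

# Three invented ortho-substituted analogs; real project series are messier.
smiles = ["Cc1ccccc1CC(=O)N", "Clc1ccccc1CC(=O)O", "COc1ccccc1CCN"]
mols = [Chem.MolFromSmiles(s) for s in smiles]

# The exact search can blow up combinatorially, hence the timeout (in seconds).
result = rdFMCS.FindMCS(mols, timeout=10)
print(result.smartsString)               # the shared substructure, as SMARTS
print(result.numAtoms, result.numBonds)  # how much of each molecule it covers
```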

Another big question is how far you have to move in order to qualify as having hopped to another scaffold. My own preference is strictly empirical: if you've made a change that would be big enough to make most people draw a new Markush structure compared to your current series, you've scaffold-hopped. Ideally, you've kept the activity at your primary target, but changed it in the counterscreens or changed the ADMET properties. That's not to say that all these changes are going to be beneficial - people try this sort of thing all the time and wipe out the primary activity, or pick up even more clearance or hERG than the original series had. But those are the breaks.

And those are the main reasons that people do this sort of thing: to work out of a patent corner, to fix selectivity, or to get better properties. The appeal is that you might be able to address these without jettisoning everything you learned about the SAR of the previous compounds. If this is a topic of interest, especially from the computational angles, this book is certainly worth a look.

Comments (1) + TrackBacks (0) | Category: Drug Development | Patents and IP | Pharmacokinetics

January 13, 2014

Alnylam Makes It (As Does RNAi?)

Posted by Derek

I've written about Alnylam, one of the flagship RNA interference companies, a few times around here. A couple of years ago, I was wondering if they'd win the race to come up with results that would keep the doors open.

Well, if you haven't been keeping up with the news in this space, they made it. Sanofi has just bought a large stake in the company, on the strength of the recent clinical results with patisiran, an RNAi therapy for the rare disease transthyretin-mediated amyloidosis (ATTR). Alnylam has a lot on their schedule these days, and the Sanofi deal will provide a big boost towards getting clinical data on all these ideas. Congratulations to them, and to RNAi in general, which has had a lengthy (and often overhyped) growth phase, and now might be starting to realize its promise.

Update: more on the story here.

Comments (7) + TrackBacks (0) | Category: Business and Markets | Drug Development

January 10, 2014

A New Look At Clinical Attrition

Posted by Derek

Thanks to this new article in Nature Biotechnology, we have recent data on the failure rates in drug discovery. Unfortunately, this means that we have recent data on the failure rates in drug discovery, and the news is not good.

The study is the largest and most recent of its kind, examining success rates of 835 drug developers, including biotech companies as well as specialty and large pharmaceutical firms from 2003 to 2011. Success rates for over 7,300 independent drug development paths are analyzed by clinical phase, molecule type, disease area and lead versus nonlead indication status. . .Unlike many previous studies that reported clinical development success rates for large pharmaceutical companies, this study provides a benchmark for the broader drug development industry by including small public and private biotech companies and specialty pharmaceutical firms. The aim is to incorporate data from a wider range of clinical development organizations, as well as drug modalities and targets. . .

To illustrate the importance of using all indications to determine success rates, consider this scenario. An antibody is developed in four cancer indications, and all four indications transition successfully from phase 1 to phase 3, but three fail in phase 3 and only one succeeds in gaining FDA approval. Many prior studies reported this as 100% success, whereas our study differentiates the results as 25% success for all indications, and 100% success for the lead indication. Considering the cost and time spent on the three failed phase 3 indications, we believe including all 'development paths' more accurately reflects success and R&D productivity in drug development.

So what do they find? 10% of all indications in Phase I eventually make it through the FDA, which is in line with what most people think. Failure rates are in the 30% range in Phase I, the 60% range in Phase II, 30 to 40% in Phase III, and in the teens at the NDA-to-approval stage. Broken out by drug class (antibody, peptide, small molecule, vaccine, etc.), the class with the most brutal attrition is (you guessed it) small molecules: slightly over 92% of those entering Phase I did not make it to approval.
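
Those per-phase numbers compound just about the way you'd expect. Using rough success rates consistent with those failure ranges (approximations read off the description above; the exact figures are in the paper), a quick check:

```python
# Approximate per-phase success rates implied by the failure rates quoted above.
phase1, phase2, phase3, nda = 0.65, 0.33, 0.60, 0.85

overall = phase1 * phase2 * phase3 * nda
print(round(overall, 3))   # ~0.11, close to the ~10% overall figure
```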

If you look at things by therapeutic area, oncology has the roughest row to hoe with over 93% failure. Its failure rate is still over 50% in Phase III, which is particularly hair-raising. Infectious disease, at the other end of the scale, is merely a bit over 83%. Phase II is where the different diseases really separate out by chance of success, which makes sense.

Overall, this is a somewhat gloomier picture than we had before, and the authors have reasonable explanations for it:

Factors contributing to lower success rates found in this study include the large number of small biotech companies represented in the data, more recent time frame (2003–2011) and higher regulatory hurdles for new drugs. Small biotech companies tend to develop riskier, less validated drug classes and targets, and are more likely to have less experienced development teams and fewer resources than large pharmaceutical corporations. The past nine-year period has been a time of increased clinical trial cost and complexity for all drug development sponsors, and this likely contributes to the lower success rates than previous periods. In addition, an increasing number of diseases have higher scientific and regulatory hurdles as the standard of care has improved over the past decade.

So there we have it - if anyone wants numbers, these are the numbers. The questions are still out there for all of us, though: how sustainable is a business with these kinds of failure rates? How feasible are the pricing strategies that can accommodate them? And what will break out of this system, anyway?

Comments (12) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History

January 9, 2014

Three Options With Five Billion to Spend

Posted by Derek

Here's a discussion on a tricky question:

If you had $5 billion to invest, which is the current average R&D spend required to develop and launch just one new drug (without any guarantee of reimbursement and market success), how would you invest it:

1. Directly into your internal R&D pipeline, based on established approaches to drug discovery and development which are currently giving an ROI of only about 5% (and rapidly declining)?

2. Acquiring new product candidates externally, at full market price in an increasingly competitive environment?

3. Building a portfolio of say 50 independent projects to explore completely new and different approaches to drug discovery & development, each with a 2% probability of doubling your ROI indefinitely into the future?

Those alternatives are not exactly phrased in a neutral manner, but they're hard to be neutral about. My take on them is that option (1) is the one that's least likely to get you removed by your board of directors, because it spreads the blame around. Option (2) would be insane, if it were the only thing you did, but perfectly reasonable as a complement to either (1) or (3). And option (3), well. . .the problem with that one is finding 50 "completely new and different" approaches to drug R&D. I don't think that there are that many, honestly. And I also doubt (very strongly) if they all have as much as a 2% chance of succeeding.
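
Taking the hypothetical in option (3) at face value for a moment, the arithmetic is at least worth a glance:

```python
# 50 independent projects, each with a 2% chance of paying off - the question's
# own assumptions, not an endorsement of either number.
p_at_least_one = 1 - (1 - 0.02) ** 50
print(round(p_at_least_one, 3))   # ~0.636
```

A roughly 64% chance of at least one big winner is exactly the sort of result that makes those two premises worth questioning.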

So if I were a CEO, and God forbid, I would do enough (1) to buy me some cover, enough (2) to try to keep the Street happy, and spend whatever I had left on (3). I would not, of course, phrase my decisions in those terms. Sound good?

Comments (41) + TrackBacks (0) | Category: Drug Development

January 6, 2014

Positive Rules and Negative Ones

Posted by Derek

I enjoyed this take on med-chem, and I think he's right:

There are a large set of "don't do this". When they predict failure, you usually shouldn't go there as these rules are moderately reliable.

There is an equally large set of "when you encounter this situation, try this" rules. Their positive predictive power is very very low.

Even the negative rules, the what-to-avoid category, aren't as hard and fast as one would like. There are some pretty unlikely-looking drugs out there (fosfomycin, nitroglycerine, suramin, and see that link above for more). These structures aren't telling you to go out and immediately start imitating them, but what they are telling you is that things that you'd throw away can work.

But those rules are still right more often than the "Here's what to do when . . ." ones, as John Alan Tucker is saying. Every experienced medicinal chemist has a head full of these things - reduce basicity to get out of hERG problems, change the logP for blood-brain-barrier penetration, substitute next to a phenol to slow glucuronidation, switch tetrazole/COOH, make a prodrug, change the salt, and on and on. These work, sometimes, but you have to try them every time before moving on to anything more exotic.

And it's the not-always-right nature of the negative rules, coupled with the not-completely-useless nature of the positive ones, that gives everyone room to argue. Someone has always tried XYZ that worked, while someone else has always tried XYZ when it didn't do a thing. Pretty much any time you try to lay down the law about structures that should or shouldn't be made, you can find arguments on the other side. The rule-of-five type guidelines look rather weak when you think about all the exceptions to them, but they look pretty strong when you compare them to all the other rules that people have tried, and so on.

In the end, all we can do is narrow our options down from an impossible number to a highly improbable number. When (or if) we can do better, medicinal chemistry will change a great deal, but until then. . .

Comments (8) + TrackBacks (0) | Category: Drug Development | Life in the Drug Labs

December 3, 2013

Merck's Drug Development in The New Yorker

Posted by Derek

The New Yorker has an article about Merck's discovery and development of suvorexant, their orexin inhibitor for insomnia. It also goes into the (not completely reassuring) history of zolpidem (known under the brand name of Ambien), which is the main (and generic) competitor for any new sleep drug.

The piece is pretty accurate about drug research, I have to say:

John Renger, the Merck neuroscientist, has a homemade, mocked-up advertisement for suvorexant pinned to the wall outside his ground-floor office, on a Merck campus in West Point, Pennsylvania. A woman in a darkened room looks unhappily at an alarm clock. It’s 4 a.m. The ad reads, “Restoring Balance.”

The shelves of Renger’s office are filled with small glass trophies. At Merck, these are handed out when chemicals in drug development hit various points on the path to market: they’re celebrations in the face of likely failure. Renger showed me one. Engraved “MK-4305 PCC 2006,” it commemorated the day, seven years ago, when a promising compound was honored with an MK code; it had been cleared for testing on humans. Two years later, MK-4305 became suvorexant. If suvorexant reaches pharmacies, it will have been renamed again—perhaps with three soothing syllables (Valium, Halcion, Ambien).

“We fail so often, even the milestones count for us,” Renger said, laughing. “Think of the number of people who work in the industry. How many get to develop a drug that goes all the way? Probably fewer than ten per cent.”

I well recall when my last company closed up shop - people in one wing were taking those things and lining them up out on a window shelf in the hallway, trying to see how far they could make them reach. Admittedly, they bulked out the lineup with Employee Recognition Awards and Extra Teamwork awards, but there were plenty of oddly shaped clear resin thingies out there, too.

The article also has a good short history of orexin drug development, and it happens just the way I remember it - first, a potential obesity therapy, then sleep disorders (after it was discovered that a strain of narcoleptic dogs lacked functional orexin receptors).

Mignot recently recalled a videoconference that he had with Merck scientists in 1999, a day or two before he published a paper on narcoleptic dogs. (He has never worked for Merck, but at that point he was contemplating a commercial partnership.) When he shared his results, it created an instant commotion, as if he’d “put a foot into an ants’ nest.” Not long afterward, Mignot and his team reported that narcoleptic humans lacked not orexin receptors, like dogs, but orexin itself. In narcoleptic humans, the cells that produce orexin have been destroyed, probably because of an autoimmune response.

Orexin seemed to be essential for fending off sleep, and this changed how one might think of sleep. We know why we eat, drink, and breathe—to keep the internal state of the body adjusted. But sleep is a scientific puzzle. It may enable next-day activity, but that doesn’t explain why rats deprived of sleep don’t just tire; they die, within a couple of weeks. Orexin seemed to turn notions of sleep and arousal upside down. If orexin turns on a light in the brain, then perhaps one could think of dark as the brain’s natural state. “What is sleep?” might be a less profitable question than “What is awake?”

There's also a lot of good coverage of the drug's passage through the FDA, particularly the hearing where the agency and Merck argued about the dose. (The FDA was inclined towards a lower 10-mg tablet, but Merck feared that this wouldn't be enough to be effective in enough patients, and had no desire to launch a drug that would get the reputation of not doing very much).

A few weeks later, the F.D.A. wrote to Merck. The letter encouraged the company to revise its application, making ten milligrams the drug’s starting dose. Merck could also include doses of fifteen and twenty milligrams, for people who tried the starting dose and found it unhelpful. This summer, Rick Derrickson designed a ten-milligram tablet: small, round, and green. Several hundred of these tablets now sit on shelves, in rooms set at various temperatures and humidity levels; the tablets are regularly inspected for signs of disintegration.

The F.D.A.’s decision left Merck facing an unusual challenge. In the Phase II trial, this dose of suvorexant had helped to turn off the orexin system in the brains of insomniacs, and it had extended sleep, but its impact didn’t register with users. It worked, but who would notice? Still, suvorexant had a good story—the brain was being targeted in a genuinely innovative way—and pharmaceutical companies are very skilled at selling stories.

Merck has told investors that it intends to seek approval for the new doses next year. I recently asked John Renger how everyday insomniacs would respond to ten milligrams of suvorexant. He responded, “This is a great question.”

There are, naturally, a few shots at the drug industry throughout the article. But it's not like our industry doesn't deserve a few now and then. Overall, it's a good writeup, I'd say, and gets across the later stages of drug development pretty well. The earlier stages are glossed over a bit, by comparison. If the New Yorker would like for me to tell them about those parts sometime, I'm game.

Comments (28) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History | The Central Nervous System

November 25, 2013

Lipinski's Anchor

Posted by Derek

Michael Shultz of Novartis is back with more thoughts on how we assign numbers to drug candidates. Previously, he's written about the mathematical wrongness of many of the favorite metrics (such as ligand efficiency), in a paper that stirred up plenty of comment.

His new piece in ACS Medicinal Chemistry Letters is well worth a look, although I confess that (for me) it seemed to end just when it was getting started. But that's the limitation of a Viewpoint article for a subject with this much detail in it.

Shultz makes some very good points by referring to Daniel Kahneman's Thinking, Fast and Slow, a book that's come up several times around here as well (in both posts and comments). The key concept here is called "attribute substitution", the mental process by which we take a complex situation that we find unworkable and swap in some other scheme that we can deal with. We then convince ourselves, often quickly, silently, and without realizing that we're doing it, that we now have a handle on the situation, just because we now have something in our heads that is more understandable. That "Ah, now I get it" feeling is often a sign that you're making headway on some tough subject, but you can also get it from understanding something that doesn't actually help you with the real problem at all.

And I'd say that this is the take-home for this whole Viewpoint article, that we medicinal chemists are fooling ourselves when we use ligand efficiency and similar metrics to try to understand what's going on with our drug candidates. Shultz goes on to discuss what he calls "Lipinski's Anchor". Anchoring is another concept out of Thinking, Fast and Slow, and here's the application:

The authors of the ‘rules of 5’ were keenly aware of their target audience (medicinal chemists) and “deliberately excluded equations and regression coefficients...at the expense of a loss of detail.” One of the greatest misinterpretations of this paper was that these alerts were for drug-likeness. The authors examined the World Drug Index (WDI) and applied several filters to identify 2245 drugs that had at least entered phase II clinical development. Applying a roughly 90% cutoff for property distribution, the authors identified four parameters (MW, logP, hydrogen bond donors, and hydrogen bond acceptors) that were hypothesized to influence solubility and permeability based on their difference from the remainder of the WDI. When judging probability, people rely on representativeness heuristics (a description that sounds highly plausible), while base-rate frequency is often ignored. When proposing oral drug-like properties, the Gaussian distribution of properties was believed, de facto, to represent the ability to achieve oral bioavailability. An anchoring effect is when a number is considered before estimating an unknown value and the original number significantly influences future estimates. When a simple, specific, and plausible MW of 500 was given as cutoff for oral drugs, this became the mother of all medicinal chemistry anchors.
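
For reference, the alerts being described boil down to something this simple (a paraphrase of the usual cutoffs, not code from the original paper):

```python
# The four 'rule of 5' parameters: MW, cLogP, H-bond donors, H-bond acceptors.
# The usual alert fires when more than one cutoff is exceeded.
def ro5_violations(mw, clogp, hbd, hba):
    violations = 0
    if mw > 500:
        violations += 1
    if clogp > 5:
        violations += 1
    if hbd > 5:
        violations += 1
    if hba > 10:
        violations += 1
    return violations

print(ro5_violations(450, 3.2, 2, 7))   # 0 -> no alert
print(ro5_violations(620, 5.8, 1, 9))   # 2 -> flagged
```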

But how valid are molecular weight cutoffs, anyway? That's a topic that's come up around here a few times, too, as well it should. Comparisons of the properties of orally available drugs across their various stages of development seem to suggest that such measurements converge on what we feel are the "right" values, but as Shultz points out, there could be other reasons for the data to look that way. And he makes this recommendation: "Since the average MW of approved oral drugs has been increasing while the failure rate due to PK/bioavailability has been decreasing, the hypothesis linking size and bioavailability should be reconsidered."

I particularly like another line, which could probably serve as the take-home message for the whole piece: "A clear understanding of probabilities in drug discovery is impossible due to the large number of known and unknown variables." I agree. And I think that's the root of the problem, because a lot of people are very, very uncomfortable with that kind of talk. The more business-school training they have, the less they like the sound of it. The feeling is that if we'd just use modern management techniques, it wouldn't have to be this way. Closer to the science end of things, the feeling is that if we'd just apply the right metrics to our work, it wouldn't have to be that way, either. Are both of these mindsets just examples of attribute substitution at work?

In the past, I've said many times that if I had to work from a million compounds that were within rule-of-five cutoffs versus a million that weren't, I'd go for the former every time. And I'm still not ready to ditch that bias, but I'm certainly ready to start running up the Jolly Roger about things like molecular weight. I still think that the clinical failure rate is higher for significantly greasier compounds (both because of PK issues and because of unexpected tox). But molecular weight might not be much of a proxy for the things we care about.

This post is long enough already, so I'll address Shultz's latest thoughts on ligand efficiency in another entry. For those who want more 50,000-foot viewpoints on these issues, though, these older posts will have plenty.

Comments (44) + TrackBacks (0) | Category: Drug Development | Drug Industry History

November 12, 2013

Leaving Antibiotics: An Interview

Posted by Derek

Here's the (edited) transcript of an interview that Pfizer's VP of clinical research, Charles Knirsch, gave to PBS's Frontline program. The subject was the rise of resistant bacteria - which is a therapeutic area that Pfizer is no longer active in.

And that's the subject of the interview, or one of its main subjects. I get the impression that the interviewer would very much like to tell a story about how big companies walked away to let people die because they couldn't make enough money off of them:

. . .If you look at the course of a therapeutic to treat pneumonia, OK, … we make something, a macrolide, that does that. It’s now generic, and probably the whole course of therapy could cost $30 or $35. Even when it was a branded antibiotic, it may have been a little bit more than that.

So to cure pneumonia, which in some patient populations, particularly the elderly, has a high mortality, that’s what people are willing to pay for a therapeutic. I think that there are differences across different therapeutic areas, but for some reason, with antibacterials in particular, I think that society doesn’t realize the true value.

And did it become incumbent upon you at some point to make choices about which things would be in your portfolio based on this?

Based on our scientific capabilities and the prudent allocation of capital, we do make these choices across the whole portfolio, not just with antibacterials.

But talk to me about the decision that went into antibacterials. Pfizer made a decision in 2011 and announced the decision. Obviously you were making choices among priorities. You had to answer to your shareholders, as you’ve explained, and you shifted. What went into that decision?

I think that clearly our vaccine platforms are state of the art. Our leadership of the vaccine group are some of the best people in the industry or even across the industry or anywhere really. We believe that we have a higher degree of success in those candidates and programs that we are currently prosecuting.

So it’s a portfolio management decision, and if our vaccine for Clostridium difficile —

A bacteria.

Yeah, a bacteria which is a major cause of both morbidity and mortality of patients in hospitals, the type of thing that I would have been consulted on as an infectious disease physician, that in fact we will prevent that, and we’ll have a huge impact on human health in the hospitals.

But did that mean that you had to close down the antibiotic thing to focus on vaccines? Why couldn’t you do both?

Oh, good question. And it’s not a matter of closing down antibiotics. We were having limited success. We had had antibiotics that we would get pretty far along, and a toxicity would emerge either before we even went into human testing or actually in human testing that would lead to discontinuation of those programs. . .

It's that last part that I think is insufficiently appreciated. Several large companies have left the antibiotic field over the years, but several stayed (GlaxoSmithKline and AstraZeneca come to mind). But the ones who stayed were not exactly rewarded for their efforts. Antibacterial drug discovery, even if you pour a lot of money and effort into it, is very painful. And if you're hoping to introduce a new mechanism of action into the field, good luck. It's not impossible, but if it were easy to do, more small companies would have rushed in to do it.

Knirsch doesn't have an enviable task here, because the interviewer pushes him pretty hard. Falling back on the phrase "portfolio management decisions" doesn't help much, though:

In our discussion today, I get the sense that you have to make some very ruthless decisions about where to put the company’s capital, about where to invest, about where to put your emphasis. And there are whole areas where you don’t invest, and I guess the question we’re asking is, do you learn lessons about that? When you pulled out of Gram-negative research like that and shifted to vaccines, do you look back on that and say, “We learned something about this”?

These are not ruthless decisions. These are portfolio decisions about how we can serve medical need in the best way. …We want to stay in the business of providing new therapeutics for the future. Our investors require that of us, I think society wants a Pfizer to be doing what we do in 20 years. We make portfolio management decisions.

But you didn’t stay in this field, right? In Gram negatives you didn’t really stay in that field. You told me you shifted to a new approach.

We were not having scientific success, there was no clear regulatory pathway forward, and the return on any innovation did not appear to be something that would support that program going forward.

Introducing the word "ruthless" was a foul, and I'm glad the whistle was blown. I might have been tempted to ask the interviewer what it meant, ruthless, and see where that discussion went. But someone who gives in to temptations like that probably won't make VP at Pfizer.

Comments (51) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Infectious Diseases

November 11, 2013

The Past Twenty Years of Drug Development, Via the Literature

Posted by Derek

Here's a new paper in PLoS ONE on drug development over the past 20 years. The authors are using a large database of patents and open literature publications, and trying to draw connections between those two, and between individual drug targets and the number of compounds that have been disclosed against them. Their explanation of patents and publications is a good one:

. . .We have been unable to find any formal description of the information flow between these two document types but it can be briefly described as follows. Drug discovery project teams typically apply for patents to claim and protect the chemical space around their lead series from which clinical development candidates may be chosen. This sets the minimum time between the generation of data and its disclosure to 18 months. In practice, this is usually extended, not only by the time necessary for collating the data and drafting the application but also where strategic choices may be made to file later in the development cycle to maximise the patent term. It is also common to file separate applications for each distinct chemical series the team is progressing.

While some drug discovery operations may eschew non-patent disclosure entirely, it is nevertheless common practice (and has business advantages) for project teams to submit papers to journals that include some of the same structures and data from their patents. While the criteria for inventorship are different than for authorship, there are typically team members in-common between the two types of attribution. Journal publications may or may not identify the lead compound by linking the structure to a code name, depending on how far this may have progressed as a clinical candidate.

The time lag can vary between submitting manuscripts immediately after filing, waiting until the application has published, deferring publication until a project has been discontinued, or the code name may never be publically resolvable to a structure. A recent comparison showed that 6% of compound structures exemplified in patents were also published in journal articles. While the patterns described above will be typical for pharmaceutical and biotechnology companies, the situation in the academic sector differs in a number of respects. Universities and research institutions are publishing increasing numbers of patents for bioactive compounds but their embargo times for publication and/or upload of screening results to open repositories, such as PubChem BioAssay, are generally shorter.

There are also a couple of important factors to keep in mind during the rest of the analysis. The authors point out that their database includes a substantial number of "compounds" which are not small, drug-like molecules (these are antibodies, proteins, large natural products, and so on). (In total, from 1991 to 2010 they have about one million compounds from journal articles and nearly three million from patents). And on the "target" side of the database, there are a significant number of counterscreens included which are not drug targets as such, so it might be better to call the whole thing a compound-to-protein mapping exercise. That said, what did they find?
[Chart: compounds per target, by year]
Here's the chart of compounds/target, by year. The peak and decline around 2005 is quite noticeable, and is corroborated by a search through the PCT patent database, which shows a plateau in pharmaceutical patents around this time (which has continued until now, by the way).

Looking at the target side of things, with those warnings above kept in mind, shows a different picture. The journal-publication side of things really has shown an increase over the last ten years, with an apparent inflection point in the early 2000s. What happened? I'd be very surprised if the answer didn't turn out to be genomics. If you want to see the most proximal effect of the human genomics frenzy from around that time, there you have it in the way that curve bends around 2001. Year-on-year, though (see the full paper for that chart), the targets mentioned in journal publications seem to have peaked in 2008 or so, and have either plateaued or actually started to come back down since then. (Update: Fixed the second chart, which had been a duplicate of the first.)
[Chart: targets by source, by year]
The authors go on to track a number of individual targets by their mentions in patents and journals, and you can certainly see a lot of rise-and-fall stories over the last 20 years. Those actual years should not be over-interpreted, though, because of the delays (mentioned above) in patenting, and the even longer delays, in some cases, for journal publication from inside pharma organizations.

So what's going on with the apparent decline in output? The authors have some ideas, as do (I'm sure) readers of this site. Some of those ideas probably overlap pretty well:

While consideration of all possible causative factors is outside the scope of this work it could be speculated that the dominant causal effect on global output is mergers and acquisition activity (M&A) among pharmaceutical companies. The consequences of this include target portfolio consolidations and the combining of screening collections. This also reduces the number of large units competing in the production of medicinal chemistry IP. A second related factor is less scientists engaged in generating output. Support for the former is provided by the deduction that NME output is directly related to the number of companies and for the latter, a report that US pharmaceutical companies are estimated to have lost 300,000 jobs since 2000. There are other plausible contributory factors where finding corroborative data is difficult but nonetheless deserve comment. Firstly, patent filing and maintenance costs will have risen at approximately the same rate as compound numbers. Therefore part of the decrease could simply be due to companies, quasi-synchronously, reducing their applications to control costs. While this happened for novel sequence filings over the period of 1995–2000, we are neither aware any of data source against which this hypothesis could be explicitly tested for chemical patenting nor any reports that might support it. Similarly, it is difficult to test the hypothesis of resource switching from “R” to “D” as a response to declining NCE approvals. Our data certainly infer the shrinking of “R” but there are no obvious metrics delineating a concomitant expansion of “D”. A third possible factor, a shift in the small-molecule:biologicals ratio in favour of the latter is supported by declared development portfolio changes in recent years but, here again, proving a causative coupling is difficult.

Causality is a real problem in big retrospectives like this. The authors, as you see, are appropriately cautious. (They also mention, as a good example, that a decline in compounds aimed at a particular target can be a signal of both success and of failure). But I'm glad that they've made the effort here. It looks like they're now analyzing the characteristics of the reported compounds with time and by target, and I look forward to seeing the results of that work.

Update: here's a lead author of the paper with more in a blog post.

Comments (22) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Patents and IP | The Scientific Literature

November 8, 2013

Exiting Two Therapeutic Areas

Email This Entry

Posted by Derek

So Bristol-Myers Squibb did indeed re-org itself yesterday, with the loss of about 75 jobs (and the shifting around of 300 more, which will probably result in some job losses as well, since not everyone is going to be able to do that). And they announced that they're getting out of two therapeutic areas, diabetes and neuroscience.

Those would be for very different reasons. Neuro is famously difficult and specialized. There are huge opportunities there, but they're opportunities because no one's been able to do much with them, for a lot of good reasons. Some of the biggest tar pits of drug discovery are to be found there (Alzheimer's, chronic pain), and even the diseases for which we have some treatments are near-total black boxes, mechanistically (schizophrenia, epilepsy and seizures). The animal models are mysterious and often misleading, and the clinical trials for the biggest diseases in this area are well-known to be expensive and tricky to run. You've got your work cut out for you over here.

Meanwhile, the field of diabetes and metabolic disorders is better served. For type I diabetes, the main thing you can do, short of finding ever more precise ways of dosing insulin, is to figure out how to restore islet function and cure it, and that's where all the effort seems to be going. For type II diabetes, which is unfortunately a large market and getting larger all the time, there are a number of therapeutic options. And while there's probably room for still more, the field is undeniably getting a bit crowded. Add that to the very stringent cardiovascular safety requirements, and you're looking at a therapeutic area that's not as attractive for new drug development as it was ten or fifteen years ago.

So I can see why a company would get out of these two areas, although it's also easy to think that it's a shame for this to happen. Neuroscience is in a particularly tough spot. The combination of uncertainty and big opportunities would tend to draw a lot of risk-taking startups to the area, but the massive clinical trials needed make it nearly impossible for a small company to get serious traction. So what we've been seeing are startups that, even more than other areas, are focused on getting to the point that a larger company will step in to pay the bills. That's not an abnormal business model, but it has its hazards, chief among them the temptation to run what trials you can with a primary goal of getting shiny numbers (and shiny funding) rather than finding out whether the drug has a more solid chance of working. Semi-delusional Phase II trials are a problem throughout the industry, but more so here.

Comments (58) + TrackBacks (0) | Category: Business and Markets | Diabetes and Obesity | Drug Development | The Central Nervous System

October 31, 2013

Merck's Aftermath

Email This Entry

Posted by Derek

So the picture that's emerging of Merck's drug discovery business after this round of cuts is confused, but some general trends seem to be present. West Point appears to have been very severely affected, with a large number of chemists shown the door, and reports tend to agree that bench chemists were disproportionately hit. The remaining department would seem to be top-heavy with managers.

Top-heavy, that is, unless the idea is that they're all going to be telling cheaper folks overseas what to make. So is Merck going over to the Pfizer-style model? I regard this as unproven on this scale. In fact, I have an even lower opinion of it than that, but I'm sure that my distaste for the idea is affecting my perceptions, so I have to adjust accordingly. (Not everything you dislike is incorrect, just as not every person that's annoying is wrong).

But it's worth realizing that this is a very old idea. It's Taylorism, after Frederick Taylor, whose thinking was very influential in business circles about 100 years ago. (That Wikipedia article is written in a rather opinionated style, which the site has flagged, but it's a very interesting read and I recommend it). One of Taylor's themes was division of labor between the people thinking about the job and the people doing it, and a clearer statement of what Pfizer (and now Merck) are trying to do is hard to come by.

The problem is, we are not engaged in the kind of work that Taylorism and its descendants have been most successfully applied to. That, of course, is assembly line work, or any work flow that consists of defined, optimizable processes. R&D has proven. . .resistant to such thinking, to put it mildly. It's easy to convince yourself that drug discovery consists of and should be broken up into discrete assembly-line units, but somehow the cranks don't turn very smoothly when such systems are built. Bits and pieces of the process can be smoothed out and improved, but the whole thing still seems tangled, somehow.

In fact, if I can use an analogy from the post I put up earlier this morning, it reminds me of the onset of turbulence from a regime of laminar flow. If you model the kinds of work being done in some sort of hand-waving complexity space, up to a point, things run smoothly and go where they're supposed to. But as you start to add in key steps where the driving forces, the real engines of progress, are things that have to be invented afresh each time and are not well understood to start with, then you enter turbulence. The workflow becomes messy and unpredictable. If your Reynolds numbers are too high, no amount of polish and smoothing will stop you from seeing turbulent flow. If your industrial output depends too much on serendipity, on empiricism, and on mechanisms that are poorly understood, then no amount of managerial smoothing will make things predictable.
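To make the fluid-dynamics analogy concrete, here's a minimal sketch (hypothetical pipe-flow numbers, nothing to do with Merck) of the point being borrowed: above a critical Reynolds number, no amount of polishing the pipe walls gets you laminar flow back.

```python
# Hypothetical illustration of the turbulence analogy: Reynolds number for pipe flow.
# Re = rho * v * D / mu; flow is roughly laminar below ~2300 and turbulent above ~4000.

def reynolds_number(density_kg_m3, velocity_m_s, diameter_m, viscosity_pa_s):
    return density_kg_m3 * velocity_m_s * diameter_m / viscosity_pa_s

def flow_regime(re):
    if re < 2300:
        return "laminar (predictable)"
    if re < 4000:
        return "transitional"
    return "turbulent (no amount of polishing the pipe fixes this)"

# Water in a 5 cm pipe at two hypothetical speeds
for v in (0.02, 1.0):
    re = reynolds_number(1000.0, v, 0.05, 1.0e-3)
    print(f"v = {v} m/s -> Re = {re:.0f} -> {flow_regime(re)}")
```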

This, I think, is my biggest problem with the "Outsource the grunt work and leave the planning to the higher-ups" idea. It assumes that things work more smoothly than they really do in this business. I'm also reminded a bit of the Chilean "Project Cybersyn", which was to be a sort of control room where wise planners could direct the entire country's economy. One of the smaller reasons to regret the 1973 coup against Allende is that the chance was missed to watch this system bang up against reality. And I wonder what will happen as this latest drug discovery scheme runs into it, too.

Update: a Merck employee says in the comments that there hasn't been talk of more outsourcing. If that proves to be the case, then just apply the above comments to Pfizer.

Comments (98) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Life in the Drug Labs

October 30, 2013

More Magic Methyls, Please

Email This Entry

Posted by Derek

Medicinal chemists have long been familiar with the "magic methyl" effect. That's the dramatic change in affinity that can be seen (sometimes) with the addition of a single methyl group in just the right place. (Alliteration makes that the phrase of choice, but there are magic fluoros, magic nitrogens, and others as well). The methyl group is also particularly startling to a chemist, because it's seen as electronically neutral and devoid of polarity - it's just a bump on the side of the molecule, right?
[Figure: magic methyl example]
Some bump. There's a very useful new paper in Angewandte Chemie that looks at this effect, and I have to salute the authors. They have a number of examples from the recent literature, and it couldn't have been easy to round them up. The methyl groups involved tend to change rotational barriers around particular bonds, alter the conformation of saturated rings, and/or add what is apparently just the right note of nonpolar interaction in some part of a binding site. It's important to remember just how small the energy changes need to be for things like this to happen.
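For a sense of the energy scale (a back-of-the-envelope sketch, not a calculation from the paper): at around 300 K, each factor of ten in binding affinity is worth only about 1.4 kcal/mol, so even a 100-fold "magic methyl" boost amounts to less than 3 kcal/mol of binding energy.

```python
# Back-of-the-envelope conversion of a fold-change in affinity to binding energy.
# ddG = -RT * ln(fold_change); at ~300 K, one log unit is about 1.37 kcal/mol.
import math

R_KCAL = 1.987e-3  # gas constant, kcal/(mol*K)
T = 300.0          # kelvin

def ddg_kcal_per_mol(fold_change_in_affinity):
    return -R_KCAL * T * math.log(fold_change_in_affinity)

for fold in (2, 10, 100):  # hypothetical potency gains from a single methyl
    print(f"{fold:>4}-fold gain  ->  {ddg_kcal_per_mol(fold):.2f} kcal/mol")
```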

The latter part of the paper summarizes the techniques for directly introducing methyl groups (as opposed to going back to the beginning of the sequence with a methylated starting material). And the authors call for more research into such reactions: wouldn't it be useful to be able to just staple a methyl group in next to the nitrogen of a piperidine, for example, rather than having to redo the whole synthesis? There are ways to methylate aryl rings, via metal-catalyzed couplings or lithium chemistry, but alkyl methylations are thin on the ground. (The ones that exist tend to rely on those same sorts of mechanisms).

Methyl-group reagents of the same sort that have been found for trifluoromethyl groups in recent years would be welcome - the sorts of things you could expose a compound to and have it just methylate the most electrophilic or nucleophilic site(s) to see what you'd get. This is part of a general need for alkyl C-H activation chemistries, which people have been working on for quite a while now. It's one of the great undersolved problems in synthetic chemistry, and I hope that progress gets made. Otherwise I might have to break into verse again, and no one wants that.

Comments (17) + TrackBacks (0) | Category: Chemical News | Drug Development

October 22, 2013

Size Doesn't Matter. Does Anything?

Email This Entry

Posted by Derek

There's a new paper in Nature Reviews Drug Discovery that tries to find out what factors about a company influence its research productivity. This is a worthy goal, but one that's absolutely mined with problems in gathering and interpreting the data. The biggest one is the high failure rate that afflicts everyone in the clinic: you could have a company that generates a lot of solid ideas, turns out good molecules, gets them into humans with alacrity, and still ends up looking like a failure because of mechanistic problems or unexpected toxicity. You can shorten those odds, for sure (or lengthen them!), but you can never really get away from that problem, or not yet.

The authors have a good data set to work from, though:

It is commonly thought that small companies have higher research and development (R&D) productivity compared with larger companies because they are less bureaucratic and more entrepreneurial. Indeed, some analysts have even proposed that large companies exit research altogether. The problem with this argument is that it has little empirical foundation. Several high-quality analyses comparing the track record of smaller biotechnology companies with established pharmaceutical companies have concluded that company size is not an indicator of success in terms of R&D productivity1, 2.

In the analysis presented here, we at The Boston Consulting Group examined 842 molecules over the past decade from 419 companies, and again found no correlation between company size and the likelihood of R&D success. But if size does not matter, what does?

Those 842 molecules cover the period 2002-2011, and of them, 205 made it to regulatory approval. (Side note: does this mean that the historical 90% failure rate no longer applies? Update: turns out that's the number of compounds that made it through Phase I, which sounds more like it). There were plenty of factors that seemed to have no discernible influence on success - company size, as mentioned, public versus private financing, most therapeutic area choices, market size for the proposed drug or indication, location in the US, Europe, or Asia, and so on. In all these cases, the size of the error bars leaves one unable to reject the null hypothesis (variation due to chance alone).
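To make that concrete, here's a minimal sketch with invented counts (not the paper's data) of the kind of comparison involved: the confidence interval on the difference in success rates has to exclude zero before you can claim the two groups really differ.

```python
# Illustrative only: comparing clinical success rates for two hypothetical groups
# of companies and asking whether the gap is distinguishable from chance.
import math

def diff_ci(success_a, n_a, success_b, n_b, z=1.96):
    """Difference in success proportions with an approximate 95% confidence interval."""
    p_a, p_b = success_a / n_a, success_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    d = p_a - p_b
    return d, (d - z * se, d + z * se)

# Made-up counts: 30/110 approvals for small companies vs. 22/95 for large ones
d, (low, high) = diff_ci(30, 110, 22, 95)
print(f"difference = {d:+.3f}, 95% CI = ({low:+.3f}, {high:+.3f})")
# If the interval straddles zero, the data can't rule out "no real difference".
```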

What factors do look like more than chance? The far ends of the therapeutic area choice, for one (CNS versus infectious disease, and these two only). But all the other indicators are a bit fuzzier. Publications (and patents) per R&D dollar spent are a positive sign, as is the experience (time-in-office) of the R&D heads. A higher termination rate in preclinical and Phase I correlated with eventual success, although I wonder if that's also a partial proxy for desperation, companies with no other option but to push on and hope for the best (see below for more on this point). A bit weirdly, frequent mention of ROI and the phrase "decision making" actually correlated positively, too.

The authors interpret most or all of these as proxy measurements of "scientific acumen and good judgement", which is a bit problematic. It's very easy to fall into circular reasoning that way - you can tell that the companies that succeeded had good judgement, because their drugs succeeded, because of their good judgement. But I can see the point, which is what most of us already knew: that experience and intelligence are necessary in this business, but not quite sufficient. And they have some good points to make about something that would probably help:

A major obstacle that we see to achieving greater R&D productivity is the likelihood that many low-viability compounds are knowingly being progressed to advanced phases of development. We estimate that 90% of industry R&D expenditures now go into molecules that never reach the market. In this context, making the right decision on what to progress to late-stage clinical trials is paramount in driving productivity. Indeed, researchers from Pfizer recently published a powerful analysis showing that two-thirds of the company's Phase I assets that were progressed could have been predicted to be likely failures on the basis of available data3. We have seen similar data privately as part of our work with many other companies.

Why are so many such molecules being advanced across the industry? Here, a behavioural perspective could provide insight. There is a strong bias in most R&D organizations to engage in what we call 'progression-seeking' behaviour. Although it is common knowledge that most R&D projects will fail, when we talk to R&D teams in industry, most state that their asset is going to be one of the successes. Positive data tends to go unquestioned, whereas negative data is parsed, re-analysed, and, in many cases, explained away. Anecdotes of successful molecules saved from oblivion often feed this dynamic. Moreover, because it is uncertain which assets will fail, the temptation is to continue working on them. This reaction is not surprising when one considers that personal success for team members is often tied closely to project progression: it can affect job security, influence within the organization and the ability to pursue one's passion. In this organizational context, progression-seeking behaviour is entirely rational.

Indeed it is. The sunk-cost fallacy should also be added in there, the "We've come so far, we can't quit now" thinking that has (in retrospect) led so many people into the tar pit. But they're right, many places end up being built to check the boxes and make the targets, not necessarily to get drugs out the door. If your organization's incentives are misaligned, the result is similar to trying to drive a nail by hitting it from an angle instead of straight on: all that force, being used to mess things up.

Comments (30) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

October 8, 2013

Forecasting Drug Sales: Har, Har.

Email This Entry

Posted by Derek

You're running a drug company, and you have a new product coming out. How much of it do you expect to sell? That sounds like a simple question to answer, but it's anything but, as a new paper in Nature Reviews Drug Discovery (from people at McKinsey, no less) makes painfully clear.

Given the importance of forecasting, we set out to investigate three questions. First, how good have drug forecasts been historically? And, more specifically, how good have estimates from sell-side analysts been at predicting the future? Second, what type of error has typically been implicated in the misses? Third, is there any type of drug that has been historically more easy or difficult to forecast?

The answer to the first question is "Not very good at all". They looked at drug launches from 2002-2011, a period which furnished hundreds of sales forecasts to work from. Over 60% of the consensus forecasts were wrong by 40% or more. Stop and think about that for a minute - and if you're in the industry, stop and think about the times you've seen these predictions made inside your own company. Remember how polished the PowerPoint slides were? How high-up the person was presenting them? How confident their voice was as they showed the numbers? All for nothing. If these figures had honest error bars on them, they'd stretch up and down the height of any useful chart. I'm reminded of what Fred Schwed had to say in Where Are the Customers' Yachts about stock market forecasts: "Concerning these predictions, we are about to ask: 1. Are they pretty good? 2. Are they slightly good? 3. Are they any damn good at all? 4. How do they compare with tomorrow's weather prediction you read in the paper? 5. How do they compare with the tipster horse race services?".
[Chart: distribution of forecast errors]
As you can see from the figure, the distribution of errors is quite funny-looking. If you start from the left-hand lowball side, you think you're going to be looking at a rough Gaussian curve, but then wham - it drops off, until you get to the wildly overoptimistic bin, which shows you that there's a terribly long tail stretching into the we're-gonna-be-rich category. This chart says a lot about human psychology and our approach to risk, and nothing it says is very complimentary. In case you're wondering, CNS and cardiovascular drugs tended to be overestimated compared to the average, and oncology drugs tended to be underestimated. That latter group is likely due to an underestimation of the possibility of new indications being approved.

Now, those numbers are all derived from forecasts in the year before the drugs launched. But surely things get better once the products are out on the market? Well, there was a trend for lower errors, certainly, but the forecasts were still (for example) off by 40% five years after the launch. The authors also say that forecasts for later drugs in a particular class were no more accurate than the ones for the first-in-class compounds. All of this really, really makes a person want to ask if all that time and effort that goes into this process is doing anyone any good at all.
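For what it's worth, the error metric under discussion is just the forecast's deviation expressed as a fraction of what the drug actually sold. A minimal sketch with invented numbers (not the paper's data):

```python
# Sketch of the error metric being discussed, with invented sales figures:
# signed percentage error of a consensus forecast versus actual sales.

def forecast_error(consensus_forecast, actual_sales):
    """Signed error as a fraction of actual sales (positive = overestimate)."""
    return (consensus_forecast - actual_sales) / actual_sales

# Hypothetical launches: (forecast $M, actual $M)
launches = [(900, 400), (300, 320), (1500, 2600), (250, 90), (700, 650)]

errors = [forecast_error(f, a) for f, a in launches]
badly_wrong = sum(abs(e) >= 0.4 for e in errors)
for (f, a), e in zip(launches, errors):
    print(f"forecast {f:>5} vs actual {a:>5}: {e:+.0%}")
print(f"{badly_wrong}/{len(launches)} forecasts off by 40% or more")
```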

Writing at Forbes, David Shaywitz (who also draws some lessons here from Taleb's Antifragile) doesn't seem to think that it is, but he doesn't think that anyone is going to want to hear about it:

Unfortunately, the new McKinsey report is unlikely to matter very much. Company forecasters will say their own data are better, and will point to examples of forecasts that happen to get it right. They will emphasize the elaborate methodologies they use, and the powerful algorithms they employ (all real examples from my time in the industry). Consultants, too, will continue to insist they can do it better.

And indeed, one of the first comments that showed up to his piece was from someone who appears to be doing just that. In fact, rather than show any shame about these numbers, plenty of people will see them as a marketing opportunity. But why should anyone believe the pitch? I think that this conclusion from the NRDD paper is a lot closer to reality:

Beware the wisdom of the crowd. The 'consensus' consists of well-compensated, focused professionals who have many years of experience, and we have shown that the consensus is often wrong. There should be no comfort in having one's own forecast being close to the consensus, particularly when millions or billions of dollars are on the line in an investment decision or acquisition situation.

The folks at Popular Science should take note of this. McKinsey Consulting has apparently joined the "War on Expertise"!

Comments (32) + TrackBacks (0) | Category: Business and Markets | Drug Development

September 20, 2013

Prosensa: One Duchenne Therapy Down

Email This Entry

Posted by Derek

In the post here the other day about Duchenne Muscular Dystrophy (DMD) I mentioned two other companies that are looking at transcriptional approaches: Prosensa (with GSK) and Sarepta. They've got antisense-driven exon-skipping mechanisms, rather than PTC's direct read-through one.

Well, Sarepta still does, anyway. Prosensa and GSK just announced clinical data on their agent, drisapersen, and it appears to have missed completely. The primary endpoint was a pretty direct one, total distance walked over six minutes, and they didn't make statistical significance versus placebo. This was over 48 weeks of treatment, and none of the secondary measures showed any signs either, from what I can see. I can't think of any way to spin this in any positive direction at all.

So drisapersen is presumably done. What does this say about Sarepta's candidate, eteplirsen? On the one hand, their major competitor has just been removed from the board. But on the other, the complete failure of such a closely related therapy can't help but raise doubts. I don't know enough about the differences between the two (PK?) to speculate, but it'll be interesting to see if Sarepta's stock zips up today, sells off, or (perhaps) fights to a draw between two groups of investors who are taking this news in very different ways.

That's the delirious fun of biotech investing. And that's just for the shareholders - you can imagine what it feels like to bet your whole company on this sort of thing. . .

Comments (4) + TrackBacks (0) | Category: Business and Markets | Clinical Trials | Drug Development

September 18, 2013

The Arguing Over PTC124 and Duchenne Muscular Dystrophy

Email This Entry

Posted by Derek

Does it matter how a drug works, if it works? PTC Therapeutics seems bent on giving everyone an answer to that question, because there sure seem to be a lot of questions about how ataluren (PTC124), their Duchenne Muscular Dystrophy (DMD) therapy, acts. This article at Nature Biotechnology does an excellent job explaining the details.

Premature "stop" codons in the DNA of DMD patients, particularly in the dystrophin gene, are widely thought to be one of the underlying problems in the disease. (The same mechanism is believed to operate in many other genetic-mutation-driven conditions as well. Ataluren is supposed to promote "read-through" of these to allow the needed protein to be produced anyway. That's not a crazy idea at all - there's been a lot of thought about ways to do that, and several aminoglycoside antibiotics have been shown to work through that mechanism. Of that class, gentamicin has been given several tries in the clinic, to ambiguous effect so far.

So screening for a better enhancer of stop codon read-through seems like it's worth a shot for a disease with so few therapeutic options. PTC did this using a firefly luciferase (Fluc) reporter assay. As with any assay, there are plenty of opportunities to get false positives and false negatives. Firefly luciferase, as a readout, suffers from instability under some conditions. And if its signal is going to wink out on its own, then a compound that stabilizes it will look like a hit in your assay system. Unfortunately, there's no particular market in humans for a compound that just stabilizes firefly luciferase.

That's where the argument is with ataluren. Papers have appeared from a team at the NIH detailing trouble with the FLuc readout. That second paper (open access) goes into great detail about the mechanism, and it's an interesting one. FLuc apparently catalyzes a reaction between PTC124 and ATP, to give a new mixed anhydride adduct that is a powerful inhibitor of the enzyme. The enzyme's normal mechanism involves a reaction between luciferin and ATP, and since luciferin actually looks like something you'd get in a discount small-molecule screening collection, you have to be alert to something like this happening. The inhibitor-FLuc complex keeps the enzyme from degrading, but the new PTC124-derived inhibitor itself is degraded by Coenzyme A - which is present in the assay mixture, too. The end result is more luciferase signal than you expect versus the controls, which looks like a hit from your reporter gene system - but isn't. PTC's scientists have replied to some of these criticisms here.

Just to add more logs to the fire, other groups have reported that PTC124 seems to be effective in restoring read-through for similar nonsense mutations in other genes entirely. But now there's another new paper, this one from a different group at Dundee, claiming that ataluren fails to work through its putative mechanism under a variety of conditions, which would seem to call these results into question as well. Gentamicin works for them, but not PTC124. Here's the new paper's take-away:

In 2007 a drug was developed called PTC124 (latterly known as Ataluren), which was reported to help the ribosome skip over the premature stop, restore production of functional protein, and thereby potentially treat these genetic diseases. In 2009, however, questions were raised about the initial discovery of this drug; PTC124 was shown to interfere with the assay used in its discovery in a way that might be mistaken for genuine activity. As doubts regarding PTC124's efficacy remain unresolved, here we conducted a thorough and systematic investigation of the proposed mechanism of action of PTC124 in a wide array of cell-based assays. We found no evidence of such translational read-through activity for PTC124, suggesting that its development may indeed have been a consequence of the choice of assay used in the drug discovery process.

Now this is a mess, and it's complicated still more by the not-so-impressive performance of PTC124 in the clinic. Here's the Nature Biotechnology article's summary:

In 2008, PTC secured an upfront payment of $100 million from Genzyme (now part of Paris-based Sanofi) in return for rights to the product outside the US and Canada. But the deal was terminated following lackluster data from a phase 2b trial in DMD. Subsequently, a phase 3 trial in cystic fibrosis also failed to reach statistical significance. Because the drug showed signs of efficacy in each indication, however, PTC pressed ahead. A phase 3 trial in DMD is now underway, and a second phase 3 trial in cystic fibrosis will commence shortly.

It should be noted that the read-through drug space has other players in it as well. Prosensa/GSK and Sarepta are in the clinic with competing antisense oligonucleotides targeting a particular exon/mutation combination, although this would probably take them into other subpopulations of DMD patients than the ones PTC is looking to treat.

If they were to see real efficacy, PTC could have the last laugh here. To get back to the first paragraph of this post, if a compound works, well, the big argument has just been won. The company has in vivo data to show that some gene function is being restored, as well they should (you don't advance a compound to the clinic just on the basis of in vitro assay numbers, no matter how they look). It could be that the compound is a false positive in the original assay but manages to work through some other mechanism, although no one knows what that might be.

But as you can see, opinion is very much divided about whether PTC124 works at all in the real clinical world. If it doesn't, then the various groups detailing trouble with the early assays will have a good case that this compound never should have gotten as far as it did.

Comments (25) + TrackBacks (0) | Category: Biological News | Business and Markets | Drug Assays | Drug Development

September 11, 2013

Merck Does Something. Or Not. Maybe Something Else Instead.

Email This Entry

Posted by Derek

There's some Merck news today, via FiercePharma. First off, their R&D head Roger Perlmutter sat down with some of the most prominent analysts for a chat about the company's direction - and they came out with two completely different stories. Big changes? Minor ones? I wonder if people were taking away what they wanted to hear to confirm what they'd already decided Merck should be doing. Seamus Fernandez, for example, apparently came away saying that he thought a major R&D restructuring was inevitable, but that's what he thought before he sat down. This sort of thing is worth keeping in mind when you hear some Wall St. types (particularly on the "sell side") going on authoritatively about what's happening inside a given company.

The other news is that Merck is handing off one of their oncology programs (the WEE-1 kinase inhibitor MRK-1775) to AstraZeneca. If I were a mean person given to saying unkind things, I'd say that this drug is at least now going to get a lot more money spent on it, because that's what AZ has been famous for. But I'll stick with what John Carroll had to say on Twitter: "So if $MRK thought 1775 was any damn good, would they outlicense it to $AZN?"

Comments (19) + TrackBacks (0) | Category: Business and Markets | Cancer | Drug Development

September 3, 2013

A Drug Delivery Method You Haven't Thought Of

Email This Entry

Posted by Derek

Word came last week that Google Ventures is funding a small outfit called Rani Biotechnology. They're trying to solve a small problem that's caught the attention of a few people now and then: making large protein drugs orally available.

Well, Google has a reputation for bankrolling some long-shot ideas, and any attempt to make proteins available this way is, by definition, a long shot. On Twitter, Andy Biotech sent around a link to this patent, which seems to have some of Rani's approach in it. If so, it's a surprising mixture of high and low tech. The drugs would be administered in a capsule, carefully formulated both chemically and physically. And when the capsule gets down into the small intestine, according to the patent, a spring-loaded mechanism is signaled to release and tiny needles pop out of its sides, delivering the protein cargo through the intestinal wall. To me, this looks less like an oral dosage than an i.v. that you swallow.

But getting that to work is probably easier than figuring out a way to make proteins survive in the gut. One can think of numerous ways that this could go wrong, but in drug development, there are always numerous ways that things could go wrong. The proof will be in the clinic, and I'm glad that someone is willing to pay to find out if this works.

Comments (55) + TrackBacks (0) | Category: Drug Development

August 27, 2013

Promise That Didn't Pan Out

Email This Entry

Posted by Derek

Luke Timmerman has a good piece on a drug (Bexxar) that looked useful, had a lot of time, effort, and money spent on it, but still never made any real headway. GSK has announced that they're ceasing production, and if there are headlines about that, I've missed them. Apparently there were only a few dozen people in the entire US who got the drug at all last year.

When you look at the whole story, there’s no single reason for failure. There were regulatory delays, manufacturing snafus, strong competition, reimbursement challenges, and issues around physician referral patterns.

If this story sounds familiar, it should—there are some striking similarities to what happened more recently with Dendreon’s sipuleucel-T (Provenge). If there’s a lesson here, it’s that cool science and hard medical evidence aren’t enough. When companies fail to understand the markets they are entering, the results can be quite ugly, especially as insurers tighten the screws on reimbursement. If more companies fail to pay proper attention to these issues, you can count on more promising drugs like Bexxar ending up on the industry scrap heap.

Comments (33) + TrackBacks (0) | Category: Business and Markets | Drug Development

August 22, 2013

Too Many Metrics

Email This Entry

Posted by Derek

Here's a new paper from Michael Shultz of Novartis, who is trying to cut through the mass of metrics for new compounds. I cannot resist quoting his opening paragraph, but I do not have a spare two hours to add all the links:

Approximately 15 years ago Lipinski et al. published their seminal work linking molecular properties with oral absorption.1 Since this ‘Big Bang’ of physical property analysis, the universe of parameters, rules and optimization metrics has been expanding at an ever increasing rate (Figure 1).2 Relationships with molecular weight (MW), lipophilicity,3 and 4 ionization state,5 pKa, molecular volume and total polar surface area have been examined.6 Aromatic rings,7 and 8 oxygen atoms, nitrogen atoms, sp3 carbon atoms,9 chiral atoms,9 non-hydrogen atoms, aromatic versus non-hydrogen atoms,10 aromatic atoms minus sp3 carbon atoms,6 and 11 hydrogen bond donors, hydrogen bond acceptors and rotatable bonds12 have been counted and correlated.13 In addition to the rules of five came the rules of 4/40014 and 3/75.15 Medicinal chemists can choose from composite parameters (or efficiency indices) such as ligand efficiency (LE),16 group efficiency (GE), lipophilic efficiency/lipophilic ligand efficiency (LipE17/LLE),18 ligand lipophilicity index (LLEAT),19 ligand efficiency dependent lipophilicity (LELP), fit quality scaled ligand efficiency (LE_scale),20 percentage efficiency index (PEI),21 size independent ligand efficiency (SILE), binding efficiency index (BEI) or surface binding efficiency index (SEI)22 and composite parameters are even now being used in combination.23 Efficiency of binding kinetics has recently been introduced.24 A new trend of anthropomorphizing molecular optimization has occurred as molecular ‘addictions’ and ‘obesity’ have been identified.25 To help medicinal chemists there are guideposts,21 rules of thumb,14 and 26 a property forecast index,27 graphical representations of properties28 such as efficiency maps, atlases,29 ChemGPS,30 traffic lights,31 radar plots,32 Craig plots,33 flower plots,34 egg plots,35 time series plots,36 oral bioavailability graphs,37 face diagrams,28 spider diagrams,38 the golden triangle39 and the golden ratio.40

He must have enjoyed writing that one, if not tracking down all the references. This paper is valuable right from the start just for having gathered all this into one place! But as you read on, you find that he's not too happy with many of these metrics - and since there's no way that they can all be equally correct, or equally useful, he sets himself the task of figuring out which ones we can discard. The last reference in the quoted section below is to the famous "Can a biologist fix a radio?" paper:

While individual composite parameters have been developed to address specific relationships between properties and structural features (e.g. solubility and aromatic ring count) the benefit may be outweighed by the contradictions that arise from utilizing several indices at once or the complexity of adopting and abandoning various metrics depending on the stage of molecular optimization. The average medicinal chemist can be overwhelmed by the ‘analysis fatigue’ that this plethora of new and contradictory tools, rules and visualizations now provide, especially when combined with the increasing number of safety, off-target, physicochemical property and ADME data acquired during optimization efforts. Decision making is impeded when evaluating information that is wrong or excessive and thus should be limited to the absolute minimum and most relevant available.

As Lazebnik described, sometimes the more facts we learn, the less we understand.

And he discards quite a few. All the equations that involve taking the log of potency and dividing by the heavy atom count (HAC), etc., are playing rather loose with the math:

To be valid, LE must remain constant for each heavy atom that changes potency 10-fold. This is not the case as a 15 HAC compound with a pIC50 of 3 does not have the same LE as a 16 HAC compound with a pIC50 of 4 (ΔpIC50 = 1, ΔHAC = 1, ΔLE = 0.07). A 10-fold change in potency per heavy atom does not result in constant LE as defined by Hopkins, nor will it result in a constant SILE, FQ or LLEAT values. These metrics do not mathematically normalize size or potency because they violate the quotient rule of logarithms. To obey this rule and be a valid mathematical function HAC would be subtracted from pIC50 and rendered independent of size and reference potency.

Note that he's not recommending that last operation as a guideline, either. Another conceptual problem with plain heavy atom counting is that it treats all atoms the same, but that's clearly an oversimplification. But dividing by some form of molecular weight is an oversimplification, too: a nitrogen differs from an oxygen by a lot more than that 1 mass unit. (This topic came up here a little while back). But oversimplified or not - heck, mathematically valid or not - the question is whether these things help out enough when used as metrics in the real world. And Shultz would argue that they don't. Keeping LE the same (or even raising it) is supposed to be the sign of a successful optimization, but in practice, LE usually degrades. His take on this is that "Since lower ligand efficiency is indicative of both higher and lower probabilities of success (two mutually exclusive states) LE can be invalidated by not correlating with successful optimization."
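For anyone who wants to check that arithmetic, here's a minimal sketch of the Hopkins definition (LE = 1.37 x pIC50 / HAC) applied to the 15- versus 16-heavy-atom example in the quote above:

```python
# Ligand efficiency per Hopkins: LE = 1.37 * pIC50 / heavy_atom_count
# (roughly kcal/mol of binding energy per heavy atom at ~300 K).
# The complaint: a 10-fold potency gain per added heavy atom does NOT hold LE constant.

def ligand_efficiency(pic50, heavy_atoms):
    return 1.37 * pic50 / heavy_atoms

le_a = ligand_efficiency(pic50=3, heavy_atoms=15)   # ~0.27
le_b = ligand_efficiency(pic50=4, heavy_atoms=16)   # ~0.34
print(f"LE goes from {le_a:.2f} to {le_b:.2f} (delta = {le_b - le_a:+.2f})")
```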

I think that's too much of a leap - because successful drug programs have had their LE go down during the process, that doesn't mean that this was a necessary condition, or that they should have been aiming for that. Perhaps things would have been even better if they hadn't gone down (although I realize that arguing from things that didn't happen doesn't have much logical force). Try looking at it this way: a large number of successful drug programs have had someone high up in management trying to kill them along the way, as have (obviously) most of the unsuccessful ones. That would mean that upper management decisions to kill a program are also indicative of both higher and lower probabilities of success, and can thus be invalidated, too. Actually, he might be on to something there.

Shultz, though, finds that he's not able to invalidate LipE (or LLE), variously known as ligand-lipophilicity efficiency or lipophilic ligand efficiency. That's p(IC50) - logP, which at least follows the way that logarithms of quotients are supposed to work. And it also has been shown to improve during known drug optimization campaigns. The paper has a thought experiment, on some hypothetical compounds, as well as some data from a tankyrase inhibitor series, that seem to show LipE behaving more rationally than other metrics (which sometimes start pointing in opposite directions).

I found the chart below to be quite interesting. It uses the cLogP data from Paul Leeson and Brian Springthorpe's original LLE paper (linked in the above paragraph) to show what change in potency you would expect when you change a hydrogen in your molecule to one of the groups shown if you're going to maintain a constant LipE value. So while hydrophobic groups tend to make things more potent, this puts a number on it. A t-butyl, for example, should make things about 50-fold more potent if it's going to pull its weight as a ball of grease. (Note that we're not talking about effects on PK and tox here, just sheer potency - if you play this game, though, you'd better be prepared to keep an eye on things downstream).
[Chart: expected potency change for various substituents at constant LipE]
On the other end of the scale, a methoxy should, in theory, cut your potency roughly in half. If it doesn't, that's a good sign. A morpholine should be three or four times worse, and if it isn't, then it's found something at least marginally useful to do in your compound's binding site. What we're measuring here is the partitioning between your compound wanting to be in solution, and wanting to be in the binding site. More specifically, since logP is in the equation, we're looking at the difference in the partitioning of your compound between octanol and water, versus its partitioning between the target protein and water. I think we can all agree that we'd rather have compounds that bind because they like something about the active site, rather than just fleeing the solution phase.
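Since LipE is just pIC50 minus cLogP, the chart's numbers fall out of simple arithmetic: a substituent that adds X log units of cLogP has to buy you 10^X in potency just to break even. Here's a minimal sketch - the cLogP increments below are rough values I've chosen for illustration, not the figures from the Leeson/Springthorpe chart:

```python
# LipE (a.k.a. LLE) = pIC50 - cLogP. To keep it constant, any added lipophilicity
# has to be paid for with potency: break-even fold-change = 10 ** delta_clogp.
# The delta_clogp values are rough illustrative increments, not the paper's table.

substituent_dclogp = {
    "methyl":     0.5,
    "t-butyl":    1.7,   # needs ~50-fold potency just to break even
    "chloro":     0.7,
    "methoxy":   -0.3,   # can afford to lose roughly half the potency
    "morpholine": -0.6,  # can afford to lose roughly 4-fold
}

def breakeven_fold_change(delta_clogp):
    """Potency fold-change needed to keep LipE unchanged after a cLogP shift."""
    return 10 ** delta_clogp

for name, dclogp in substituent_dclogp.items():
    fold = breakeven_fold_change(dclogp)
    print(f"{name:>10}: dcLogP {dclogp:+.1f} -> break-even potency change {fold:.1f}x")
```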

So in light of this paper, I'm rethinking my ligand-efficiency metrics. I'm still grappling with how LipE performs down at the fragment end of the molecular weight scale, and would be glad to hear thoughts on that. But Shultz's paper, if it can get us to toss out a lot of the proposed metrics already in the literature, will have done us all a service.

Comments (38) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico | Pharmacokinetics

August 19, 2013

Is The FDA the Problem?

Email This Entry

Posted by Derek

A reader sends along this account of some speakers at last year's investment symposium from Agora Financial. One of the speakers was Juan Enriquez, and I thought that readers here might be interested in his perspective.

First, the facts. According to Enriquez:

Today, it costs 100,000 times less than it once did to create a three-dimensional map of a disease-causing protein

There are about 300 times more of these disease proteins in databases now than in times past

The number of drug-like chemicals per researcher has increased 800 times

The cost to test a drug versus a protein has decreased ten-fold

The technology to conduct these tests has gotten much quicker

Now here’s Enriquez’s simple question:

"Given all these advances, why haven’t we cured cancer yet? Why haven’t we cured Alzheimer’s? Why haven’t we cured Parkinson’s?"

The answer likely lies in the bloated process and downright hostile-to-innovation climate for FDA drug approvals in this day and age...

According to Enriquez, this climate has gotten so bad that major pharmaceuticals companies have begun shifting their primary focus from R&D of new drugs to increased marketing of existing drugs — and mergers and acquisitions.

I have a problem with this point of view, assuming that it's been reported correctly. I'll interpret this as makes-a-good-speech exaggeration, but Enriquez himself has most certainly been around enough to realize that the advances that he speaks of are not, by themselves, enough to lead to a shower of new therapies. That's a theme that has come up on this site several times, as well it might. I continue to think that if you could climb in a time machine and go back to, say, 1980 with these kinds of numbers (genomes sequenced, genes annotated, proteins with solved structures, biochemical pathways identified, etc.), that everyone would assume that we'd be further along, medically, than we really are by now. Surely that sort of detailed knowledge would have solved some of the major problems? More specifically, I become more sure every year that drug discovery groups of that era might be especially taken aback at how the new era of target-based molecular-biology-driven drug research has ended up working out: as a much harder proposition than many might have thought.

So it's a little disturbing to see the line taken above. In effect, it's saying that yes, all these advances should have been enough to release a flood of new therapies, which means that there must be something holding them back (in this case, apparently, the FDA). The thing is, the FDA probably has slowed things down - in fact, I'd say it almost certainly has. That's part of their job, insofar as the slowdowns are in the cause of safety.

And now we enter the arguing zone. On the one side, you have the reductio ad absurdum argument that yes, we'd have a lot more things figured out if we could just go directly into humans with our drug candidates instead of into mice, so why don't we just? That's certainly true, as far as it goes. We would surely kill off a fair number of people doing things that way, as the price of progress, but (more) progress there would almost certainly be. But no one - no one outside of North Korea, anyway - is seriously proposing this style of drug discovery. Someone who agrees with Enriquez's position would regard it as a ridiculous misperception of what they're calling for, designed to make them look stupid and heartless.

But I think that Enriquez's speech, as reported, is the ad absurdum in the other direction. The idea that the FDA is the whole problem is also an oversimplification. In most of these areas, the explosion of knowledge laid out above has not yet led to an explosion of understanding. You'd get the idea that there was this big region of unexplored stuff, and now we've pretty much explored it, so we should really be ready to get things done. But the reality, as I see it, is that there was this big region of unexplored stuff, and we set out to explore it, and found out that it was far bigger than we'd even dreamed. It's easy to get your scale of measurement wrong. It's quite similar to the way that humanity didn't realize just how large the Earth was, then how small it was compared to the solar system (and how off-center), and how non-special our sun was in the immensity of the galaxy, not to mention how many other galaxies there are and how far away they lie. Biology and biochemistry aren't quite on that scale of immensity, but they're plenty big enough.

Now, when I mentioned that we'd surely have killed off more people by doing drug research by the more direct routes, the reply is that we've been killing people off by moving too slowly as well. That's a valid argument. But under the current system, we choose to have people die passively, through mechanisms of disease that are already operating, while under the full-speed-ahead approaches, we might lower that number by instead killing off some others in a more active manner. It's typically human of us to choose the former strategy. The big questions are how many people would die in each category as we moved up and down the range between the two extremes, and what level of each casualty count we'd find "acceptable".

So while it's not crazy to say that we should be less risk-averse, I think it is silly to say that the FDA is the only (or even main) thing holding us back. I think that this has a tendency both to bring on unnecessary anger directed at the agency and to raise unfulfillable hopes in regards to what the industry can do in the near term. Neither of those seems useful to me.

Full disclosure - I've met Enriquez, three years ago at SciFoo. I'd be glad to give him a spot to amplify and extend his remarks if he'd like one.

Comments (40) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Regulatory Affairs

August 14, 2013

A Regeneron Profile

Email This Entry

Posted by Derek

In the spirit of this article about Regeneron, here's a profile in Forbes of the company's George Yancopoulos and Leonard Schleifer. There are several interesting things in there, such as these lessons from Roy Vagelos (when he became Regeneron's chairman after retiring from Merck):

Lesson one: Stop betting on drugs when you won’t have any clues they work until you finish clinical trials. (That ruled out expanding into neuroscience–and is one of the main reasons other companies are abandoning ailments like Alzheimer’s.) Lesson two: Stop focusing only on the early stages of drug discovery and ignoring the later stages of human testing. It’s not enough to get it perfect in a petri dish. Regeneron became focused on mitigating the two reasons that drugs fail: Either the biology of the targeted disease is not understood or the drug does something that isn’t expected and causes side effects.

They're not the only ones thinking this way, of course, but if you're not, you're likely to run into big (and expensive) trouble.

Comments (14) + TrackBacks (0) | Category: Drug Development | Drug Industry History

August 13, 2013

Druggability: A Philosophical Investigation

Email This Entry

Posted by Derek

I had a very interesting email the other day, and my reply to it started getting so long that I thought I'd just turn it into a blog post. Here's the question:

How long can we expect to keep finding new drugs?

By way of analogy, consider software development. In general, it's pretty hard to think of a computer-based task that you couldn't write a program to do, at least in principle. It may be expensive, or may be unreasonably slow, but physical possibility implies that a program exists to accomplish it.

Engineering is similar. If it's physically possible to do something, I can, in principle, build a machine to do it.

But it doesn't seem obvious that the same holds true for drug development. Something being physically possible (removing plaque from arteries, killing all cancerous cells, etc.) doesn't seem like it would guarantee that a drug will exist to accomplish it. No matter how much we'd like a drug for Alzheimer's, it's possible that there simply isn't one.

Is this accurate? Or is the language of chemistry expressive enough that if you can imagine a chemical solution to something, it (in principle) exists. (I don't really have a hard and fast definition of 'drug' here. Obviously all bets are off if your 'drug' is complicated enough to act like a living thing.)

And if it is accurate, what does that say about the long-term prospects for the drug industry? Is there any risk of "running out" of new drugs? Is drug discovery destined to be a stepping-stone until more advanced medical techniques are available?

That's an interesting philosophical point, and one that had never occurred to me in quite that way. I think that's because programming is much more of a branch of mathematics. If you've got a Universal Turing Machine and enough tape to run through it, then you can, in theory, run any program that ever could be run. And any process that can be broken down into handling ones and zeros can be the subject of a program, so the Church-Turing thesis would say that yes, you can calculate it.

But biochemistry is most definitely a different thing, and this is where a lot of people who come into it from the math/CS/engineering side run into trouble. There's a famous (infamous) essay called "Can A Biologist Fix A Radio" that illustrates the point well. The author actually has some good arguments, and some legitimate complaints about the way biochemistry/molecular biology has been approached. But I think that his thesis breaks down eventually, and I've been thinking on and off for years about just where that happens and how to explain what makes things go haywire. My best guess is algorithmic complexity. It's very hard to reduce the behavior of biochemical systems to mathematical formalism. The whole point of formal notation is to express things in the most compact and information-rich way possible, but trying to compress biochemistry in this manner doesn't give you much of an advantage, at least not in the ways we've tried to do it so far.

To get back to the question at hand, let's get philosophical. I'd say that at the most macro level, there are solutions to all the medical problems. After all, we have the example of people who don't have multiple sclerosis, who don't have malaria, who don't have diabetes or pancreatic cancer or what have you. We know that there are biochemical states where these things do not exist; the problem is then to get an individual patient's state back to that situation. Note that this argument does not apply to things like life extension, limb regeneration, and so on: we don't know if humans are capable of these things or not yet, even if there may be some good arguments to be made in their favor. But we know that there are human brains without Alzheimer's.

To move down a level from this, though, the next question is whether there are ways to put a patient's cells and organs back into a disease-free state. In some cases, I think that the answer has to be, for all practical purposes, "No". I tend to think that the later stages of Alzheimer's (for example) are in fact incurable. Neurons are dead and damaged, what was contained in them and in their arrangement is gone, and any repair system can only go so far. Too much information has been lost and too much entropy has been let in. I would like to be wrong about this, but I don't think I am.

But for less severe states and diseases, you can imagine various interventions - chemical, surgical, genetic - that could restore things. So the question here becomes whether there are drug-like solutions. The answer is tricky. If you look at a biochemical mechanism and can see that there's a particular pathway involving small molecules, then certainly, you can say that there could be a molecule to be found as a treatment, even if we haven't found it yet. But the first part of that last sentence has to be unpacked.

Take diabetes. Type I diabetes is proximately caused by lack of insulin, so the solution is to take insulin. And that works, although it's certainly not a cure, since you have to take insulin for the rest of your life, and it's impossible to take it in a way that perfectly mimics the way your body would administer it, etc. A cure would be to have working beta-cells again that respond just the way they're supposed to, and that's less likely to be achieved through a drug therapy. (Although you could imagine some small molecule that affects a certain class of stem cell, causing it to start the program to differentiate into a fully-formed beta cell, and so on). You'd also want to know why the original population of cells died in the first place, and how to keep that from happening again, which might also take you to some immunological and cell-cycle pathways that could be modulated by drug molecules. But all of these avenues might just as easily take you into genetically modified cloned cell lines and surgical implantation, too, rather than anything involving small-molecule chemistry.

Here's another level of complexity, then: insulin is certainly a drug, but it's not a small molecule of the kind I'd be making. Is there a small molecule that can replace it? You'd do very well with that indeed, but the answer (I think) is "probably not". If you look at the receptor proteins that insulin binds to, the recognition surfaces that are used are probably larger than small molecules can mimic. No one's ever found a small molecule insulin mimetic, and I don't think anyone is likely to. (On the other hand, if you're trying to disrupt a protein-protein interaction, you have more hope, although that's still an extremely difficult target. We can disrupt things a lot more easily than we can make them work). Even if you found a small-molecule insulin, you'd be faced with the problem of dosing it appropriately, which is no small challenge for a tightly and continuously regulated system like that one. (It's no small challenge for administering insulin itself, either).

And even for mechanisms that do involve small-molecule signaling, like the G-protein coupled receptors, there are still things to worry about. Take schizophrenia. You can definitely see problems with neural systems in the brain when you study that disease, and these neurons respond to, among other things, small-molecule neurotransmitters that the body makes and uses itself - dopamine, serotonin, acetylcholine and others. There are a certain number of receptors for each of those, and although we don't have all the combinations yet, I could imagine, on a philosophical level, that we could eventually have selective drugs that are agonists, antagonists, partial agonists, inverse agonists, what have you at all the subtypes. We have quite a few of them now, for some of the families. And I can even imagine that we could eventually have most or all of the combinations: a molecule that's a dopamine D2 agonist and a muscarinic M4 antagonist, all in one, and so on and so on. That's a lot more of a stretch, to be honest, but I'll stipulate that it's possible.

So you have them all. Now, which ones do you give to help a schizophrenic? We don't know. We have guesses and theories, but most of them are surely wrong. Every biochemical theory about schizophrenia is either wrong or incomplete. We don't know what goes wrong, or why, or how, or what might be done to bend things back in the right direction. It might be that we're in the same area as Alzheimer's: perhaps once a person's brain has developed in such a way that it slips into schizophrenia, there is no way at all to rewire things, in the same way that we can't ungrow a tree in order to change the shape of its canopy. I've no idea, and we're going to know a lot more about the brain by the time we can answer that one.

So one problem with answering this question is that it's bounded not so much by chemistry as by biology. Lots and lots of biology, most of it unknown. But thinking in terms of sheer chemistry is interesting, too. Consider "The Library of Babel", the famous story by Jorge Luis Borges. It takes place in some sort of universe that is no more (and no less) than a vast library containing every possible book that can be produced with a 25-character set of letters and punctuation marks. This is, as a bit of reflection will show, a very, very large number, one large enough to contain everything that can possibly be written down. And all the slight variations. And all the misprints. And all the scrambled coded versions of everything, and so on and so on. (W. v. O. Quine extended this idea to binary coding, which brings you back to computability).

Now think about the universe of drug-like molecules. It is also very large, although it is absolutely insignificant compared to the terrifying Library of Babel. (It's worth noting that the Library contains all of the molecules that can ever exist, coded in SMILES strings - that thought just occurred to me at this very moment, and gives me the shivers). The universe of proteins works that way, too - an alphabet of twenty-odd letters for amino acids gives you the exact same situation as the Library, and if you imagine some hideous notation for coding in all the folding variants and post-translational modifications, all the proteins are written down as well.
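
Just to put rough numbers on that comparison, here's a quick back-of-the-envelope sketch. The book dimensions come from Borges' story; the 100-residue peptide and the oft-quoted ~10^60 estimate for drug-like chemical space are illustrative assumptions, not figures from any particular paper:

```python
# Back-of-the-envelope combinatorics, purely to illustrate the scales involved.
from math import log10

# Borges' Library: 25 symbols, books of 410 pages x 40 lines x ~80 characters.
chars_per_book = 410 * 40 * 80
babel_exponent = chars_per_book * log10(25)   # number of books = 25**chars_per_book
print(f"Library of Babel: about 10^{babel_exponent:,.0f} books")

# Protein space: 20 amino acids, a modest 100-residue chain (an assumption).
peptide_exponent = 100 * log10(20)
print(f"100-residue peptides: about 10^{peptide_exponent:.0f} sequences")

# Drug-like small-molecule space is commonly guessed at ~10^60 structures:
# unimaginably big, and still a rounding error next to either number above.
```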

These, then, encompass every chemical compound up to some arbitrary size, and the original question is, is this enough? Are there questions for which none of these words are the answer? That takes you into even colder and deeper philosophical waters. Wittgenstein (among many others) wondered the same thing about our own human languages, and seems to have decided that there are indeed things that cannot be expressed, and that this marks the boundary of philosophy itself. Famously, his Tractatus ends with the line "Wovon man nicht sprechen kann, darüber muss man schweigen": whereof we cannot speak, we must pass over in silence.

We're not at that point in the language of chemistry and pharmacology yet, and it's going to be a long, long time before we ever might be. Just the fact, though, that computability seems like so much more reasonable a proposition in computer science than druggability does in biochemistry tells you a great deal about how different the two fields are.

Update: On the subject of computability, I'm not sure how I missed the chance to bring Gödel's Incompleteness Theorem into this, just to make it a complete stewpot of math and philosophy. But the comments to this post point out that even if you can write a program, you cannot be sure whether it will ever finish the calculation. This Halting Problem is one of the first things ever to be proved formally undecidable, and the issues it raises are very close to those explored by Gödel. But as I understand it, this is decidable for a machine with a finite amount of memory, running a deterministic program. The problem is, though, that it still might take longer than the expected lifetime of the universe to "halt", which leaves you, for, uh, practical purposes, in pretty much the same place as before. This is getting pretty far afield from questions of druggability, though. I think.
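
(For the curious, the finite-memory case really does reduce to cycle detection: a deterministic machine with finitely many possible states either halts or eventually revisits a state, and a revisited state means it will loop forever. Here's a toy sketch - the step function and state encoding are made up for illustration, and the bookkeeping is exactly the part that can outgrow the lifetime of the universe in practice.)

```python
# Toy halting decider for a deterministic process with a finite state space.
# The particular step function below is an arbitrary example, not anything real.

def halts(step, start_state):
    """Return True if the process halts, False if it cycles forever.

    `step(state)` returns the next state, or None when the process halts.
    With finitely many states, the trajectory must either hit None or
    revisit a state; a revisit means it can never halt.
    """
    seen = set()
    state = start_state
    while state is not None:
        if state in seen:
            return False          # entered a cycle: runs forever
        seen.add(state)           # this set is the practical cost of deciding
        state = step(state)
    return True

def step(s):                      # states are integers mod 8; halt at 0
    return None if s == 0 else (3 * s + 1) % 8

print(halts(step, 1))             # True:  1 -> 4 -> 5 -> 0 (halts)
print(halts(step, 2))             # False: 2 -> 7 -> 6 -> 3 -> 2 (cycle)
```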

Comments (40) + TrackBacks (0) | Category: Drug Development | Drug Industry History | In Silico

August 12, 2013

How Much to Develop a Drug? An Update.

Email This Entry

Posted by Derek

I've referenced this Matthew Herper piece on the cost of drug development several times over the last few years. It's the one where he totaled up pharma company R&D expenditures (from their own financial statements) and then just divided that by the number of drugs produced. Crude, but effective - and what it said was that some companies were spending ridiculous, unsustainable amounts of money for what they were getting back.
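
The arithmetic really is that crude. Here's a minimal sketch of the calculation, with entirely made-up company names and numbers; Herper's actual analysis uses each company's reported R&D spend and its FDA approval count over the window:

```python
# Herper-style cost-per-drug arithmetic. All names and figures below are
# invented for illustration; the real analysis uses reported R&D spend
# and approved-drug counts over the same time window.
from statistics import median

companies = {          # name: (R&D spend over the window in $bn, drugs approved)
    "BigPharmaA": (60.0, 6),
    "BigPharmaB": (38.0, 4),
    "MidCapC":    (9.0, 3),
    "SingletonD": (0.4, 1),
    "SingletonE": (0.3, 1),
}

for name, (spend, approvals) in sorted(companies.items(),
                                       key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{name:11s} ${spend / approvals:5.1f} bn per approved drug")

# For one-drug companies, total spend is the cost per drug.
singleton_costs = [spend for spend, n in companies.values() if n == 1]
print(f"Median for one-drug companies: ${median(singleton_costs):.2f} bn")
```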

Now he's updated his analysis, looking at a much longer list of companies (98 of them!) over the past ten years. Here's the list, in a separate post. Abbott is at the top, but that's misleading, since they spent R&D money on medical devices and the like, whose approvals don't show up in the denominator.

But that's not the case for #2, Sanofi: 6 drugs approved during that time span, at a cost, on their books, of ten billion dollars per drug. Then you have (as some of you will have guessed) AstraZeneca - four drugs at 9.5 billion per. Roche, Pfizer, Wyeth, Lilly, Bayer, Novartis and Takeda round out the top ten, and even by that point we're still looking at six billion a whack. One large company that stands out, though, is Bristol-Myers Squibb, coming in at #22, at 3.3 billion per drug. The bottom part of the list is mostly smaller companies, often with one approval in the past ten years, and that one done reasonably cheaply. But three others that stand out as having spent significant amounts of money, while getting something back for it, are Genzyme, Shire, and Regeneron. Genzyme, of course, has now been subsumed in that blazing bonfire of R&D cash known as Sanofi, so that takes care of that.

Sixty-six of the 98 companies studied launched only one drug this decade. The costs borne by these companies can be taken as a rough estimate of what it takes to develop a single drug. The median cost per drug for these singletons was $350 million. But for companies that approve more drugs, the cost per drug goes up – way up – until it hits $5.5 billion for companies that have brought to market between eight and 13 medicines over a decade.

And he's right on target with the reason why: the one-approval companies on the list were, for the most part, lucky the first time out. They don't have failures on their books yet. But the larger organizations have had plenty of those to go along with the occasional successes. You can look at this situation more than one way - if the single-drug companies are an indicator of what it costs to get one drug discovered and approved, then the median figure is about $350 million. But keep in mind that these smaller companies tend to go after a different subset of potential drugs. They're a bit more likely to pick things with a shorter, more defined clinical path, even if there isn't as big a market at the end, in order to have a better story for their investors.

Looking at what a single successful drug costs, though, isn't a very good way to prepare for running a drug company. Remember, the only small companies on this list are the ones that have succeeded, and many, many more of them spent all their money on their one shot and didn't make it. That's what's reflected in the dollars-per-drug figures for the larger organizations, that and the various penalties for being a huge organization. As Herper says:

Size has a cost. The data support the idea that large companies may spend more per drug than small ones. Companies that spent more than $20 billion in R&D over the decade spent $6.3 billion per new drug, compared to $2.8 billion for those that had budgets of between $5 billion and $10 billion. Some CEOs, notably Christopher Viehbacher at Sanofi, have faced low R&D productivity in part by cutting the budget. This may make sense in light of this data. But it is worth noting that the bigger firms brought twice as many drugs to market. It still could be that the difference between these two groups is due to smaller companies not bearing the full financial weight of the risk of failure.

There are other factors that kick these numbers around a bit. As Herper points out, there's a tax advantage for R&D expenditures, so there's no incentive to under-report them (but there's also an IRS to keep you from going wild over-reporting them, too). And some of the small companies on the list picked up their successes by taking on failed programs from larger outfits, letting them spend a chunk of R&D cash on the drugs beforehand. But overall, the picture is just about as grim as you'd have figured, if not a good deal more so. Our best hope is that this is a snapshot of the past, and not a look into the future. Because we can't go on like this.

Comments (33) + TrackBacks (0) | Category: Drug Development | Drug Industry History

August 9, 2013

An Interview With A GSK Shanghai Scientist

Email This Entry

Posted by Derek

Here's an interview with Liu Xuebin, formerly of GlaxoSmithKline in China. That prospect should perk up the ears of anyone who's been following the company's various problems and scandals in that country.

Liu Xuebin recalls working 12-hour shifts and most weekends for months, under pressure to announce research results that would distinguish his GlaxoSmithKline Plc (GSK) lab in China as a force in multiple sclerosis research.
It paid off -- for a while. Nature Medicine published findings about a potential new MS treatment approach in January 2010 and months later Liu was promoted to associate director of Glaxo’s global center for neuro-inflammation research in Shanghai. Two months ago, his career unraveled. An internal review found data in the paper was misrepresented. Liu, 45, who stands by the study, was suspended from duty on June 8 and quit two days later.

Liu was the first author on the disputed paper, but he says that he stands by it, and opposed a retraction (only he and one other author, out of 18, did so). He had been at the NIH for several years before being hired back to Shanghai by Glaxo, which turned out to be something of a change:

“This was my first job in industry and there was a very different culture,” Liu said behind thick, rimless glasses and dressed in a short-sleeve checked shirt tucked neatly into his belted trousers. “I was also not experienced with compliance back then, and we didn’t pay enough attention to things such as recording of reports from our collaborators.”

There was also a culture in which Glaxo scientists were grouped into competitive teams, known as discovery performance units, which vied internally for funds every three years, he said. Those who failed to meet certain targets risked being disbanded.

What I find odd is Liu's emphasis on publishing, and publishing first. That seems like a very academic mindset - I have to tell you, over my time in industry, rarely have I ever felt a sense of urgency to publish my results in a journal. And even those exceptions have been for other reasons, usually the "If we're going to write this stuff up, now's the time" sort. Never have I felt that we were racing to get something into, say, Nature Medicine before someone else did. Getting something patented before someone else, into the clinic before someone else? Oh, yes indeed. But not into some journal.

But neither have I been part of a far-flung research site, on which a lot of money had been spent, trying to show that it was all worthwhile. Maybe that's the difference. Even so, if the results that the Shanghai group got were really important for an approach to multiple sclerosis therapy, that's all the more reason why the findings should have spoken for themselves inside the company (and been the subject of immediate further development, too). We don't have to get Nature Medicine (or whoever) to validate things for us: "Oh, wow, that stuff must be real, the journal accepted our paper". A company doesn't demonstrate that it finds something valuable by sending it out to a big-name journal, at least not at first: it does that by spending more time and money on the idea.

But Liu doesn't talk the way that I would expect in this article, and I feel sure that the Bloomberg reporter on this piece didn't pick up on it. There's no "We delivered a new MS program, we validated a whole new group of drug targets, we identified a high-profile clinical candidate that went immediately into development". That's how someone in drug R&D would put it. Not "We were racing to publish our results". It's all quite odd.

Comments (18) + TrackBacks (0) | Category: Drug Development | The Dark Side

July 22, 2013

The NIH's Drug Repurposing Program Gets Going

Email This Entry

Posted by Derek

Here's an update on the NIH's NCATS program to repurpose failed clinical candidates from the drug industry. I wrote about this effort here last year, and expressed some skepticism. It's not that I think that trying drugs (or near-drugs) for other purposes is a bad idea prima facie, because it isn't. I just wonder about the way the NIH is talking about this, versus its chances for success.

As was pointed out last time this topic came up, the number of failed clinical candidates involved in this effort is dwarfed by the number of approved compounds that could also be repurposed - and have, in fact, been looked at for years for just that purpose. The success rate is not zero, but it has not been a four-lane shortcut to the promised land, either. And the money involved here ($12.7 million split between nine grants) is, as that Nature piece correctly says, "not much". Especially when you're going after something like Alzheimer's:

Strittmatter’s team is one of nine that won funding last month from the NIH’s National Center for Advancing Translational Sciences (NCATS) in Bethesda, Maryland, to see whether abandoned drugs can be aimed at new targets. Strittmatter, a neurobiologist at Yale University in New Haven, Connecticut, hopes that a failed cancer drug called saracatinib can block an enzyme implicated in Alzheimer’s. . .

. . .Saracatinib inhibits the Src family kinases (SFKs), enzymes that are commonly activated in cancer cells, and was first developed by London-based pharmaceutical company AstraZeneca. But the drug proved only marginally effective against cancer, and the company abandoned it — after spending millions of dollars to develop it through early human trials that proved that it was safe. With that work already done, Strittmatter’s group will be able to move the drug quickly into testing in people with early-stage Alzheimer’s disease.

The team plans to begin a 24-person safety and dosing trial in August. If the results are good, NCATS will fund the effort for two more years, during which the scientists will launch a double-blind, randomized, placebo-controlled trial with 159 participants. Over a year, the team will measure declines in glucose metabolism — a marker for progression of Alzheimer’s disease — in key brain regions, hoping to find that they have slowed.

If you want some saracatinib, you can buy some, by the way (that's just one of the suppliers). And since AZ has already taken this through Phase I, the chances of it passing another Phase I are very good indeed. I will not be impressed by any press releases at that point. The next step, the Phase IIa with 159 people, is as far as this program is mandated to go. But how far is that? One year is not very long in a population of Alzheimer's patients, and 159 patients is not all that many in a disease that heterogeneous. And the whole trial is looking at a secondary marker (glucose metabolism) which (to the best of my knowledge) has not demonstrated any clinical utility as a measure of efficacy for the disease. From what I know about the field, getting someone at that point to put up the big money for larger trials will not be an easy sell.
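
To make the 159-patient worry a little more concrete, here's a rough power calculation under a normal approximation for a two-arm comparison of a continuous endpoint (such as a change in glucose metabolism). The effect sizes and the roughly-80-per-arm split are assumptions picked purely for illustration, not anything from the actual trial design:

```python
# Rough power estimate for a two-arm trial, normal approximation.
# Effect sizes (Cohen's d) and the ~80-per-arm split are illustrative
# assumptions, not numbers from the NCATS-funded study.
from math import sqrt
from scipy.stats import norm

def power_two_arm(effect_size, n_per_arm, alpha=0.05):
    """Approximate power to detect a standardized mean difference."""
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(effect_size * sqrt(n_per_arm / 2) - z_crit)

for d in (0.2, 0.3, 0.5):   # small, smallish, and medium effects
    print(f"d = {d}: power ~ {power_two_arm(d, n_per_arm=79):.0%}")

# Only a medium-sized effect comes out comfortably detectable at this size;
# a heterogeneous population and a noisy one-year endpoint push the other way.
```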

I understand the impulse to go after Alzheimer's - who dares, wins, eh? But given the amount of money available here, I think the chances for success would be better against almost any other disease. It is very possible to take a promising-looking Alzheimer's candidate all the way through a multi-thousand-patient multiyear Phase III and still wipe out - ask Eli Lilly, among many others. You'd hope that at least a few of them are in areas where there's a shorter, more definitive clinical readout.

Here's the list, and here's the list of all the compounds that have been made available to the whole effort so far. Update: structures here. The press conference announcing the first nine awards is here. The NIH has not announced what the exact compounds are for all the grants, but I'm willing to piece it together myself. Here's what I have:

One of them is saracatinib again, this time for lymphangioleiomyomatosis. There's also an ER-beta agonist being looked at for schizophrenia, a J&J/Janssen nicotinic allosteric modulator for smoking cessation, and a Pfizer ghrelin antagonist for alcoholism (maybe from this series?). There's a Sanofi compound for Duchenne muscular dystrophy, which the NIH has studiously avoided naming, although it's tempting to speculate that it's riferminogene pecaplasmide, a gene-therapy vector for FGF1. But Genetic Engineering News says that there are only seven compounds, with a Sanofi one doubling up as well as the AZ kinase inhibitor, so maybe this one is the ACAT inhibitor below. Makes more sense than a small amount of money trying to advance a gene therapy approach, for sure.

There's an endothelin antagonist for peripheral artery disease. Another unnamed Sanofi compound is being studied for calcific aortic valve stenosis, and my guess is that it's canosimibe, an ACAT inhibitor, since that enzyme has recently been linked to stenosis and heart disease. Finally, there's a Pfizer glycine transport inhibitor being looked at for schizophrenia, which seems a bit odd, because I was under the impression that this compound had already failed in the clinic for that indication. They appear to have some other angle.

So there you have it. I look forward to seeing what comes of this effort, and also to hearing what the NIH will have to say at that point. We'll check in when the time comes!

Update: here's more from Collaborative Chemistry. And here's a paper they published on the problems of identifying compounds for initiatives like this:

In particular, it is notable that NCATS provides on its website [31] only the code number, selected international non-proprietary names (INN) and links to more information including mechanism of action, original development indication, route of administration and formulation availability. However, the molecular structures corresponding to the company code numbers were not included. Although we are highly supportive of the efforts of NCATS to promote drug repurposing in the context of facilitating and funding proposals, we find this omission difficult to understand for a number of reasons. . .

They're calling for the NIH (and the UK initiative in this area as well) to provide real structures and IDs for the compounds they're working with. It's hard to argue against it!

Comments (8) + TrackBacks (0) | Category: Academia (vs. Industry) | Clinical Trials | Drug Development

July 19, 2013

Salary Freeze at Lilly

Email This Entry

Posted by Derek

We now return to our regularly scheduled program around here - or at least, Eli Lilly is now returning to theirs. The company announced that they're freezing salaries for most of the work force, in an attempt to save hundreds of millions of dollars in advance of their big patent expirations. Some bonuses will be reduced as well, they say, but that leaves a lot of room. Higher-ups don't look for increases in base pay as much as they look for bonuses, options, and restricted shares (although, to be fair, these are often awarded as a per cent of salary).

‘‘This action is necessary to withstand the impact of upcoming patent expirations and to support the launch of our large phase III pipeline,’’ Chief Executive Officer John Lechleiter, 59, said in a letter to employees today, a copy of which was obtained by Bloomberg. ‘‘The current situation requires us to take the appropriate action now to secure our company’s future. We can’t allow ourselves to let up and fail to make the tough choices.”

Lechleiter himself has not had a raise since 2010, it appears, although I'm not sure if his non-salary compensation follows the same trend. If anyone has the time to dig through the company's last few proxy statements, feel free, but actually figuring out what a chief executive is really paid is surprisingly difficult. (I remember an article a few years ago where several accountants and analysts were handed the same batch of SEC filings and all of them came out with different compensation numbers).

But there's no doubt that Lilly is in for it, something that has been clear for some time now. The company's attempts to shore up its clinical pipeline haven't gone well, and it looks like (more and more) they're putting a lot of their hopes on a success in Alzheimer's. If they see anything, that will definitely turn the whole situation around - between their diagnostic branch and a new therapeutic, they'll own the field, and a huge field it is. But the odds of this happening are quite low. The most likely outcome, it seems to me, is equivocal data that will be used to put pressure on the FDA, etc., to approve something, anything, for Alzheimer's.

It's worth remembering that it wasn't very long ago at all that the higher-ups at Lilly were telling everyone that all would be well, that they'd be cranking out two big new drugs a year by now. Hasn't happened. Since that 2010 article, they've had pretty much squat - well, Jentadueto, which is Boehringer Ingelheim's linagliptin, which Lilly is co-marketing, with metformin added. Earlier this year, they were talking up plans for five regulatory submissions in the near future, but that figure is off now that enzastaurin has already bombed in Phase III. Empagliflozin and ramucirumab are still very much alive, but will be entering crowded markets if they make it through. Dulaglutide is holding up well, though.

But will these be enough to keep Lilly from getting into trouble? That salary freeze is your answer: no, they will not. All the stops must be pulled out, and the ones after this will be even less enjoyable.

Comments (19) + TrackBacks (0) | Category: Business and Markets | Drug Development

Good Advice: Get Lost!

Email This Entry

Posted by Derek

I thought everyone could use something inspirational after the sorts of stories that have been in the news the last few days. Here's a piece at FierceBiotech on Regeneron, a company that's actually doing very well and expanding. And how have they done it?

Regeneron CEO Dr. Leonard "Len" Schleifer, who founded the company in 1988, says he takes pride in the fact that his team is known for doing "zero" acquisitions. All 11 drugs in the company's clinical-stage pipeline stem from in-house discoveries. He prefers a science-first approach to running a biotech company, hiring Yancopoulos to run R&D in 1989, and he endorsed a 2012 pay package for the chief scientist that was more than twice the size of his own compensation last year.

Scientists run Regeneron. Like Yancopoulos, Schleifer is an Ivy League academic scientist turned biotech executive. Regeneron gained early scientific credibility with a 1990 paper in the journal Science on cloning neurotrophin factor, a research area that was part of a partnership with industry giant Amgen. Schleifer has recruited three Nobel Prize-winning scientists to the board of directors, which is led by long-time company Chairman Dr. P. Roy Vagelos, who had a hand in discovering the first statin and delivering a breakthrough treatment for a parasitic cause of blindness to patients in Africa.

"I remember these people from Pfizer used to go around telling us, 'You know, blockbusters aren't discovered, they're made,' as though commercial people made the blockbuster," Schleifer said in an interview. "Well, get lost. Science, science, science--that's what this business is about."

I don't know about you, but that cheers me up. That kind of attitude always does!

Comments (10) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

July 17, 2013

The GSK Jackpot

Email This Entry

Posted by Derek

Well, this got my attention: according to the Sunday Times, GlaxoSmithKline is preparing to hand out hefty bonus payments to scientists if they have a compound approved for sale. Hefty, in this context, means up to several million dollars. The earlier (and much smaller) payouts for milestones along the way will disappear, apparently, to be replaced by this jackpot.

The article says that "The company will determine who is entitled to share in the payout by judging which staff were key to its discovery and development", and won't that be fun? In Germany, the law is that inventors on a corporate patent do get a share of the profits, which can be quite lucrative, but it means that there are some very pointed exchanges about just who gets to be an inventor. The prospect of million-dollar bonuses will be very welcome, but will not bring out the best in some people, either. (It's not clear to me, though, if these amounts are to be split up among people somehow, or if single individuals can possibly expect that much).

John LaMattina has some thoughts on this idea here. He's also wondering how to assign credit:

I am all for recognizing scientists in this way. After all, they must be successful in order for a company the size of GSK to have a sustaining pipeline. However, the drug R&D process is really a team effort and not driven by an individual. The inventor whose name is on the patent is generally the chemist or chemists who designed the molecule that had the necessary biological activity. Rarely, however, are chemists the major contributor to the program’s success. Oftentimes, it is a biologist who conceives the essence of the program by the scientific insight he or she might have. The discovery of Pfizer’s Xeljanz is such a case. There have been major classes of drugs that have been saved by toxicologists who ran insightful animal experiments to explain aberrant events in rats as was done by Merck with both the statins and proton-pump inhibitors – two of the biggest selling classes of drugs of all time.

On occasion, the key person in a drug program is the process chemist who has designed a synthesis of the drug that is amenable to the large scales of material needed to conduct clinical trials. Clinical trial design can also be crucial, particularly when studying a drug with a totally new mechanism of action. A faulty trial design can kill any program. Even a nurse involved in the testing of a drug can make the key discovery, as happened in Pfizer’s phase 1 program with Viagra, where the nurse monitoring the patients noticed that the drug was enhancing blood flow to an organ other than the heart. To paraphrase Hillary Clinton, it takes a village to discover and develop a drug.

You could end up with a situation where the battery is arguing with the drive shaft, both of whom are shouting at the fuel pump and refusing to speak to the tires, all because there was a reward for whichever one of them was the key to getting the car to go down the driveway.

There's another problem - getting a compound to go all the way to the market involves a lot of luck as well. No one likes to talk about that very much - it's in everyone's interest to show how it was really due to their hard work and intelligence - but equal amounts of hard work and brainpower go into projects that just don't make it. Those are necessary, but not sufficient. So if GSK is trying to put this up as an incentive, it's only partially coupled to factors that the people it's aimed at can influence.

And as LaMattina points out, the time delay in getting drugs approved is another factor. If I discover a great new compound today, I'll be lucky to see it on the market by, say, 2024 or so. I have no objection to someone paying me a million dollars on that date, but it won't have much to do with what I've been up to in the interim. And in many cases, some of the people you'd want to reward aren't even with the company by the time the drug makes it through, anyway. So while I cannot object to drug companies wanting to hand out big money to their scientists, I'm not sure what it will accomplish.

Comments (71) + TrackBacks (0) | Category: Business and Markets | Drug Development | Who Discovers and Why

July 11, 2013

The Last PPAR Compound?

Email This Entry

Posted by Derek

Roche has announced that they're halting trials of aleglitazar, a long-running investigational drug in their diabetes portfolio. I'm noting this because I think that this might be the absolute last of the PPAR ligands to fail in the clinic. And boy howdy, has it been a long list. Merck, Lilly, Kyorin, Bristol-Myers Squibb, Novo Nordisk, GlaxoSmithKline, and Bayer are just the companies I know right off the top of my head that have had clinical failures in this area, and I'm sure that there are plenty more. Some of those companies (GSK, for sure) have had multiple clinical candidates go down, so the damage is even worse than it appears.

That's why I nominated this class in the Clinical Futility Awards earlier this summer. Three PPAR compounds actually made it to market, but the record has not been happy there, either. Troglitazone was pulled early, Avandia (rosiglitazone) has (after a strong start) been famously troubled, and Actos (pioglitazone) has its problems, too.

The thing is, no one knows about all this, unless they follow biomedical research in some detail. Uncounted billions have been washed through the grates; years and years of work involving thousands of people has come to nothing. The opportunity costs, in retrospect, are staggering. So much time, effort, and money could have been spent on something else, but there was no way to know that without spending it all. There never really is.

I return to this theme around here every so often, because I think it's an important one. The general public hears about the drugs that we get approved, because we make a big deal out of them. But the failures, for the most part, are no louder than the leaves falling from the trees. They pass unnoticed. Most people never knew about them at all, and the people who did know would rather move on to something else. But if you don't realize how many of these failures there are, and how much they cost, you can get a completely mistaken view of drug discovery. Sure, look at the fruit on the branches, on those rare occasions when some appears. But spare a glance at that expensive layer of leaves on the ground.

Comments (31) + TrackBacks (0) | Category: Clinical Trials | Diabetes and Obesity | Drug Development

June 19, 2013

The Drug Industry and the Obama Administration

Email This Entry

Posted by Derek

Over at Forbes, John Osborne adds some details to what has been apparent for some time now: the drug industry seems to have no particular friends inside the Obama administration:

Earlier this year I listened as a recently departed Obama administration official held forth on the industry and its rather desultory reputation. . .the substance of the remarks, and the apparent candor with which they were delivered, remain fresh in my mind, not least because of the important policy implications that the comments reflect.

. . .In part, there’s a lingering misimpression as to how new medicines are developed. While the NIH and its university research grantees make extraordinary discoveries, it is left to for-profit pharmaceutical and biotechnology companies to conduct the necessary large scale clinical studies and obtain regulatory approval prior to commercialization. Compare the respective annual spending totals: the NIH budget is around $30 billion, and the industry spends nearly double that amount. While the administration has great affection for universities, non-profit patient groups and government researchers (and it was admirably critical of the sequester’s meat cleaver impact on government sponsored research programs), it does not credit the essential role of industry in bringing discoveries from the bench to the bedside.

Terrific. I have to keep reminding myself how puzzled I was when I first came across the "NIH and universities discover all the drugs" mindset, but repeated exposures to it over the last few years have bred antibodies. If anyone from the administration would like to hear what someone who is not a lobbyist, not a CEO, not running for office, and has actually done this sort of work has to say about the topic, well, there are plenty of posts on this blog to refer to (and the comments sections to them are quite lively, too). In fact, I think I'll go ahead and link to a whole lineup of them - that way, when the topic comes up again, and it will, I can just send everyone here:

August 2012: A Quick Tour Through Drug Development Reality
May 2011: Maybe It Really Is That Hard?
March 2011: The NIH Goes For the Gusto
Feb 2011: The NIH's New Drug Discovery Center: Heading Into the Swamp?
Nov 2010: Where Drugs Come From: The Numbers
August 2009: Just Give It to NIH
August 2009: Wasted Money, Wasted Time?
July 2009: Where Drugs Come From, and How. Once More, With A Roll of the Eyes
May 2009: The NIH Takes the Plunge
Sep 2007: Drugs From Where?
November 2005: University of Drug Discovery?
October 2005: The Great Divide
September 2004: The NIH in the Clinic
September 2004: One More On Basic Research and the Clinic
September 2004: A Real-World Can O' Worms
September 2004: How Much Basic Research?
September 2004: How It Really Works

There we go - hours of reading, and all in the service of adding some reality to what is often a discussion full of unicorn burgers. Back to Osborne's piece, though - he goes on to make the point that one of the other sources of trouble with the administration is that the drug industry has continued to be profitable during the economic downturn, which apparently has engendered some suspicion.

And now for some 100-proof politics. The last of Osborne's contentions is that the administration (and many legislators as well) see the Medicare Part D prescription drug benefit as a huge windfall for the industry, and one that should be rolled back via a rebate program, setting prices back to what gets paid out under the Medicaid program instead. Ah, but opinions differ on this:

It’s useful to recall that former Louisiana Congressman and then PhRMA head Billy Tauzin negotiated with the White House in 2009 on behalf of the industry over this very question. Under the resulting deal, the industry agreed to support passage of the ACA and to make certain payments in the form of rebates and fees that amounted to approximately $80 billion over ten years; in exchange the administration agreed to resist those in Congress who pressed for more concessions from the drug companies or wanted to impose government price setting. . .

Tauzin's role, and the deal that he helped cut, have not been without controversy. I've always been worried about deals like this being subject to re-negotiations whenever it seems convenient, and those worries are not irrational, either:

. . .The White House believes that the industry would willingly (graciously? enthusiastically?) accept a new Part D outpatient drug rebate. Wow. The former official noted that the Simpson-Bowles deficit reduction panel recommended it, and its report was favorably endorsed by no less than House Speaker Boehner. Apparently, it is inconceivable to the White House that Boehner’s endorsement of the Simpson-Bowles platform would have occurred without the industry’s approval. Wow, again. That may be a perfectly logical assumption, but the other industry representatives within earshot never imagined that they had endorsed any such thing. No, it’s clear they have been under the (naïve) impression that the aforementioned $80 billion “contribution” was a very substantial sum in support of patients and the government treasury – and offered in a spirit of cooperation in recognition of the prospective benefits to industry of the expanded coverage that lies at the heart of Obamacare. With that said, the realization that this may be just the first of several installment payments left my colleagues in stunned silence; some mouths were visibly agape.

This topic came up late last year around here as well. And it'll come up again.

Comments (37) + TrackBacks (0) | Category: Academia (vs. Industry) | Current Events | Drug Development | Regulatory Affairs

June 18, 2013

Bernard Munos on The Last Twelve Years of Pharma

Email This Entry

Posted by Derek

Bernard Munos (ex-Lilly, now consulting) is out with a paper reviewing the approved drugs from 2000 to 2012. What's the current state of the industry? Is the upturn in drug approvals over the last two years real, or an artifact? And is it enough to keep things going?

Over that twelve-year span, drug approvals averaged 27 per year. Half of all the new drugs were in three therapeutic areas: cancer, infectious disease, and CNS. And as far as mechanisms go, there were about 190 different ones, by Munos' count. The most crowded category was (as might have been guessed) the 17 tyrosine kinase inhibitors, but 85% of the mechanisms were used by only one or two drugs, which is a long tail indeed.

Half those mechanisms were novel - that is, they were not represented by drugs approved before 2000. Coming up behind these first-in-class mechanisms were 29 follow-on drugs during this period, with an average gap of just under three years between the first and second drugs. What that tells you is that the follower programs were started at either about the same time as the first-in-class compounds (and had a slightly longer path through development), or were started at the first opportunity once the other program or mechanism became known. This means that they were started on very nearly the same risk basis as the original program: a three-year gap is not enough to validate much for a new mechanism, other than the fact that another organization thinks that it's worth working on, too. (Don't laugh at that one - there are research departments that seem to live only for this validation, and regard their own first-in-class ideas with fear and suspicion).
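
A little arithmetic makes the timing point explicit. The ten-to-twelve-year development times below are the usual industry rule of thumb, not figures from Munos' paper:

```python
# When must a fast-follower program have started, if it reached the market
# only ~3 years after the first-in-class drug? The 10-12 year development
# times are generic rules of thumb, not figures from the Munos analysis.
gap_years = 3                          # average first-to-second gap in the paper
for dev_years in (10, 12):
    start_relative_to_first_approval = gap_years - dev_years
    print(f"With a {dev_years}-year development cycle, the follower started "
          f"{-start_relative_to_first_approval} years before the first-in-class "
          f"drug was even approved.")
```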

Overall, though, Munos says that the fast-follower approach doesn't seem to be very effective, or not any more, given that few targets seem to be yielding more than one or two drugs. And as just mentioned, the narrow gap between first and second drugs also suggests that the risk-lowering effect of this strategy isn't very impressive, either.

Here's another interesting/worrisome point:

The long tail (of the mode-of-action curve). . . suggests that pharmaceutical innovation is a by-product of exploration, and not the result of pursuing a limited set of mechanisms, reflecting, for instance, a company’s marketing priorities. Put differently, there does not seem to be enough mechanisms able to yield multiple drugs, to support an industry. . .The last couple of years have seen an encouraging rise in new drug approvals, including many based on novel modes of action. However that surge has benefited companies unequally, with the top 12 pharmaceutical companies only garnering 25 out of 68 NMEs (37%). This is not enough to secure their future.

Looking at what many (most?) of the big companies are going through right now, it's hard to argue with that point of view. The word "secure" does not appear within any short character length of "future" when you look through the prospects for Lilly, AstraZeneca, and others.

Note also that part about how what a drug R&D operation finds isn't necessarily what it was looking for. That doesn't mesh well with some models of management:

The drug hunter’s freedom to roam, and find innovative translational opportunities wherever they may lie is an essential part of success in drug research. This may help explain the disappointing performance of the programmatic approaches to drug R&D, that have swept much of the industry in the last 15 years. It has important managerial implications because, if innovation cannot be ordained, pharmaceutical companies need an adaptive – not directive – business model.

But if innovation cannot be ordained, why does a company need lots of people in high positions to ordain it, each with his or her own weekly meeting and online presentations database for all the PowerPoint slides? It's a head-scratcher of a problem, isn't it?

Comments (29) + TrackBacks (0) | Category: Drug Development | Drug Industry History

May 16, 2013

The Atlantic on Drug R&D

Email This Entry

Posted by Derek

"Can you respond to this tripe?" asked one of the emails that sent along this article in The Atlantic. I responded that I was planning to, but that things were made more complicated by my being extensively quoted in said tripe. Anyway, here goes.

The article, by Brian Till of the New America Foundation, seems somewhat confused, and is written in a confusing manner. The title is "How Drug Companies Keep Medicine Out of Reach", but the focus is on neglected tropical diseases, not all medicine. Well, the focus is actually on a contested WHO treaty. But the focus is also on the idea of using prizes to fund research, and on the patent system. And the focus is on the general idea of "delinking" R&D from sales in the drug business. Confocal prose not having been perfected yet, this makes the whole piece a difficult read, because no matter which of these ideas you're waiting to hear about, you end up having a long wait while you work your way through the other stuff. There are any number of sentences in this piece that reference "the idea" and its effects, but there is no sentence that begins with "Here's the idea".

I'll summarize: the WHO treaty in question is as yet formless. There is no defined treaty to be debated; one of the article's contentions is that the US has blocked things from even getting that far. But the general idea is that signatory states would commit to spending 0.01% of GDP on neglected diseases each year. Where this money goes is not clear. Grants to academia? Setting up new institutes? Incentives to commercial companies? And how the contributions from various countries are to be managed is not clear, either: should Angola (for example) pool its contributions with other countries (or send them somewhere else outright), or are they interested in setting up their own Angolan Institute of Tropical Disease Research?

The fuzziness continues. You will read and read through the article trying to figure out what happens next. The "delinking" idea comes in as a key part of the proposed treaty negotiations, with the reward for discovery of a tropical disease treatment coming from a prize for its development, rather than patent exclusivity. But where that money comes from (the GDP-linked contributions?) is unclear. Who sets the prize levels, at what point the money is awarded, who it goes to: hard to say.

And the "Who it goes to" question is a real one, because the article says that another part of the treaty would be a push for open-source discovery on these diseases (Matt Todd's malaria efforts at Sydney are cited). This, though, is to a great extent a whole different question than the source-of-funds one, or the how-the-prizes-work one. Collaboration on this scale is not easy to manage (although it might well be desirable) and it can end up replacing the inefficiencies of the marketplace with entirely new inefficiencies all its own. The research-prize idea seems to me to be a poor fit for the open-collaboration model, too: if you're putting up a prize, you're saying that competition between different groups will spur them on, which is why you're offering something of real value to whoever finishes first and/or best. But if it's a huge open-access collaboration, how do you split up the prize, exactly?

At some point, the article's discussion of delinking R&D and the problems with the current patent model spreads fuzzily outside the bounds of tropical diseases (where there really is a market failure, I'd say) and starts heading off into drug discovery in general. And that's where my quotes start showing up. The author did interview me by phone, and we had a good discussion. I'd like to think that I helped emphasize that when we in the drug business say that drug discovery is hard, we're not just putting on a show for the crowd.

But there's an awful lot of "Gosh, it's so cheap to make these drugs, why are they so expensive?" in this piece. To be fair, Till does mention that drug discovery is an expensive and risky undertaking, but I'm not sure that someone reading the article will quite take on board how expensive and how risky it is, and what the implications are. There's also a lot of criticism of drug companies for pricing their products at "what the market will bear", rather than as some percentage of what it cost to discover or make them. This is a form of economics I've criticized many times here, and I won't go into all the arguments again - but I will ask: what other products are priced in such a manner? Other than what customers will pay for them? Implicit in these arguments is the idea that there's some sort of reasonable, gentlemanly profit that won't offend anyone's sensibilities, while grasping for more than that is just something that shouldn't be allowed. But just try to run an R&D-driven business on that concept. I mean, the article itself details the trouble that Eli Lilly, AstraZeneca, and others are facing with their patent expirations. What sort of trouble would they be in if they'd said "No, no, we shouldn't make such profits off our patented drugs. That would be indecent." Even with those massive profits, they're in trouble.

And that brings up another point: we also get the "Drug companies only spend X pennies per dollar on R&D" line. That's the usual response to pointing out situations like Lilly's: that they took the money and spent it on fleets of yachts or something. The figure given in the article is 16 cents per dollar of revenue, and it's prefaced by an "only". Only? Here, go look at different industries, around the world, and find one that spends more. By any industrial standard, we are plowing massive amounts back into the labs. I know that I complain about companies doing things like stock buybacks, but that's a complaint at the margin of what is already pretty impressive spending.

To finish up, here's one of the places I'm quoted in the article:

I asked Derek Lowe, the chemist and blogger, for his thoughts on the principle of delinking R&D from the actual manufacture of drugs, and why he thought the industry, facing such a daunting outlook, would reject an idea that could turn fallow fields of research on neglected diseases into profitable ones. "I really think it could be viable," he said. "I would like to see it given a real trial, and neglected diseases might be the place to do it. As it is, we really already kind of have a prize model in the developed countries, market exclusivity. But, at the same time, you could look at it and it will say, 'You will only make this amount of money and not one penny more by curing this tropical disease.' Their fear probably is that if that model works great, then we'll move on to all the other diseases."

What you're hearing is my attempt to bring in the real world. I think that prizes are, in fact, a very worthwhile thing to look into for market failures like tropical diseases. There are problems with the idea - for one thing, the prize payoff itself, compared with the time and opportunity cost, is hard to get right - but it's still definitely worth thinking about. But what I was trying to tell Brian Till was that drug companies would be worried (and rightly) about the extension of this model to all other disease areas. Wrapped up in the idea of a research-prize model is the assumption that someone (a wise committee somewhere) knows just what a particular research result is worth, and can set the payout (and afterwards, the price) accordingly. This is not true.

There's a follow-on effect. Such a wise committee might possibly feel a bit of political pressure to set those prices down to a level of nice and cheap, the better to make everyone happy. Drug discovery being what it is, it would take some years before all the gears ground to a halt, but I worry that something like this might be the real result. I find my libertarian impulses coming to the fore whenever I think about this situation, and that prompts me to break out an often-used quote from Robert Heinlein:

Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded — here and there, now and then — are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty.

This is known as "bad luck."

Comments (44) + TrackBacks (0) | Category: Drug Development | Drug Prices | Why Everyone Loves Us

April 2, 2013

Tecfidera's Price

Email This Entry

Posted by Derek

Let us take up the case of Tecfidera, the new Biogen Idec drug for multiple sclerosis, known to us chemists as dimethyl fumarate. It joins the (not very long) list of industrial chemicals (the kind that can be purchased in railroad-car sizes) that are also approved pharmaceuticals for human use. The MS area has seen this before, interestingly.

A year's supply of Tecfidera will set you (or your insurance company) back $54,900. That's a bit higher than many analysts were anticipating, but those anticipations were already just over $50,000. The ceiling is about $60,000, which is what Novartis's Gilenya (fingolimod) goes for, and Biogen wanted to undercut them a bit. So, 55 long ones for a year's worth of dimethyl fumarate pills - what should one think about that?

Several thoughts come to mind, the first one being (probably) "Fifty thousand dollars for a bunch of dimethyl fumarate? Who's going to stand for that?" But we have an estimate for the second part of that question - Biogen thinks that quite a few people are going to stand for it, rather than stand for fingolimod. I'm sure they've devoted quite a bit of time and effort to thinking about that price, and that it's their best estimate of maximum profit. How, exactly, do they get away with that? Simple. They get away with it because they were willing to take the compound through clinical trials in MS patients, find out if it's tolerated and if it's efficacious, figure out the dosing regimen, and get it approved for this use by the FDA. If you or I had been willing to do that, and had been able to round up the money and resources, then we would also have the ability to charge fifty grand a year for it (or whatever we thought fit, actually).

What, exactly, gave them the idea that dimethyl fumarate might be good for multiple sclerosis? As it turns out, a German physician described its topical use for psoriasis back in 1959, and a formulation of the compound as a cream (along with some monoesters) was eventually studied clinically by a small company in Switzerland called Fumapharm. This went on the market in Germany in the early 1990s, but the company showed no inclination to take the idea outside that region. But since dimethyl fumarate appears to work on psoriasis by modulating the immune system somehow, it did occur to someone that it might also be worth looking at in multiple sclerosis. Biogen began developing dimethyl fumarate for that purpose with Fumapharm, and eventually bought them outright in 2006 as things began to look more promising.

In other words, the connection between dimethyl fumarate and MS had been out there, waiting to be made, since before many of us were born. Generations of drug developers had their chances to see it. Every company in the business had a chance to get interested in Fumapharm back in the late 80s and early 90s. But Biogen was the one that did, and in 2013 that move paid off.

Now we come to two more questions, the first of which is "Should that move be paying off quite so lucratively?" But who gets to decide? Watching people pay fifty grand for a year's supply of dimethyl fumarate is not, on the face of it, a very appealing sight. At least, I don't find it so. But on the other hand, cost-of-goods is (for small molecules) generally not a very large part of the expense of a given pill - a rule of thumb is that such expenses should certainly be below 5% of a drug's selling price, and preferably less than 2%. It's just that it's even less in this case, and Biogen also has fewer worries about their supply chain, presumably. The fact that this drug is dimethyl fumarate is a curiosity (and perhaps an irritating one), but that lowers Biogen's costs by a couple of thousand a year per patient compared to some other small molecule. The rest of the cost of Tecfidera has nothing to do with what the ingredients are - it's all about what Biogen had to pay to get it on the market, and (most importantly) what the market will bear. If insurance companies believe that paying fifty thousand a year for the drug is a worthwhile expense, then Biogen will agree with them, too.
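
To put rough numbers on that rule of thumb, here's a quick back-of-the-envelope sketch. The 5% and 2% figures are the rule-of-thumb ceilings above; the sub-1% commodity-API case is my own assumption for illustration, not Biogen's actual cost structure:

```python
# Illustrative cost-of-goods arithmetic. The 5% / 2% figures are the rule of
# thumb above; the 0.5% "commodity API" case is an assumption for illustration.
list_price = 54_900            # Tecfidera list price, per patient per year

cogs_5pct = 0.05 * list_price        # ~$2,745 per patient-year
cogs_2pct = 0.02 * list_price        # ~$1,098 per patient-year
cogs_commodity = 0.005 * list_price  # ~$275 per patient-year (assumed)

print(f"5% COGS:                ${cogs_5pct:,.0f}")
print(f"2% COGS:                ${cogs_2pct:,.0f}")
print(f"Assumed commodity COGS: ${cogs_commodity:,.0f}")
# The gap between a typical small molecule and dirt-cheap dimethyl fumarate is
# on the order of a couple of thousand dollars a year - against a $54,900 tag.
```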

The second question is divorced from words like "should", and moves to the practical question of "can". The topical fumarate drug in Europe apparently had fairly wide "homebrew" use among psoriasis patients in other countries, and one has to wonder just a bit about that happening with Tecfidera. Biogen Idec certainly has method-of-use patents, but not composition-of-matter, so it's going to be up to them to try to police this. I found the Makena situation more irritating than this one (and the colchicine one, too), because in those cases, the exact drugs for the exact indications had already been on the market. (Dimethyl fumarate was not a drug for MS until Biogen proved it so, by contrast). But KV Pharmaceuticals had to go after people who were compounding the drug, anyway, and I have to wonder if a secondary market in dimethyl fumarate might develop. I don't know the details of its formulation (and I'm sure that Biogen will make much of it being something that can't be replicated in a basement), but there will surely be people who try it.

Comments (58) + TrackBacks (0) | Category: Drug Development | Drug Prices | The Central Nervous System

March 29, 2013

Sirtuins Live On at GSK

Email This Entry

Posted by Derek

Well, GSK is shutting down the Sirtris operation in Cambridge, but sirtuins apparently live on. I'm told that the company is advertising for chemists and biologists to come to Pennsylvania to staff the effort, and in this market, they'll have plenty of takers. We'll have the sirtuin drug development saga with us for a while yet. And I'm glad, actually, and no, not just because it gives me something to write about. I'd like to know what sirtuins actually are capable of doing in humans, and I'd like to see a drug or two come out of this. What the odds of that are, though, I couldn't say. . .

Comments (18) + TrackBacks (0) | Category: Drug Development

March 27, 2013

A Therapy Named After You?

Email This Entry

Posted by Derek

Back last fall I wrote about Prof. Magnus Essand and his oncolytic virus research. He's gotten a good amount of press coverage, and has been trying all sorts of approaches to get further work funded. But here's one that I hadn't thought of: Essand and his co-workers are willing to name the therapy after anyone who can pony up the money to get it into a 20-patient human trial.

The more I think about that, the less problem I have with it. This looks at first like a pure angel investor move, and if people want to take a crack at something like this with their own cash, let them do the due diligence and make the call. Actually, Essand believes that his current virus is unpatentable (due to prior publication), so this is less of an angel investment and more sheer philanthropy. But I have no objections at all to that, either.

Update: here's more on the story.

Comments (12) + TrackBacks (0) | Category: Cancer | Clinical Trials | Drug Development

The DNA-Encoded Library Platform Yields A Hit

Email This Entry

Posted by Derek

I wrote here about DNA-barcoding of huge (massively, crazily huge) combichem libraries, a technology that apparently works, although one can think of a lot of reasons why it shouldn't. This is something that GlaxoSmithKline bought by acquiring Praecis some years ago, and there are others working in the same space.

For outsiders, the question has long been "What's come out of this work?" And there is now at least one answer, published in a place where one might not notice it: this paper in Prostaglandins and Other Lipid Mediators. It's not a journal whose contents I regularly scan. But this is a paper from GSK on a soluble epoxide hydrolase inhibitor, and therein one finds:

sEH inhibitors were identified by screening large libraries of drug-like molecules, each attached to a DNA “bar code”, utilizing DNA-encoded library technology [10] developed by Praecis Pharmaceuticals, now part of GlaxoSmithKline. The initial hits were then synthesized off of DNA, and hit-to-lead chemistry was carried out to identify key features of the sEH pharmacophore. The lead series were then optimized for potency at the target, selectivity and developability parameters such as aqueous solubility and oral bioavailability, resulting in GSK2256294A. . .

That's the sum of the med-chem in the article, which certainly compresses things, and I hope that we see a more complete writeup at some point from a chemistry perspective. Looking at the structure, though, this is a triaminotriazine-derived compound (as in the earlier work linked to in the first paragraph), so yes, you apparently can get interesting leads that way. How different this compound is from the screening hit is a good question, but it's noteworthy that a diaminotriazine's worth of its heritage is still present. Perhaps we'll eventually see the results of the later-generation chemistry (non-triazine).

Comments (12) + TrackBacks (0) | Category: Chemical Biology | Chemical News | Drug Assays | Drug Development

The NIH, Pfizer, and Senator Wyden

Email This Entry

Posted by Derek

Senator Ron Wyden (D-Oregon) seems to be the latest champion of the "NIH discovers drugs and Pharma rips them off" viewpoint. Here's a post from John LaMattina on Wyden's recent letter to Francis Collins. The proximate cause of all this seems to be the Pfizer JAK3 inhibitor:

Tofacitinib (Xeljanz), approved last November by the U.S. Food and Drug Administration, is nearing the market as the first oral medication for the treatment of rheumatoid arthritis. Given that the research base provided by the National Institutes of Health (NIH) culminated in the approval of Xeljanz, citizens have the right to be concerned about the determination of its price and what return on investment they can expect. While it is correct that the expenses of drug discovery and preclinical and clinical development were fully undertaken by Pfizer, taxpayer-funded research was foundational to the development of Xeljanz.

I think that this is likely another case where people don't quite realize the steepness of the climb between "X looks like a great disease target" and "We now have an FDA-approved drug targeting X". Here's more from Wyden's letter:

Developing drugs in America remains a challenging business, and NIH plays a critically important role by doing research that might not otherwise get done by the private sector. My bottom line: When taxpayer-funded research is commercialized, the public deserves a real return on its investment. With the price of Xeljanz estimated at about $25,000 a year and annual sales projected by some industry experts as high as $2.5 billion, it is important to consider whether the public investment has assured accessibility and affordability.

This is going to come across as nastier than I intend it to, but my first response is that the taxpayer's return on this was that they got a new drug where there wasn't one before. And via the NIH-funded discoveries, the taxpayers stimulated Pfizer (and many other companies) to spend huge amounts of money and effort to turn the original discoveries in the JAK field into real therapies. I value knowledge greatly, but no human suffering whatsoever was relieved by the knowledge alone that JAK3 appeared to play a role in inflammation. What was there was the potential to affect the lives of patients, and that potential was realized by Pfizer spending its own money.

And not just Pfizer. Let's not forget that the NIH entered into research agreements with many other companies, and that the list of JAK3-related drug discovery projects is a long one. And keep in mind that not all of them, by any means, have ever earned a nickel for the companies involved, and that many of them never will. As for Pfizer, Xeljanz has been on the market for less than six months, so it's too early to say how the drug will do. But it's not a license to print money, and is in a large, extremely competitive market. And should it run into trouble (which I certainly hope doesn't happen), I doubt if Senator Wyden will be writing letters seeking to share some of the expenses.

Comments (35) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development | Drug Prices | Regulatory Affairs

March 26, 2013

Automated Med-Chem, At Last?

Email This Entry

Posted by Derek

I've written several times about flow chemistry here, and a new paper in J. Med. Chem. prompts me to return to the subject. This, though, is the next stage in flow chemistry - more like flow med-chem:

Here, we report the application of a flow technology platform integrating the key elements of structure–activity relationship (SAR) generation to the discovery of novel Abl kinase inhibitors. The platform utilizes flow chemistry for rapid in-line synthesis, automated purification, and analysis coupled with bioassay. The combination of activity prediction using Random-Forest regression with chemical space sampling algorithms allows the construction of an activity model that refines itself after every iteration of synthesis and biological result.

Now, this is the point at which people start to get either excited or fearful. (I sometimes have trouble telling the difference, myself). We're talking about the entire early-stage optimization cycle here, and the vision is of someone topping up a bunch of solvent reservoirs, hitting a button, and leaving for the weekend in the expectation of finding a nanomolar compound waiting on Monday. I'll bet you could sell that to AstraZeneca for some serious cash, and to be fair, they're not the only ones who would bite, given a sufficiently impressive demo and slide deck.

But how close to this Lab of the Future does this work get? Digging into the paper, we have this:

Initially, this approach mirrors that of a traditional hit-to-lead program, namely, hit generation activities via, for example, high-throughput screening (HTS), other screening approaches, or prior art review. From this, the virtual chemical space of target molecules is constructed that defines the boundaries of an SAR heat map. An initial activity model is then built using data available from a screening campaign or the literature against the defined biological target. This model is used to decide which analogue is made during each iteration of synthesis and testing, and the model is updated after each individual compound assay to incorporate the new data. Typically the coupled design, synthesis, and assay times are 1–2 h per iteration.

Among the key things that already have to be in place, though, are reliable chemistry (fit to generate a wide range of structures) and some clue about where to start. Those are not givens, but they're certainly not impossible barriers, either. In this case, the team (three UK groups) is looking for BCR-Abl inhibitors, a perfectly reasonable test bed. A look through the literature suggested coupling hinge-binding motifs to DFG-loop binders through an acetylene linker, as in Ariad's ponatinib. This, while not a strategy that will earn you a big raise, is not one that's going to get you fired, either. Virtual screening around the structure, followed by eyeballing by real humans, narrowed down some possibilities for new structures. Further possibilities were suggested by looking at PDB structures of homologous binding sites and seeing what sorts of things bound to them.

So already, what we're looking at is less Automatic Lead Discovery than Automatic Patent Busting. But there's a place for that, too. Ten DFG pieces were synthesized, in Sonogashira-couplable form, and 27 hinge-binding motifs with alkynes on them were readied on the other end. Then they pressed the button and went home for the weekend. Well, not quite. They set things up to try two different optimization routines, once the compounds were synthesized, run through a column, and through the assay (all in flow). One will be familiar to anyone who's been in the drug industry for more than about five minutes, because it's called "Chase Potency". The other one, "Most Active Under Sampled", tries to even out the distributions of reactants by favoring the ones that haven't been used as often. (These strategies can also be mixed). In each case, the model was seeded with binding constants of literature structures, to get things going.
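
For anyone curious about what such a loop looks like in practice, here's a minimal sketch of the design-synthesize-assay cycle, with a Random-Forest activity model and a "Chase Potency" selection rule. The building blocks, descriptors, and "assay" below are stand-ins of my own - this illustrates the idea, not the authors' actual platform code:

```python
# Minimal sketch of the closed design-synthesize-assay loop described above,
# using a Random-Forest activity model and a "Chase Potency" selection rule.
# The building blocks, descriptors, and "assay" are stand-ins, not the
# authors' actual platform.
import random
from sklearn.ensemble import RandomForestRegressor

dfg_binders   = [f"DFG_{i}"   for i in range(10)]   # Sonogashira-couplable DFG pieces
hinge_binders = [f"hinge_{j}" for j in range(27)]   # alkyne-bearing hinge motifs
candidates = [(d, h) for d in dfg_binders for h in hinge_binders]

def featurize(pair):
    """One-hot encode the two building blocks (a placeholder descriptor set)."""
    d, h = pair
    vec = [0] * (len(dfg_binders) + len(hinge_binders))
    vec[dfg_binders.index(d)] = 1
    vec[len(dfg_binders) + hinge_binders.index(h)] = 1
    return vec

def synthesize_and_assay(pair):
    """Stand-in for the in-line flow synthesis, purification, and Abl assay."""
    return random.uniform(4.0, 9.0)   # pretend pIC50

tested, X, y = set(), [], []

# Seed the model, as the paper does, with some starting data (fake values here)
for pair in random.sample(candidates, 5):
    tested.add(pair)
    X.append(featurize(pair))
    y.append(synthesize_and_assay(pair))

for iteration in range(30):        # each loop takes roughly 1-2 h on the real rig
    model = RandomForestRegressor(n_estimators=200).fit(X, y)
    untested = [c for c in candidates if c not in tested]

    # "Chase Potency": make whatever the model currently predicts to be most
    # active. (A "Most Active Under Sampled" strategy would instead weight
    # toward building blocks that have been used least often so far.)
    pick = max(untested, key=lambda c: model.predict([featurize(c)])[0])

    activity = synthesize_and_assay(pick)   # synthesize, purify, assay - in flow
    tested.add(pick)
    X.append(featurize(pick))
    y.append(activity)

print(f"Best measured pIC50 after the run: {max(y):.2f}")
```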

The first run, which took about 30 hours, used the "Under Sampled" algorithm to spit out 22 new compounds (there were six chemistry failures) and a corresponding SAR heat map. Another run was done with "Chase Potency" in place, generating 14 more compounds. That was followed by a combined-strategy run, which cranked out 28 more compounds (with 13 failures in synthesis). Overall, there were 90 loops through the process, producing 64 new products. The best of these were nanomolar or below.

But shouldn't they have been? The deck already has to be stacked to some degree for this technique to work at all in the present stage of development. Getting potent inhibitors from these sorts of starting points isn't impressive by itself. I think the main advantage to this is the time needed to generate the compound and the assay data. Having the synthesis, purification, and assay platform all right next to each other, with compound being pumped right from one to the other, is a much tighter loop than the usual drug discovery organization runs. The usual, if you haven't experienced it, is more like "Run the reaction. Work up the reaction. Run it through a column (or have the purification group run it through a column for you). Get your fractions. Evaporate them. Check the compound by LC/MS and NMR. Code it into the system and get it into a vial. Send it over to the assay folks for the weekly run. Wait a couple of days for the batch of data to be processed. Repeat."

The science-fictional extension of this is when we move to a wider variety of possible chemistries, and perhaps incorporate the modeling/docking into the loop as well, when it's trustworthy enough to do so. Now that would be something to see. You come back in a few days and find that the machine has unexpectedly veered off into photochemical 2+2 additions with a range of alkenes, because the Chase Potency module couldn't pass up a great cyclobutane hit that the modeling software predicted. And all while you were doing something else. And that something else, by this point, is. . .what, exactly? Food for thought.

Comments (16) + TrackBacks (0) | Category: Chemical News | Drug Assays | Drug Development

March 21, 2013

AstraZeneca Makes a Deal With Moderna. Wait, Who?

Email This Entry

Posted by Derek

AstraZeneca has announced another 2300 job cuts, this time in sales and administration. That's not too much of a surprise, as the cuts announced recently in R&D make it clear that the company is determined to get smaller. But their overall R&D strategy is still unclear, other than "We can't go on like this", which is clear enough.

One interesting item has just come out, though. The company has done a deal with Moderna Therapeutics of Cambridge (US), a relatively new outfit that's trying something that (as far as I know) no one else has had the nerve to try. Moderna is trying to use messenger RNAs as therapies, to stimulate the body's own cells to produce more of some desired protein product. This is the flip side of antisense and RNA interference, where you throw a wrench into the transcription/translation machinery to cut down on some protein. Moderna's trying to make the wheels spin in the other direction.

This is the sort of idea that makes me feel as if there are two people inhabiting my head. One side of me is very excited and interested to see if this approach will work, and the other side is very glad that I'm not one of the people being asked to do it. I've always thought that messing up or blocking some process was an easier task than making it do the right thing (only more so), and in this case, we haven't even reliably shown that blocking such RNA pathways is a good way to a therapy.

I also wonder about the disease areas that such a therapy would treat, and how amenable they are to the approach. The first one that occurs to a person is "Allow Type I diabetics to produce their own insulin", but if your islet cells have been disrupted or killed off, how is that going to work? Will other cell types recognize the mRNA-type molecules you're giving, and make some insulin themselves? If they do, what sort of physiological control will they be under? Beta-cells, after all, are involved in a lot of complicated signaling to tell them when to make insulin and when to lay off. I can also imagine this technique being used for a number of genetic disorders, where we know what the defective protein is and what it's supposed to be. But again, how does the mRNA get to the right tissues at the right time? Protein expression is under so many constraints and controls that it seems almost foolhardy to think that you could step in, dump some mRNA on the process, and get things to work the way that you want them to.

But all that said, there's no substitute for trying it out. And the people behind Moderna are not fools, either, so you can be sure that these questions (and many more) have crossed their minds already. (The company's press materials claim that they've addressed the cellular-specificity problem, for example). They've gotten a very favorable deal from AstraZeneca - admittedly a rather desperate company - but good enough that they must have a rather convincing story to tell with their internal data. This is the very picture of a high-risk, high-reward approach, and I wish them success with it. A lot of people will be watching very closely.

Comments (37) + TrackBacks (0) | Category: Biological News | Business and Markets | Drug Development

March 19, 2013

Affymax In Trouble

Email This Entry

Posted by Derek

Affymax has had a long history, and it's rarely been dull. The company was founded in 1988, back in the very earliest flush of the Combichem era, and in its early years it (along with Pharmacopeia) was what people thought of when they thought of that whole approach. Huge compound libraries produced (as much as possible) by robotics, equally huge screening efforts to deal with all those compounds - this stuff is familiar to us now (all too familiar, in many cases), but it was new then. If you weren't around for it, you'll have to take the word of those who were that it could all be rather exciting and scary at first: what if the answer really was to crank out huge piles of amides, sulfonamides, substituted piperazines, aminotriazines, oligopeptides, and all the other "build-that-compound-count-now!" classes? No one could say for sure that it wasn't. Not yet.

Glaxo bought Affymax back in 1995, about the time they were buying Wellcome, which makes it seem like a long time ago, and perhaps it was. At any rate, they kept the combichem/screening technology and spun a new version of Affymax back out in 2001 to a syndicate of investors. For the past twelve years, that Affymax has been in the drug discovery and development business on its own.

And as this page shows, the story through most of those years has been peginesatide (brand name Omontys, although it was known as Hematide for a while as well). This is a synthetic peptide (with some unnatural amino acids in it, and a polyethylene glycol tail) that mimics erythropoietin. What with its cyclic nature (a couple of disulfide bonds), the unnatural residues, and the PEGylation, it's a perfect example of what you often have to do to make an oligopeptide into a drug.

But for quite a while there, no one was sure whether this one was going to be a drug or not. Affymax had partnered with Takeda along the way, and in 2010 the companies announced some disturbing clinical data in kidney patients. While Omontys did seem to help with anemia, it also seemed to have a worse safety profile than Amgen's EPO, the existing competition. The big worry was cardiovascular trouble (which had also been a problem with EPO itself and all the other attempted competition in that field). A period of wrangling ensued, with a lot of work on the clinical data and a lot of back-and-forthing with the FDA. In the end, the drug was actually approved one year ago, albeit with a black-box warning about cardiovascular safety.

But over the last year, about 25,000 patients got the drug, and unfortunately, 19 of them had serious anaphylactic reactions to it within the first half hour of exposure. Three patients died as a result, and some others nearly did. That is also exactly what one worries about with a synthetic peptide derivative: it's close enough to the real protein to do its job, but it's different enough to set off the occasional immune response, and the immune system can be very serious business indeed. Allergic responses had been noted in the clinical trials, but I think that if you'd taken bets last March, people would have picked the cardiovascular effects as the likely nemesis, not anaphylaxis. But that's not how it's worked out.

Takeda and Affymax voluntarily recalled the drug last month. And that looked like it might be all for the company, because this has been their main chance for some years now. Sure enough, the announcement has come that most of the employees are being let go. And it includes this language, which is the financial correlate of Cheyne-Stokes breathing:

The company also announced that it will retain a bank to evaluate strategic alternatives for the organization, including the sale of the company or its assets, or a corporate merger. The company is considering all possible alternatives, including further restructuring activities, wind-down of operations or even bankruptcy proceedings.

I'm sorry to hear it. Drug development is very hard indeed.

Comments (11) + TrackBacks (0) | Category: Business and Markets | Cardiovascular Disease | Drug Development | Drug Industry History | Toxicology

March 18, 2013

GlaxoSmithKline's CEO on the Price of New Drugs

Email This Entry

Posted by Derek

Well, GlaxoSmithKline CEO Andrew Witty has made things interesting. Here he is at a recent conference in London when the topic of drug pricing came up:

. . . Witty said the $1 billion price tag was "one of the great myths of the industry", since it was an average figure that includes money spent on drugs that ultimately fail.

In the case of GSK, a major revamp in the way research is conducted means the rate of return on R&D investment has increased by about 30 percent in the past three or four years because fewer drugs have flopped in late-stage testing, he said.

"If you stop failing so often you massively reduce the cost of drug development ... it's why we are beginning to be able to price lower," Witty said.

"It's entirely achievable that we can improve the efficiency of the industry and pass that forward in terms of reduced prices."

I have a feeling that I'm going to be hearing "great myths of the industry" in my email for some time, thanks to this speech, so I'd like to thank Andrew Witty for that. But here's what he's trying to get across: if you start research on a new drug, name a clinical candidate, take it to human trials and are lucky enough to have it work, then get it approved by the FDA, you will not have spent one billion dollars to get there. That, though, is the figure for a single run-through when everything works. If, on the other hand, you are actually running a drug company, with many compounds in development, and after a decade or so you total up all the money you've spent, versus the number of drugs you got onto the market, well, then you may well average a billion dollars per drug. That's because so many of them wipe out in the clinic; the money gets spent and you get no return at all.

That's the analysis that Matthew Herper did here (blogged about here), and that same Reuters article makes reference to a similar study done by Deloitte (and Thomson Reuters!) that found that the average cost of a new drug is indeed about $1.1 billion when you have to pay for the failures.

And believe me, we have to pay for them. A lottery ticket may only cost a dollar, but by the time you've won a million dollars playing the lottery, you will have bought a lot of losing tickets. In fact, you'll have bought far more than a million dollars' worth, or no state would run a lottery, but that's a negative-expectations game, while drug research (like any business) is supposed to be positive-expectations. Is it? Just barely, according to that same Deloitte study:

In effect, the industry is treading water in the fight to deliver better returns on the billions of dollars ploughed into the hunt for new drugs each year.

With an average internal rate of return (IRR) from R&D in 2012 of 7.2 percent - against 7.7 percent and 10.5 percent in the two preceding years - Big Pharma is barely covering its average cost of capital, estimated at around 7 percent.

Keep that in mind next time you hear about how wonderfully profitable the drug business is. And those are still better numbers than Morgan Stanley had a couple of years before, when they estimated that our internal returns probably weren't keeping up with our cost of capital at all. (Mind you, it seems that their analysis may have been a bit off, since they used their figures to recommend an "Overweight" on AstraZeneca shares, a decision that looked smart for a few months, but one that a person by now would have regretted deeply).
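
To make the cost-per-drug averaging arithmetic concrete, here's a toy calculation. Every number in it is invented for illustration; only the logic tracks the argument above:

```python
# Toy illustration of why "cost per approved drug" dwarfs the cost of any one
# successful program. Every number here is invented for the example.
programs_started = 20
cost_per_program = 150e6      # assume ~$150M spent per program, win or lose
success_rate     = 0.10       # assume roughly 1 in 10 candidates gets approved

approved_drugs = programs_started * success_rate       # 2 drugs
total_spend    = programs_started * cost_per_program   # $3.0 billion

print(f"Cost of the one program that worked: ${cost_per_program/1e9:.2f}B")
print(f"Average cost per approved drug:      ${total_spend/approved_drugs/1e9:.2f}B")
# The winning program "only" cost $0.15B - but once the failures are paid for,
# the portfolio averages out to $1.50B per drug on the market.
```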

But back to Andrew Witty. What he's trying to say is that it doesn't have to cost a billion dollars per drug, if you don't fail so often, and he's claiming that GSK is starting to fail less often. True, or not? The people I know at the company aren't exactly breaking out the party hats, for what that's worth, and it looks like the company might have to add the entire Sirtris investment to the "sunk cost" pile. Overall, I think it's too soon to call any corners as having been turned, even if GSK does turn out to have been doing better. Companies can have runs of good fortune and bad, and the history of the industry is absolutely littered with the press releases of companies who say that they've Turned A New Page of Success and will now be cranking out the wonder drugs like nobody's business. If they keep it up, GSK will have plenty of chances to tell us all about it.

Now, one last topic. What about Witty's statement that this new trend to success will allow drug prices themselves to come down? That's worth thinking about all by itself, on several levels - here are my thoughts, in no particular order:

(1) To a first approximation, that's true. If you're selling widgets and your costs go down, you can cut prices, and you can presumably sell more widgets. But as mentioned above, I'm not yet convinced that GSK's costs are truly coming down yet. And see point three below, because GSK and the rest of us in this business are not, in fact, selling widgets.

(2) Even if costs are coming down, counterbalancing that are several other long-term trends, such as the low-hanging fruit problem. As we move into harder and harder sorts of targets and disease areas, I would assume that the success rate of drugs in the clinic will be hard pressed to improve. This is partly a portfolio management problem, and can be ameliorated and hedged against to some degree, but it is, I think, a long-term concern, unless we start to make some intellectual headway on these topics, and speed the day. On the other side of this balance are the various efforts to rationalize clinical trials and so on.

(3) A larger factor is that the market for innovative drugs is not very sensitive to price. This is a vast topic, covered at vast length in many places, but it comes down to there being (relatively) few entrants in any new therapeutic space, and to people, and governments, and insurance companies, being willing to spend relatively high amounts of money for human health. (The addition of governments into that list means also that various price-fixing schemes distort the market in all kinds of interesting ways as well). At any rate, price mechanisms don't work like classical econ-textbook widgets in the drug business.

So I'm not sure, really, how this will play out. GSK has only modest incentives to lower the prices of its drugs. Such a move won't, in many markets, allow them to sell more drugs to make up the difference on volume. And actually, the company will probably be able to offset some of the loss via the political capital that comes from talking about any such price changes. We might be seeing just that effect with Witty's speech.

Comments (30) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Prices

March 14, 2013

Does Baldness Get More Funding Than Malaria?

Email This Entry

Posted by Derek

OK, let's fact-check Bill Gates today, shall we?

Capitalism means that there is much more research into male baldness than there is into diseases such as malaria, which mostly affect poor people, said Bill Gates, speaking at the Royal Academy of Engineering's Global Grand Challenges Summit.

"Our priorities are tilted by marketplace imperatives," he said. "The malaria vaccine in humanist terms is the biggest need. But it gets virtually no funding. But if you are working on male baldness or other things you get an order of magnitude more research funding because of the voice in the marketplace than something like malaria."

Gates' larger point, that tropical diseases are an example of market failure, stands. But I don't think this example does. I have never yet worked on any project in industry that had anything to do with baldness, while I have actually touched on malaria. Looking around the scientific literature, I see many more publications on potential malaria drugs than I see potential baldness drugs (in fact, I'm not sure if I've ever seen anything on the latter, after minoxidil - and its hair-growth effects were discovered by accident during a cardiovascular program). Maybe I'm reading the wrong journals.

But then, Gates also seems to buy into the critical-shortage-of-STEM idea:

With regards to encouraging more students into STEM education, Gates said: "It's kind of surprising that we have such a deficit of people going into those fields. Look at where you can have the most interesting job that pays well and will have impact on society -- all three of those things line up to say science and engineering and yet in most rich countries we see decline. Asia is an exception."

The problem is, there aren't as many of these interesting, well-paying jobs around as there used to be. Any discussion of the STEM education issue that doesn't deal with that angle is (to say the least) incomplete.

Comments (28) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Infectious Diseases

March 13, 2013

Who Will Manufacture an API?

Email This Entry

Posted by Derek

Here's a very practical question indeed, sent in by a reader:

After a few weeks of trying to run down a possible API manufacturer for our molecule, I am stuck. We have a straightforward proven synthesis of a 300 weight lipid and need only 10 kg for our trials. Any readers have suggestions? Alternately, we will do it ourselves and find someone to help us with the documentation. Suggestions that way, too?

That's worth asking: who's your go-to for things like this, a reliable contract supplier for high-quality material with all the documentation? I'll say up front that I don't know who's been contacted already, or why the search has been as difficult as it has, but I'll see if I can get more details. Suggestions welcome in the comments. . .

Update: this post has generated a lot of very sound advice. Anyone who's approaching this stage for the first time (as my correspondent clearly is) is looking at a significant expenditure for something that could make or break a small research effort. I'm putting this note up for people who find this post in future searches - read the comments; you'll be glad you did.

Comments (67) + TrackBacks (0) | Category: Drug Development

Getting Down to Protein-Protein Compounds

Email This Entry

Posted by Derek

Late last year I wrote about a paper that suggested that some "stapled peptides" might not work as well as advertised. I've been meaning to link to this C&E News article on the whole controversy - it's a fine overview of the area.

And that also gives me a chance to mention this review in Nature Chemistry (free full access). It's an excellent look at the entire topic of going after alpha-helix protein-protein interactions with small molecules. Articles like this really give you an appreciation for a good literature review - this information is scattered across the literature, and the authors here (from Leeds) have really done everyone interested in this topic a favor by collecting all of it and putting it into context.

As they say, you really have two choices if you're going after this sort of protein-protein interaction (well, three, if you count chucking the whole business and going to truck-driving school, but that option is not specific to this field). You can make something that's helical itself, so as to present the side chains in what you hope will be the correct orientation, or you can go after some completely different structure that just happens to arrange these groups into the right spots (but has no helical architecture itself).

Neither of these is going to lead to attractive molecules. The authors address this problem near the end of the paper, saying that we may be facing a choice here: make potent inhibitors of protein-protein interactions, or stay within Lipinski-guideline property space. Doing both at the same time just may not be possible. On the evidence so far, I think they're right. How we're going to get such things into cells, though, is a real problem (note this entry last fall on macrocyclic compounds, where the same concern naturally comes up). Since we don't seem to know much about why some compounds make it into cells and some don't, perhaps the way forward (for now) is to find a platform where as many big PPI candidates as possible can be evaluated quickly for activity (both in the relevant protein assay and then in cells). If we can't be smart enough, or not yet, maybe we can go after the problem with brute force.

With enough examples of success, we might be able to get a handle on what's happening. This means, though, that we'll have to generate a lot of complex structures quickly and in great variety, and if that's not a synthetic organic chemistry problem, I'd like to know what is. This is another example of a theme I come back to - that there are many issues in drug discovery that can only be answered by cutting-edge organic chemistry. We should be attacking these and making a case for how valuable the chemical component is, rather than letting ourselves be pigeonholed as a bunch of folks who run Suzuki couplings all day long and who might as well be outsourced to Fiji.

Comments (10) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics

February 8, 2013

All Those Drug-Likeness Papers: A Bit Too Neat to be True?

Email This Entry

Posted by Derek

There's a fascinating paper out on the concept of "drug-likeness" that I think every medicinal chemist should have a look at. It would be hard to count the number of publications on this topic over the last ten years or so, but what if we've been kidding ourselves about some of the main points?

The big concept in this area is, of course, the Lipinski criteria, or the Rule of Five. Here's what the authors, Peter Kenny and Carlos Montanari of the University of São Paulo, have to say:

No discussion of drug-likeness would be complete without reference to the influential Rule of 5 (Ro5) which is essentially a statement of property distributions for compounds taken into Phase II clinical trials. The focus of Ro5 is oral absorption and the rule neither quantifies the risks of failure associated with non-compliance nor provides guidance as to how sub-optimal characteristics of compliant compounds might be improved. It also raises a number of questions. What is the physicochemical basis of Ro5's asymmetry with respect to hydrogen bond donors and acceptors? Why is calculated octanol/water partition coefficient (ClogP) used to specify Ro5's low polarity limit when the high polarity cut off is defined in terms of numbers of hydrogen bond donors and acceptors? It is possible that these characteristics reflect the relative inability of the octanol/water partitioning system to ‘see’ donors (Fig. 1) and the likelihood that acceptors (especially as defined for Ro5) are more common than donors in pharmaceutically-relevant compounds. The importance of Ro5 is that it raised awareness across the pharmaceutical industry about the relevance of physicochemical properties. The wide acceptance of Ro5 provided other researchers with an incentive to publish analyses of their own data and those who have followed the drug discovery literature over the last decade or so will have become aware of a publication genre that can be described as ‘retrospective data analysis of large proprietary data sets’ or, more succinctly, as ‘Ro5 envy’.

There, fellow med-chemists, doesn't this already sound like something you want to read? Thought so. Here, have some more:

Despite widespread belief that control of fundamental physicochemical properties is important in pharmaceutical design, the correlations between these and ADMET properties may not actually be as strong as is often assumed. The mere existence of a trend is of no interest in drug discovery and strengths of trends must be known if decisions are to be accurately described as data-driven. Although data analysts frequently tout the statistical significance of the trends that their analysis has revealed, weak trends can be statistically significant without being remotely interesting. We might be confident that the coin that lands heads up for 51 % of a billion throws is biased but this knowledge provides little comfort for the person charged with predicting the result of the next throw. Weak trends can be beaten and when powered by enough data, even the feeblest of trends acquires statistical significance.

So, where are the authors going with all this entertaining invective? (Not that there's anything wrong with that; I'm the last person to complain). They're worried that the transformations that primary drug property data have undergone in the literature have tended to exaggerate the correlations between these properties and the endpoints that we care about. The end result is pernicious:

Correlation inflation becomes an issue when the results of data analysis are used to make real decisions. To restrict values of properties such as lipophilicity more stringently than is justified by trends in the data is to deny one’s own drug-hunting teams room to maneuver while yielding the initiative to hungrier, more agile competitors.

They illustrate this by reference to synthetic data sets, showing how one can get rather different impressions depending on how the numbers are handled along the way. Representing sets of empirical points by using their average values, for example, can cause the final correlations to appear more robust than they really are. That, the authors say, is just what happened in this study from 2006 ("Can we rationally design promiscuous drugs?") and in this one from 2007 ("The influence of drug-like concepts on decision-making in medicinal chemistry"). The complaint is that showing a correlation between cLogP and median compound promiscuity does not imply that there is one between cLogP and compound promiscuity per se. And the authors note that the two papers manage to come to opposite conclusions about the effect of molecular weight, which does make one wonder. The "Escape from flatland" paper from 2009 and the "ADMET rules of thumb" paper from 2008 (mentioned here) also come in for criticism on this point - binning averaged data from a large continuous set and then treating the bins as real objects for statistical analysis. One's conclusions depend strongly on how many bins one uses. Here's a specific take on that last paper:

The end point of the G2008 analysis is ‘‘a set of simple interpretable ADMET rules of thumb’’ and it is instructive to examine these more closely. Two classifications (ClogP<4 and MW<400 Da; ClogP>4 or MW>400 Da) were created and these were combined with the four ionization state classifications to define eight classes of compound. Each combination of ADMET property and compound class was labeled according to whether the mean value of the ADMET property was lower than, higher than or not significantly different from the average for all compounds. Although the rules of thumb are indeed simple, it is not clear how useful they are in drug discovery. Firstly, the rules only say whether or not differences are significant and not how large they are. Secondly, the rules are irrelevant if the compounds of interest are all in the same class. Thirdly, the rules predict abrupt changes in ADMET properties going from one class to another. For example, the rules predict significantly different aqueous solubility for two neutral compounds with MW of 399 and 401 Da, provided that their ClogP values do not exceed 4. It is instructive to consider how the rules might have differed had values of logP and MW of 5 and 500 Da (or 3 and 300 Da) had been used to define them instead of 4 and 400 Da.

These problems also occur in graphical representations of all these data, as you'd imagine, and the authors show several of these that they object to. A particular example is this paper from 2010 ("Getting physical in drug discovery"). Three data sets, whose correlations in their primary data do not vary significantly, generate very different looking bar charts. And that leads to this comment:

Both the MR2009 and HY2010 studies note the simplicity of the relationships that the analysis has revealed. Given that drug discovery would appear to be anything but simple, the simplicity of a drug-likeness model could actually be taken as evidence for its irrelevance to drug discovery. The number of aromatic rings in a molecule can be reduced by eliminating rings or by eliminating aromaticity and the two cases appear to be treated as equivalent in both the MR2009 and HY2010 studies. Using the mnemonic suggested in MR2009 one might expect to make a compound more developable by replacing a benzene ring with cyclohexadiene or benzoquinone.

The authors wind up by emphasizing that they're not saying that things like lipophilicity, aromaticity, molecular weight and so on are unimportant - far from it. What they're saying, though, is that we need to be aware of how strong these correlations really are so that we don't fool ourselves into thinking that we're addressing our problems, when we really aren't. We might want to stop looking for huge, universally applicable sets of rules and take what we can get in smaller, local data sets within a given series of compounds. The paper ends with a set of recommendations for authors and editors - among them, always making primary data sets part of the supplementary material, not relying on purely graphical representations to make statistical points, and a number of more stringent criteria for evaluating data that have been partitioned into bins. They say that they hope that their paper "stimulates debate", and I think it should do just that. It's certainly given me a lot of things to think about!
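
For anyone who wants to see the binning effect for themselves, here's a small simulation in the spirit of the paper's synthetic examples (my own made-up data, not theirs): a weak trend in the raw data looks dramatically stronger once you correlate bin centers against bin means.

```python
# Small demonstration of correlation inflation by binning and averaging.
# Synthetic data only - the point is the statistics, not the chemistry.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
clogp = rng.uniform(0, 6, n)                     # pretend lipophilicity values
endpoint = 0.3 * clogp + rng.normal(0, 3.0, n)   # a genuinely weak relationship

r_raw = np.corrcoef(clogp, endpoint)[0, 1]

# Bin cLogP and correlate the bin centers against the *mean* endpoint per bin
edges = np.linspace(0, 6, 7)                 # six equal bins
idx = np.digitize(clogp, edges[1:-1])        # bin index 0..5 for each compound
bin_centers = 0.5 * (edges[:-1] + edges[1:])
bin_means = np.array([endpoint[idx == i].mean() for i in range(6)])
r_binned = np.corrcoef(bin_centers, bin_means)[0, 1]

print(f"Correlation on the raw data:  r = {r_raw:.2f}")     # around 0.2
print(f"Correlation on the bin means: r = {r_binned:.2f}")  # well above 0.9
```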

Comments (13) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico | The Scientific Literature

February 7, 2013

Addex Cuts Back: An Old Story, Told Again

Email This Entry

Posted by Derek

Addex Therapeutics has been trying to develop allosteric modulators as drugs. That's a worthy goal (albeit a tough one) - "allosteric" is a term that covers an awful lot of ground. The basic definition is a site that affects the activity of its protein, but is separate from the active or ligand-binding site itself. All sorts of regulatory sites, cofactors, protein-protein interaction motifs, and who knows what else can fit into that definition. It's safe to say that allosteric mechanisms account for a significant number of head-scratching assay results, but unraveling them can be quite a challenge.

It's proving to be one for Addex. They've announced that they're going to focus on a few clinical programs, targeting orphan diseases in the major markets, and to do that, well. . .:

In executing this strategy and to maximize potential clinical success in at least two programs over the next 12 months, the company will reduce its overall cost structure, particularly around its early-stage discovery efforts, while maintaining its core competency and expertise in allosteric modulation. The result will be a development-focused company with a year cash runway. In addition, the company will seek to increase its cash position through non-dilutive partnerships by monetizing its platform capability as well as current discovery programs via licensing and strategic transactions.

That is the sound of the hatches being battened down. And that noise can be heard pretty often in the small-company part of the drug business. Too often, it comes down to "We can advance this compound in the clinic, enough to try to get more money from someone, or we can continue to do discovery research. But not both. Not now." Some companies have gone through this cycle several times, laying off scientists and then eventually hiring people back (sometimes some of the same people) when the money starts flowing again. But in the majority of these cases, I'd say that this turns out to be the beginning of the end. The failure rates in the clinic see to that - if you have to have your compounds work there, the very next ones you have, the only things you have on hand in order to survive, then the odds are not with you.

But that's what every small biopharma company faces: something has to work, or the money will run out. A lot of the managing of such an outfit consists of working out strategies to keep things going long enough. You can start from a better position than usual, if that's an option. You can pursue deals with larger companies early on, if you actually have something that someone might want (but you won't get as good a deal as you would have later, if what you're partnering actually works out). You can beat all sorts of bushes to raise cash, and try all sorts of techniques to keep it from being spent so quickly, or on the wrong things (as much as you can tell what those are).

But eventually, something has to work, or the music stops. Ditching everything except the clinical candidates is one of the last resorts, so I wish Addex good luck, which they (and all of us) will need.

Comments (14) + TrackBacks (0) | Category: Business and Markets | Drug Development

January 25, 2013

CETP, Alzheimer's, Monty Hall, and Roulette. And Goats.

Email This Entry

Posted by Derek

CETP, now there's a drug target that has incinerated a lot of money over the years. Here's a roundup of compounds I posted on back last summer, with links to their brutal development histories. I wondered here about what's going to happen with this class of compounds: will one ever make it as a drug? If it does, will it just end up telling us that there are yet more complications in human lipid handling that we didn't anticipate?

Well, Merck and Lilly are continuing their hugely expensive, long-running attempts to answer these questions. Here's an interview with Merck's Ken Frazier in which he sounds realistic - that is, nervous:

Merck CEO Ken Frazier, speaking in Davos on the sidelines of the World Economic Forum, said the U.S. drugmaker would continue to press ahead with clinical research on HDL raising, even though the scientific case so far remained inconclusive.

"The Tredaptive failure is another piece of evidence on the side of the scale that says HDL raising hasn't yet been proven," he said.

"I don't think by any means, though, that the question of HDL raising as a positive factor in cardiovascular health has been settled."

Tredaptive, of course, hit the skids just last month. And while its mechanism is not directly relevant to CETP inhibition (I think), it does illustrate how little we know about this area. Merck's anacetrapib is one of the ugliest-looking drug candidates I've ever seen (ten fluorines, three aryl rings, no hydrogen bond donors in sight), and Lilly's compound is only slightly more appealing.

But Merck finds itself having to bet a large part of the company's future in this area. Lilly, for its part, is betting similarly, and most of the rest of their future is being plunked down on Alzheimer's. And these two therapeutic areas have a lot in common: they're both huge markets that require huge clinical trials and rest on tricky fundamental biology. The huge market part makes sense; that's the only way that you could justify the amount of development needed to get a compound through. But the rest of the setup is worth some thought.

Is this what Big Pharma has come to, then? Placing larger and larger bets in hopes of a payoff that will make it all work out? If this were roulette, I'd have no trouble diagnosing someone who was using a Martingale betting system. There are a few differences, although I'm not sure how (or if) they cancel out. For one thing, the Martingale gambler is putting down larger and larger amounts of money in an attempt to win the same small payout (the sum of the initial bet!). Pharma is at least chasing a larger jackpot. But the second difference is that the house advantage at roulette is a fixed 5.26% (at least in the US), which is ruinous, but is at least a known quantity.
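
As an aside, the Martingale point is easy to check numerically. Here's a quick simulation of even-money bets on an American wheel - the bankroll, bet size, and session length are arbitrary assumptions, but the 5.26% edge is the real one:

```python
# Quick check on the Martingale point: doubling after each loss wins back only
# the original stake, and the house edge grinds the bankroll down regardless.
import random

def martingale_session(bankroll=10_000, base_bet=10, p_win=18/38, spins=200):
    """Even-money bets on an American wheel (18 winning pockets out of 38)."""
    bet = base_bet
    for _ in range(spins):
        if bet > bankroll:      # can't cover the next doubled bet - busted out
            break
        if random.random() < p_win:
            bankroll += bet
            bet = base_bet      # win: pocket the base stake, start over
        else:
            bankroll -= bet
            bet *= 2            # loss: double up to chase the same small payout
    return bankroll

results = [martingale_session() for _ in range(10_000)]
print(f"Average ending bankroll: {sum(results) / len(results):,.0f} (started at 10,000)")
# The average ends up below the starting bankroll: a 5.26% edge per spin does
# not go away, however the bets are sized.
```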

But mentioning "known quantities" brings up a third difference. The rules of casino games don't change (unless an Ed Thorp shows up, which was a one-time situation). The odds of drug discovery are subject to continuous change as we acquire more knowledge; it's more like the Monty Hall Paradox. The question is, have the odds changed enough in CETP (or HDL-raising therapies in general) or Alzheimer's to make this a reasonable wager?

For the former, well, maybe. There are theories about what went wrong with torcetrapib (a slight raising of blood pressure being foremost, last I heard), and Merck's compound seems to be dodging those. Roche's failure with dalcetrapib is worrisome, though, since the official reason there was sheer lack of efficacy in the clinic. And it's clear that there's a lot about HDL and LDL that we don't understand, both their underlying biology and their effects on human health when they're altered. So (to put things in terms of the Monty Hall problem), a tiny door has been opened a crack, and we may have caught a glimpse of some goat hair. But it could have been a throw rug, or a gorilla; it's hard to say.

What about Alzheimer's? I'm not even sure if we've learned as much as we have with CETP. The immunological therapies have been hard to draw conclusions from, because hey, it's the immune system. Every antibody is different, and can do different things. But the mechanistic implications of what we've seen so far are not that encouraging, unless, of course, you're giving interviews as an executive of Eli Lilly. The small-molecule side of the business is a bit easier to interpret; it's an unrelieved string of failures, one crater after another. We've learned a lot about Alzheimer's therapies, but what we've mostly learned is that nothing we've tried has worked much. In Monty Hall terms, the door has stayed shut (or perhaps has opened every so often to provide a terrifying view of the Void). At any rate, the flow of actionable goat-delivered information has been sparse.

Overall, then, I wonder if we really are at the go-for-the-biggest-markets-and-hope-for-the-best stage of research. The big companies are the ones with enough resources to tackle the big diseases; that's one reason we see them there. But the other reason is that the big diseases are the only things that the big companies think can rescue them.

Comments (4) + TrackBacks (0) | Category: Alzheimer's Disease | Cardiovascular Disease | Clinical Trials | Drug Development | Drug Industry History

January 24, 2013

Daniel Vasella Steps Down at Novartis

Email This Entry

Posted by Derek

So Daniel Vasella, longtime chairman of Novartis, has announced that he's stepping down. (He'll be replaced by Joerg Reinhardt, ex-Bayer, who was at Novartis before that). Vasella's had a long run. People on the discovery side of the business will remember him especially for the decision to base the company's research in Cambridge, which has led to (or at the very least accelerated the process of) many of the other big companies putting up sites there as well. Novartis is one of the most successful large drug companies in the world, avoiding the ferocious patent expiration woes of Lilly and AstraZeneca, and avoiding the gigantic merger disruptions of many others.

That last part, though, is perhaps an accident. Novartis did buy a good-sized stake in Roche at one point, and has apparently made, in vain, several overtures over the years to the holders of Roche's voting shares (many of whom are named "Hoffman-LaRoche" and live in very nice parts of Switzerland). And Vasella did oversee the 1996 merger between Sandoz and Ciba-Geigy that created Novartis itself, and he wasn't averse to big acquisitions per se, as the 2006 deal to buy Chiron shows.

It's those very deals, though, that have some investors cheering his departure. Reading that article, which is written completely from the investment side of the universe, is quite interesting. Try this out:

“He’s associated with what we can safely say are pretty value-destructive acquisitions,” said Eleanor Taylor-Jolidon, who manages about 400 million Swiss francs at Union Bancaire Privee in Geneva, including Novartis shares. “Everybody’s hoping that there’s going to be a restructuring now. I hope there will be a restructuring.” . . .

. . .“The shares certainly reacted to the news,” Markus Manns, who manages a health-care fund that includes Novartis shares at Union Investment in Frankfurt, said in an interview. “People are hoping Novartis will sell the Roche stake or the vaccines unit and use the money for a share buyback.”

Oh yes indeed, that's what we're all hoping for, isn't it? A nice big share buyback? And a huge restructuring, one that will stir the pot from bottom to top and make everyone wonder if they'll have a job or where it might be? Speed the day!

No, don't. All this illustrates the different world views that people bring to this business. The investors are looking to maximize their returns - as they should - but those of us in research see the route to maximum returns as going through the labs. That's what you'd expect from us, of course, but are we wrong? A drug company is supposed to find and develop drugs, and how else are you to do that? The investment community might answer that differently: a public drug company, they'd say, is like any other public company. It is supposed to produce value for its shareholders. If it can do that by producing drugs, then great, everything's going according to plan - but if there are other more reliable ways to produce that value, then the company should (must, in fact) avail itself of them.

And there's the rub. Most methods of making a profit are more reliable than drug discovery. Our returns on invested capital for internal projects are worrisome. Even when things work, it's a very jumpy, jerky business, full of fits and starts, with everything new immediately turning into a ticking bomb of a wasting asset due to patent expiry. Some investors understand this and are willing to put up with it in the hopes of getting in on something big. Other investors just want the returns to be smoother and more predictable, and are impatient for the companies to do something to make that happen. And others just avoid us entirely.

Comments (18) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

January 23, 2013

Eating A Whole Bunch of Random Compounds

Email This Entry

Posted by Derek

Reader Andy Breuninger, from completely outside the biopharma business, sends along what I think is an interesting question, and one that bears on a number of issues:

A question has been bugging me that I hope you might answer.

My understanding is that a lot of your work comes down to taking a seed molecule and exploring a range of derived molecules using various metrics and tests to estimate how likely they are to be useful drugs.

My question is this: if you took a normal seed molecule and a standard set of modifications, generated a set of derived molecules at random, and ate a reasonable dose of each, what would happen? Would 99% be horribly toxic? Would 99% have no effect? Would their effects be roughly the same or would one give you the hives, another nausea, and a third make your big toe hurt?

His impression of drug discovery is pretty accurate. It very often is just that: taking one or more lead compounds and running variations on them, trying to optimize potency, specificity, blood levels/absorption/clearance, toxicology, and so on. So, what do most of these compounds do in vivo?

My first thought is "Depends on where you start". There are several issues: (1) We tend to have a defined target in mind when we pick a lead compound, or (if it's a phenotypic assay that got us there), we have a defined activity that we've already seen. So things are biased right from the start; we're already looking at a higher chance of biological activity than you'd have by randomly picking something out of a catalog or drawing something on a board.

And the sort of target can make a big difference. There are an awful lot of kinase enzymes, for example, and compounds tend to cross-react with them, at least in the nearby families, unless you take a lot of care to keep that from happening. Compounds for the G-protein-coupled biogenic amine receptors tend to do that, too. On the other hand, you have enzymes like the cytochromes and binding sites like the aryl hydrocarbon receptor - these things are evolved to recognize all sorts of structurally disparate stuff. So against the right (or wrong!) sort of targets, you could expect to see a wide range of potential side activities, even before hitting the random ones.

(2) Some structural classes have a lot more biological activity than others. A lot of small-molecule drugs, for example, have some sort of basic amine in them. That's an important recognition element for naturally occurring substances, and we've found similar patterns in our own compounds. So something without nitrogens at all, I'd say, has a lower chance of being active in a living organism. (Barry Sharpless seems to agree with this). That's not to say that there aren't plenty of CHO compounds that can do you harm, just that there are proportionally more CHON ones that can.

Past that rough distinction, there are pharmacophores that tend to hit a lot, sometimes to the point that they're better avoided. Others are just the starting points for a lot of interesting and active compounds - piperazines and imidazoles are two cores that come to mind. I'd be willing to bet that a thousand random piperazines would hit more things than a thousand random morpholines (other things being roughly equal, like molecular weight and polarity), and either of them would hit a lot more than a thousand random cyclohexanes.

(3) Properties can make a big difference. The Lipinski Rule-of-Five criteria come in for a lot of bashing around here, but if I were forced to eat a thousand random compounds that fit those cutoffs, versus having the option to eat a thousand random ones that didn't, I sure know which ones I'd dig my spoon into.

And finally, (4): the dose makes the poison. If you go up enough in dose, it's safe to say that you're going to see an in vivo response to almost anything, including plenty of stuff at the supermarket. Similarly, I could almost certainly eat a microgram of any compound we have in our company's files with no ill effect, although I am not motivated to put that idea to the test. Same goes for the time that you're exposed. A lot of compounds are tolerated for single-dose tox but fail at two weeks. Compounds that make it through two weeks don't always make it to six months, and so on.

How closely you look makes the poison, too. We find that out all the time when we do animal studies - a compound that seems to cause no overt effects might be seen, on necropsy, to have affected some internal organs. And one that doesn't seem to have any visible signs on the tissues can still show effects in a full histopathology workup. The same goes for blood work and other analyses; the more you look, the more you'll see. If you get down to gene-chip analysis, looking at expression levels of thousands of proteins, then you'd find that most things at the supermarket would light up. Broccoli, horseradish, grapefruit, garlic and any number of other things would kick a full expression-profiling assay all over the place.

So, back to the question at hand. My thinking is that if you took a typical lead compound and dosed it at a reasonable level, along with a large set of analogs, then you'd probably find that if any of them had overt effects, they would probably have a similar profile (for good or bad) to whatever the most active compound was, just less of it. The others wouldn't be as potent at the target, or wouldn't reach the same blood levels. The chances of finding some noticeable but completely different activity would be lower, but very definitely non-zero, and would be wildly variable depending on the compound class. These effects might well cluster into the usual sorts of reactions that the body has to foreign substances - nausea, dizziness, headache, and the like. Overall, odds are that most of the compounds wouldn't show much, not being potent enough at any given target, or getting high enough blood levels to show something, but that's also highly variable. And if you looked closely enough, you'd probably find that they all did something, at some level.

Just in my own experience, I've seen one compound out of a series of dopamine receptor ligands suddenly turn up as a vasodilator, noticeable because of the "Rudolph the Red-Nosed Rodent" effect (red ears and tail, too). I've also seen compound series where they started crossing the blood-brain barrier more effectively at some point, which led to a sharp demarcation in the tolerability studies. And I've seen many cases, when we've started looking at broader counterscreens, where the change of one particular functional group completely knocked a compound out of (or into) activity in some side assay. So you can never be sure. . .

Comments (22) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharma 101 | Pharmacokinetics | Toxicology

January 22, 2013

The Theology of Ligand Efficiency

Email This Entry

Posted by Derek

So in my post the other day about halogen bonds, I mentioned my unease at sticking in things like bromine and iodine atoms, because of the molecular weight penalty involved. Now, it's only a penalty if you're thinking in terms of ligand efficiency - potency per size of the molecule. I think that it's a very useful concept - one that was unheard of when I started in the industry, but which has now made a wide impression. The idea is that you should try, as much as possible, to make every part of your molecule worth something. Don't hang a chain off unless you're getting binding energy for it, and don't hang a big group off unless you're getting enough binding energy to make it worthwhile.

But how does one measure "worthwhile", or measure ligand efficiency in general? There are several schools of thought. One uses potency divided by molecular weight - there are different ways to make this come out to some sort of standard number, but that's the key operation. Another way, though, is to use potency divided by number of heavy atoms. These two scales will give you answers that are quite close to each other if you're just working in the upper reaches of the periodic table - there's not much difference between carbon, nitrogen, and oxygen. Sulfur will start throwing things off, as will chlorine. But where the scales really give totally different answers, at least in common med-chem practice, is with bromine and iodine atoms. A single bromine (edit: fixed from earlier "iodine") weighs as much as a benzene ring, so the molecular-weight-based calculation takes a torpedo, while the heavy atom count just registers one more of the things.
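
To make that concrete, here's a quick back-of-the-envelope comparison in code (a sketch with invented numbers, not anyone's official metric: the per-heavy-atom version uses the common 1.37 x pIC50 / heavy-atom-count approximation, and the weight-based one is simply pIC50 per kilodalton):

```python
# Two ligand-efficiency conventions, applied to a hypothetical matched pair:
# a des-bromo parent and a bromo analog that picks up tenfold potency.
# All numbers are made up for illustration.

def le_per_heavy_atom(pic50, heavy_atoms):
    # Approximate binding energy per heavy atom (kcal/mol per atom, near 300 K)
    return 1.37 * pic50 / heavy_atoms

def le_per_kda(pic50, mw):
    # Potency normalized by molecular weight, expressed per kilodalton
    return pic50 / (mw / 1000.0)

compounds = {
    "des-bromo parent": {"pic50": 7.0, "heavy_atoms": 25, "mw": 340.0},
    "bromo analog":     {"pic50": 8.0, "heavy_atoms": 26, "mw": 419.0},  # +1 atom, +79 Da
}

for name, c in compounds.items():
    print("%-17s  LE per atom %.2f   LE per kDa %.1f"
          % (name, le_per_heavy_atom(c["pic50"], c["heavy_atoms"]),
             le_per_kda(c["pic50"], c["mw"])))
```

By heavy-atom count, the bromide looks like a clear win (one extra atom for a full log unit of potency), while by weight it actually slips a little, since that one bromine costs nearly eighty mass units. That's the whole disagreement in two lines of output.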

For that very reason, I've been in the molecular-weight camp. But TeddyZ of Practical Fragments showed up in the comments to the halogen bond post, recommending arguments for the other side. But now that I've checked those out, I'm afraid that I still don't find them very convincing.

That's because the post he's referring to makes the case against simple molecular weight cutoffs alone. I'm fine with that. There's no way that you can slice things up by a few mass units here and there in any meaningful way. But the issue here isn't just molecular weight, it's activity divided by weight, and in all the cases shown, the ligand efficiency for the targets of these compounds would have gone to pieces if the "smaller" analog were picked. From a ligand efficiency standpoint, these examples are straw men.

So I still worry about bromine and iodine. I think that they hurt a compound's properties, and that treating them as "one heavy atom", as if they were nitrogens, ignores that. Now, that halogen bond business can, in some cases, make up for that, but medicinal chemists should realize the tradeoffs they're making, in this case as in all the others. I wouldn't, for example, rule out an iodo compound as a drug candidate, just because it's an iodo compound. But that iodine had better be earning its keep (and probably would be doing so via a halogen bond). It has a lot to earn back, too, considering the possible effects on PK and compound stability. Those would be the first things I would check in detail if my iodo candidate led the list in the other factors, like potency and selectivity. Then I'd get it into tox as soon as possible - I have no feel whatsoever for how iodine-substituted compounds act in whole-animal tox studies, and I'd want to find out in short order. That, in fact, is my reaction to unusual structures of many kinds. Don't rule them out a priori; but get to the posteriori part, where you have data, as quickly as possible.

So, thoughts on heavy atoms? Are there other arguments to make in favor of ligand efficiency calculated that way, or do most people use molecular weight?

Comments (26) + TrackBacks (0) | Category: Drug Assays | Drug Development

January 21, 2013

That Many Compounds in Development? Really?

Email This Entry

Posted by Derek

So PhRMA has a press release out on the state of drug research, but it's a little hard to believe. This part, especially:

The report, developed by the Analysis Group and supported by PhRMA, reveals that more than 5,000 new medicines are in the pipeline globally. Of these medicines in various phases of clinical development, 70 percent are potential first-in-class medicines, which could provide exciting new approaches to treating disease for patients.

This set off discussion on Twitter and elsewhere about how these numbers could have been arrived at. Here's the report itself (PDF), and looking through it provides a few more details. Using figures that show up in the body of the report, that looks like 2164 compounds in Phase I, 2329 in Phase II, and 833 in Phase III. Of those, by far the greatest number are in oncology, where they have 1265, 1507, and 288 in Phase I, II, and III, respectively. Second is infectious disease (304/289/135), and third is neurology (256/273/74). It's worth noting that "Psychiatry" is a separate category all its own, by the way.
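
For what it's worth, those phase-by-phase figures do at least add up to the headline number. A quick tally, using only the counts quoted above:

```python
# Summing the phase counts quoted from the report
all_areas = {"Phase I": 2164, "Phase II": 2329, "Phase III": 833}
oncology  = {"Phase I": 1265, "Phase II": 1507, "Phase III": 288}

total = sum(all_areas.values())   # 5326 - the "more than 5,000" headline
onc = sum(oncology.values())      # 3060

print("All areas in the clinic:", total)
print("Oncology alone:", onc, "(%.0f%% of the total)" % (100.0 * onc / total))
```

So oncology by itself accounts for well over half of the claimed pipeline, which is worth keeping in mind when reading the rest of the report.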

An accompanying report (PDF) gives a few more specific figures. It claims, among other things, 66 medicines currently in clinical trials for Hepatitis C, 61 projects for ALS, and 158 for ovarian cancer. Now, it's good to have the exact numbers broken down. But don't those seem rather high?

Here's the section on how these counts were obtained:

Except where otherwise noted, data were obtained from EvaluatePharma, a proprietary commercial database with coverage of over 4,500 companies and approximately 50,000 marketed and pipeline products (including those on-market, discontinued, and in development), and containing historical data from 1986 onward. Pipeline information is available for each stage of development, defined as: Research Project, Preclinical, Phase I, II, III, Filed, and Approved. EvaluatePharma collects and curates information from publicly available sources and contains drug-related information such as company sponsor and therapy area. The data were downloaded on December 12, 2011.

While our interest is in drugs in development that have the potential to become new treatment options for U.S. patients, it is difficult to identify ex ante which drugs in development may eventually be submitted for FDA approval – development activity is inherently global, although regulatory review, launch, and marketing are market-specific. Because most drugs are intended for marketing in the U.S., the largest drug market in the world, we have not excluded any drugs in clinical development (i.e., in Phases I, II, or III). However, in any counts of drugs currently in regulatory review, we have excluded drugs that were not filed with the FDA.

Unless otherwise noted, the analysis in this report is restricted to new drug applications for medicines that would be reviewed as new molecular entities (NMEs) and to new indications for already approved NMEs. . .

Products are defined as having a unique generic name, such that a single product is counted exactly once (regardless of the number of indications being pursued).

That gives some openings for the higher-than-expected numbers. For one, those databases of company activities always seem to run on the high side, because many companies keep things listed as development compounds when they've really ceased any work on them (or in extreme cases, never even really started work at all). Second, there may be some oddities from other countries in there, where the standards for press releases are even lower. But we can rule out a third possibility, that single compounds are being counted across multiple indications. I think that the first-in-class figures are surely pumped up by the cases where there are several compounds all in development for the same (as yet unrealized) target, though. Finally, I think that there's some shuffling between "compounds" and "projects" taking place, with the latter having even larger figures.

I'm going to see in another post if I can break down any of these numbers further - who knows, maybe there are a lot more compounds in development than I think. But my first impression is that these numbers are much higher than I would have guessed. It would be very helpful if someone at PhRMA would release a list of the compounds they've counted from one of these indications, just to give us an idea. Any chance of that?

Comments (21) + TrackBacks (0) | Category: Clinical Trials | Drug Development

January 16, 2013

Drug Discovery With the Most Common Words

Email This Entry

Posted by Derek

I got caught up this morning in a challenge based on this XKCD strip, the famous "Up-Goer Five". If that doesn't ring a bell, have a look - it's an attempt to describe a Saturn V rocket while using only the most common 1000 words in English. You find, when you do that, that some of the words you really want to be able to use are not on the list - in the case of the Saturn V, "rocket" is probably the first obstacle of that sort you run into, thus "Up-Goer".

So I noticed on Twitter that people are trying to describe their own work using the same vocabulary list, and I thought I'd give it a try. (Here's a handy text editor that will tell you when you've stepped off the path). I quickly found that "lab", "chemical", "test", and "medicine" are not on the list, so there was enough of a challenge to keep me entertained. Here, then, is drug discovery and development, using the simple word list:

I find things that people take to get better when they are sick. These are hard to make, and take a lot of time and money. When we have a new idea, most of them don't actually work, because we don't know everything we need to about how people get sick in the first place. It's like trying to fix something huge, in the dark, without a book to help.

So we have to try over and over, and we often get surprised by what happens. We build our new stuff by making its parts bigger or smaller, or we join a new piece to one end, or we change one part out for another to see if it works better. Some of our new things are not strong enough. Others break down too fast or stay in the body too long, and some would do too many other things to the people who take them (and maybe even make them more sick than they were). To try to fix all of these at the same time is not easy, of course. When we think we've found one, it has to get past all of those problems, and then we have to be able to make a lot of it exactly the same way every time so that we can go to the next part.

And that part is where most of the money and time come in. First, we try our best idea out on a small animal to make sure that it works like we think it will. Only after that we can ask people to take it. First people who are not sick try it, just to make sure, then a few sick ones, then a lot of sick ones of many types. Then, if it still works, we take all our numbers and ask if it is all right to let everyone who is sick buy our new stuff, and to let a doctor tell them to take it.

If they say yes, we have to do well with it as fast as we can, which doesn't always work out, either. That's because there can still be a problem even after all that work. Even if there isn't, after some time (more than a year or two) someone else can let these people buy it, too, and for less. While all that is going on, we are back trying to find another new one before this one runs out, and we had better.

Not everyone likes us. Our stuff can be a lot of money for people. It may not work as well as someone wants it to, or they may not like how we talk with their doctor (and they may have a point there). Even so, many people have no idea of what we do, how hard it is, or how long it can take. But no one has got any other way to do it, at least not yet!

There, that's fairly accurate, and it even manages to sound like me in some parts. Pity there's no Latin on the list, though.
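
By the way, if you'd rather run the check yourself than use the linked editor, a few lines of code will do the same job. (A sketch only: "ten_hundred_words.txt" is a placeholder for whatever copy of the thousand-word list you have on hand, and real checkers are more forgiving about plurals and other word endings.)

```python
import re

# Load the allowed words, one per line, from a local copy of the list
with open("ten_hundred_words.txt") as f:
    allowed = {line.strip().lower() for line in f if line.strip()}

def words_off_the_list(text):
    # Return the words in the text that aren't among the most common ones
    words = re.findall(r"[a-z']+", text.lower())
    return sorted(set(words) - allowed)

print(words_off_the_list("We find things that people take to get better when they are sick."))
```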

Update: here are some more people describing their work, in the comments over at Just Like Cooking. And I should note that someone has already remarked to me that "This is an explanation that even Marcia Angell could understand".

Comments (32) + TrackBacks (0) | Category: Drug Development

December 18, 2012

Lilly's Two-Drugs-a-Year Prediction

Email This Entry

Posted by Derek

Drug research consultant Bernard Munos popped in the comments here the other day and mentioned this story from 2010 in the Indianapolis Business Journal. That's where we can find Eli Lilly's prediction that they were going to start producing two new drugs per year, starting in 2013. Since that year is nearly upon us, how's that looking?

Not too well. Back in 2010, Lilly's CEO (John Lechleiter) was talking up the company's plans to weather its big patent expirations, including that two-a-year forecast. Since then, the company has had a brutal string of late-stage clinical failures. In addition to the ones in that article, Lilly's had to withdraw Xigris, and results for edivoxetine are mixed. No wonder we're hearing so much about the not-too-impressive Alzheimer's drugs from them.

But, as I said here, what would I have done differently, were I to have had the misfortune of having to run Eli Lilly? I might not have placed such a big bet on Alzheimer's, but I probably would have found equally unprofitable ways to spend the money. (And in the end, the company deserves credit for taking on such an intractable disease - just no one tell Marcia Angell; she doesn't think anyone in the drug industry does any such thing).

About the only thing I'm sure of is that I wouldn't have gone around telling people that we were going to start launching two drugs a year. No one's ever been able to keep to that pace, not even in the best of times, and these sure aren't the best of times. It's tempting to think about telling the investors and the analysts that we're going to work as hard as we can, using our brains as much as we can, and we're going to launch what we're going to launch, when it's darn well ready to be launched. And past that, no predictions, OK? The only problem is, the stock market wouldn't stand for it. Ken Frazier at Merck tried something a bit like this, and it sure didn't seem to last long. Is happy talk what everyone would rather hear?

Comments (14) + TrackBacks (0) | Category: Business and Markets | Drug Development

December 3, 2012

Marcia Angell's Interview: I Just Can't

Email This Entry

Posted by Derek

I have tried to listen to this podcast with Marcia Angell, on drug companies and their research, but I cannot seem to make it all the way through. I start shouting at the screen, at the speakers, at the air itself. In case you're wondering about whether I'm overreacting, at one point she makes the claim that drug companies don't do much innovation, because most of our R&D budget is spent on clinical trials, and "everyone knows how to do a clinical trial". See what I mean?

Angell has many very strongly held opinions on the drug business. But her take on R&D has always seemed profoundly misguided to me. From what I can see, she thinks that identifying a drug target is the key step, and that everything after that is fairly easy, fairly cheap, and very, very profitable. This is not correct. Really, really, not correct. She (and those who share this worldview, such as her co-author) believe that innovation has fallen off in the industry, but that this has happened mostly by choice. Considering the various disastrously expensive failures the industry has gone through while trying to expand into new diseases, new indications, and new targets, I find this line of argument hard to take.

So, I see, does Alex Tabarrok. I very much enjoyed that post; it does some of the objecting for me, and illustrates why I have such a hard time dealing point-by-point with Angell and her ilk. The misconceptions are large, various, and ever-shifting. Her ideas about drug marketing costs, which Tabarrok especially singles out, are a perfect example (and see some of those other links to my old posts, where I make some similar arguments to his).

So no, I don't think that Angell has changed her opinions much. I sure haven't changed mine.

Comments (59) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices | Why Everyone Loves Us

November 30, 2012

A Broadside Against The Way We Do Things Now

Email This Entry

Posted by Derek

There's a paper out in Drug Discovery Today with the title "Is Poor Research the Cause of Declining Productivity in the Drug Industry?" After reviewing the literature on phenotypic versus target-based drug discovery, the author (Frank Sams-Dodd) asks (and has asked before):

The consensus of these studies is that drug discovery based on the target-based approach is less likely to result in an approved drug compared to projects based on the physiological-based approach. However, from a theoretical and scientific perspective, the target-based approach appears sound, so why is it not more successful?

He makes the points that the target-based approach has the advantages of (1) seeming more rational and scientific to its practitioners, especially in light of the advances in molecular biology over the last 25 years, and (2) seeming more rational and scientific to the investors:

". . .it presents drug discovery as a rational, systematic process, where the researcher is in charge and where it is possible to screen thousands of compounds every week. It gives the image of industrialisation of applied medical research. By contrast, the physiology-based approach is based on the screening of compounds in often rather complex systems with a low throughput and without a specific theory on how the drugs should act. In a commercial enterprise with investors and share-holders demanding a fast return on investment it is natural that the drug discovery efforts will drift towards the target-based approach, because it is so much easier to explain the process to others and because it is possible to make nice diagrams of the large numbers of compounds being screened.

This is the "Brute Force bias". And he goes on to another key observation: that this industrialization (or apparent industrialization) meant that there were a number of processes that could be (in theory) optimized. Anyone who's been close to a business degree knows how dear process optimization is to the heart of many management theorists, consultants, and so on. And there's something to that, if you're talking about a defined process like, say, assembling pickup trucks or packaging cat litter. This is where your six-sigma folks come in, your Pareto analysis, your Continuous Improvement people, and all the others. All these things are predicated on the idea that there is a Process out there.

See if this might sound familiar to anyone:

". . .the drug dis- covery paradigm used by the pharmaceutical industry changed from a disease-focus to a process-focus, that is, the implementation and organisation of the drug discovery process. This meant that process-arguments became very important, often to the point where they had priority over scientific considerations, and in many companies it became a requirement that projects could conform to this process to be accepted. Therefore, what started as a very sensible approach to drug discovery ended up becoming the requirement that all drug dis- covery programmes had to conform to this approach – independently of whether or not sufficient information was available to select a good target. This led to dogmatic approaches to drug discovery and a culture developed, where new projects must be presented in a certain manner, that is, the target, mode-of-action, tar- get-validation and screening cascade, and where the clinical manifestation of the disease and the biological basis of the disease at systems-level, that is, the entire organism, were deliberately left out of the process, because of its complexity and variability.

But are we asking too much when we declare that our drugs need to work through single defined targets? Beyond that, are we even asking too much when we declare that we need to understand the details of how they work at all? Many of you will have had such thoughts (and they've been expressed around here as well), but they can tend to sound heretical, especially that second one. But that gets to the real issue, the uncomfortable, foot-shuffling, rather-think-about-something-else question: are we trying to understand things, or are we trying to find drugs?

"False dichotomy!", I can hear people shouting. "We're trying to do both! Understanding how things work is the best way to find drugs!" In the abstract, I agree. But given the amount there is to understand, I think we need to be open to pushing ahead with things that look valuable, even if we're not sure why they do what they do. There were, after all, plenty of drugs discovered in just that fashion. A relentless target-based environment, though, keeps you from finding these things at all.

What it does do, though, is provide vast opportunities for keeping everyone busy. And not just "busy" in the sense of working on trivia, either: working out biological mechanisms is very, very hard, and in no area (despite decades of beavering away) can we say we've reached the end and achieved anything like a complete picture. There are plenty of areas that can and will soak up all the time and effort you can throw at them, and yield precious little in the way of drugs at the end of it. But everyone was working hard, doing good science, and doing what looked like the right thing.

This new paper spends quite a bit of time on the mode-of-action question. It makes the point that understanding the MoA is something that we've imposed on drug discovery, not an intrinsic part of it. I've gotten some funny looks over the years when I've told people that there is no FDA requirement for details of a drug's mechanism. I'm sure it helps, but in the end, it's efficacy and safety that carry the day, and both of those are determined empirically: did the people in the clinical trials get better, or worse?

And as for those times when we do have mode-of-action information, well, here are some fighting words for you:

". . .the ‘evidence’ usually involves schematic drawings and flow-diagrams of receptor complexes involving the target. How- ever, it is almost never understood how changes at the receptor or cellular level affect the phy- siology of the organism or interfere with the actual disease process. Also, interactions between components at the receptor level are known to be exceedingly complex, but a simple set of diagrams and arrows are often accepted as validation for the target and its role in disease treatment even though the true interactions are never understood. What this in real life boils down to is that we for almost all drug discovery programmes only have minimal insight into the mode-of-action of a drug and the biological basis of a disease, meaning that our choices are essentially pure guess-work.

I might add at this point that the emphasis on defined targets and mode of action has been so much a part of drug discovery in recent times that it's convinced many outside observers that target ID is really all there is to it. Finding and defining the molecular target is seen as the key step in the whole process; everything past that is just some minor engineering (and marketing, naturally). The fact that this point of view is a load of fertilizer has not slowed it down much.

I think that if one were to extract a key section from this whole paper, though, this one would be a good candidate:

". . .it is not the target-based approach itself that is flawed, but that the focus has shifted from disease to process. This has given the target-based approach a dogmatic status such that the steps of the validation process are often conducted in a highly ritualised manner without proper scientific analysis and questioning whether the target-based approach is optimal for the project in question.

That's one of those "Don't take this in the wrong way, but. . ." statements, which are, naturally, always going to be taken in just that wrong way. But how many people can deny that there's something to it? Almost no one denies that there's something not quite right, with plenty of room for improvement.

What Sams-Dodd has in mind for improvement is a shift towards looking at diseases, rather than targets or mechanisms. For many people, that's going to be one of those "Speak English, man!" moments, because for them, finding targets is looking at diseases. But that's not necessarily so. We would have to turn some things on their heads a bit, though:

In recent years there have been considerable advances in the use of automated processes for cell-culture work, automated imaging systems for in vivo models and complex cellular systems, among others, and these developments are making it increasingly possible to combine the process-strengths of the target-based approach with the disease-focus of the physiology-based approach, but again these technologies must be adapted to the research question, not the other way around.

One big question is whether the investors funding our work will put up with such a change, or with such an environment even if we did establish it. And that gets back to the discussion of Andrew Lo's securitization idea, the talk around here about private versus public financing, and many other topics. Those I'll reserve for another post. . .

Comments (30) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History | Who Discovers and Why

November 29, 2012

When Drug Launches Go Bad

Email This Entry

Posted by Derek

For those connoisseurs of things that have gone wrong, here's a list of the worst drug launches of recent years. And there are some rough ones in there, such as Benlysta, Provenge, and (of course) Makena. And from an aesthetic standpoint, it's hard not to think that if you name your drug Krystexxa that you deserve what you get. Read up and try to avoid being part of such a list yourself. . .

Comments (8) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices

Roche Repurposes

Email This Entry

Posted by Derek

Another drug repurposing initiative is underway, this one between Roche and the Broad Institute. The company is providing 300 failed clinical candidates to be run through new assays, in the hopes of finding a use for them.

I hope something falls out of this, because any such compounds will naturally have a substantial edge in further development. They should all have been through toxicity testing, they've had some formulations work done on them, a decent scale-up route has been identified, and so on. And many of these candidates fell out in Phase II, so there's even human pharmacokinetic data on them.

On the other hand (there's always another hand), you could also say that this is just another set of 300 plausible-looking compounds, and what does a 300-compound screening set get you? The counterargument to this is that these structures have not only been shown to have good absorption and distribution properties (no small thing!), they've also been shown to bind well to at least one target, which means that they may well be capable of binding well to other similar motifs in other active sites. But the counterargument to that is that now you've removed some of those advantages in the paragraph above, because any hits will now come with selectivity worries, since they come with guaranteed activity against something else.

This means that the best case for any repurposed compound is for its original target to be good for something unanticipated. So that Roche collection of compounds might also be thought of as a collection of failed targets, although I doubt if there are a full 300 of those in there. Short of that, every repurposing attempt is going to come with its own issues. It's not that I think these shouldn't be tried - why not, as long as it doesn't cost too much - but things could quickly get more complicated than they might have seemed. And that's a feeling that any drug discovery researcher will recognize like an old, er, friend.

For more on the trickiness of drug repurposing, see John LaMattina here and here. And the points he raises get to the "as long as it doesn't cost too much" line in the last paragraph. There's opportunity cost involved here, too, of course. When the Broad Institute (or Stanford, or the NIH) screens old pharma candidates for new uses, they're doing what a drug company might do itself, and therefore possibly taking away from work that only they could be doing instead. Now, I think that the Broad (for example) already has a large panel of interesting screens set up, so running the Roche compounds through them couldn't hurt, and might not take that much more time or effort. So why not? But trying to push repurposing too far could end up giving us the worst of both worlds. . .

Comments (14) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

November 28, 2012

Every Tiny Detail

Email This Entry

Posted by Derek

Via Chemjobber, we have here an excellent example of how much detail you have to get into if you're seriously making a drug for the market. When you have to account for every impurity, and come up with procedures that generate the same ones within the same tight limits every time, this is the sort of thing you have to pay attention to: how you dry your compound. And how long. And why. Because if you don't, huge amounts of money (time, lost revenue, regulatory trouble, lawsuits) are waiting. . .

Comments (5) + TrackBacks (0) | Category: Analytical Chemistry | Chemical News | Drug Development

November 19, 2012

The Novartis Pipeline

Email This Entry

Posted by Derek

This would seem to be inviting the wrath of the Drug Development Gods, and man, are they a testy bunch: "Novartis could produce 14 or more new big-selling 'blockbuster' drugs within five years . . ."

I'll certainly wish them luck on that, and it certainly seems true that Novartis research has been productive. But think back - how many press releases have you seen over the years where Drug Company A predicts X number of big product launches in the next Y years? And how many of those schedules have ever quite worked out? The most egregious examples of this take the form of claiming that your new strategy/platform/native genius/good looks have now allowed you to deliver these things on some sort of regular schedule. When you hear someone talking about how even though they haven't been able to do anything like it in the past, they're going to start unleashing a great new drug product launch every year (or every 18 months, what have you) from here on out, run.

Now, Novartis isn't talking like this, and they have a much better chance of delivering on this than most, but still. Might it not be better just to creep up on people with all those great new products in hand, rather than risk disappointment?

Comments (10) + TrackBacks (0) | Category: Business and Markets | Drug Development

November 13, 2012

Nassim Taleb on Scientific Discovery

Email This Entry

Posted by Derek

There's an interesting article posted on Nassim Taleb's web site, titled "Understanding is a Poor Substitute for Convexity (Antifragility)". It was recommended to me by a friend, and I've been reading it over for its thoughts on how we do drug research. (This would appear to be an excerpt from, or summary of, some of the arguments in the new book Antifragile: Things That Gain from Disorder, which is coming out later this month).

Taleb, of course, is the author of The Black Swan and Fooled by Randomness, which (along with his opinions about the recent financial crises) have made him quite famous.

So this latest article is certainly worth reading, although much of it reads like the title, that is, written in fluent and magisterial Talebian. This blog post is being written partly for my own benefit, so that I make sure to go to the trouble of a translation into my own language and style. I've got my idiosyncrasies, for sure, but I can at least understand my own stuff. (And, to be honest, a number of my blog posts are written in that spirit, of explaining things to myself in the process of explaining them to others).

Taleb starts off by comparing two different narratives of scientific discovery: luck versus planning. Any number of works contrast those two. I'd say that the classic examples of each (although Taleb doesn't reference them in this way) are the discovery of penicillin and the Manhattan Project. Not that I agree with either of those categorizations - Alexander Fleming, as it turns out, was an excellent microbiologist, very skilled and observant, and he always checked old culture dishes before throwing them out just to see what might turn up. And, it has to be added, he knew what something interesting might look like when he saw it, a clear example of Pasteur's quote about fortune and the prepared mind. On the other hand, the Manhattan Project was a tremendous feat of applied engineering, rather than scientific discovery per se. The moon landings, often used as a similar example, are exactly the same sort of thing. The underlying principles of nuclear fission had been worked out; the question was how to purify uranium isotopes to the degree needed, and then how to bring a mass of the stuff together quickly and cleanly enough. These processes needed a tremendous amount of work (it wasn't obvious how to do either one, and multiple approaches were tried under pressure of time), but the laws of (say) gaseous diffusion were already known.

But when you look over the history of science, you see many more examples of fortunate discoveries than you see of planned ones. Here's Taleb:

The luck versus knowledge story is as follows. Ironically, we have vastly more evidence for results linked to luck than to those coming from the teleological, outside physics —even after discounting for the sensationalism. In some opaque and nonlinear fields, like medicine or engineering, the teleological exceptions are in the minority, such as a small number of designer drugs. This makes us live in the contradiction that we largely got here to where we are thanks to undirected chance, but we build research programs going forward based on direction and narratives. And, what is worse, we are fully conscious of the inconsistency.

"Opaque and nonlinear" just about sums up a lot of drug discovery and development, let me tell you. But Taleb goes on to say that "trial and error" is a misleading phrase, because it tends to make the two sound equivalent. What's needed is an asymmetry: the errors need to be as painless as possible, compared to the payoffs of the successes. The mathematical equivalent of this property is called convexity; a nonlinear convex function is one with larger gains than losses. (If they're equal, the function is linear). In research, this is what allows us to "harvest randomness", as the article puts it.

An example of such a process is biological evolution: most mutations are harmless and silent. Even the harmful ones will generally just kill off the one organism with the misfortune to bear them. But a successful mutation, one that enhances survival and reproduction, can spread widely. The payoff is much larger than the downside, and the mutations themselves come along for free, since some looseness is built into the replication process. It's a perfect situation for blind tinkering to pay off: the winners take over, and the losers disappear.

Taleb goes on to say that "optionality" is another key part of the process. We're under no obligation to follow up on any particular experiment; we can pick the one that worked best and toss the rest. This has its own complications, since we have our own biases and errors of judgment to contend with, as opposed to the straightforward questions of evolution ("Did you survive? Did you breed?"). But overall, it's an important advantage.

The article then introduces the "convexity bias", which is defined as the difference between a system with equal benefit and harm for trial and error (linear) and one where the upsides are higher (nonlinear). The greater the split between those two, the greater the convexity bias, and the more volatile the environment, the greater the bias is as well. This is where Taleb introduces another term, "antifragile", for phenomena that have this convexity bias, because they're equipped to actually gain from disorder and volatility. (His background in financial options is apparent here). What I think of at this point is Maxwell's demon, extracting useful work from randomness by making decisions about which molecules to let through his gate. We scientists are, in this way of thinking, members of the same trade union as Maxwell's busy creature, since we're watching the chaos of experimental trials and natural phenomena and letting pass the results we find useful. (I think Taleb would enjoy that analogy). The demon is, in fact, optionality manifested and running around on two tiny legs.
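
Here's a toy version of that asymmetry in code, just to make it concrete (all numbers invented; the point is the shape of the result, not the values). One strategy is committed to whatever each trial delivers, for better or worse; the other pays a small fixed cost per trial and keeps only the winners:

```python
import random

random.seed(1)

def simulate(volatility, n_trials=200_000, cost=0.05):
    # Each trial's underlying outcome is a zero-mean random draw whose spread
    # is set by `volatility`. The "linear" strategy takes the outcome as it
    # comes; the "convex" one pays a small cost per try and drops the losers.
    linear = 0.0
    convex = 0.0
    for _ in range(n_trials):
        outcome = random.gauss(0.0, volatility)
        linear += outcome
        convex += max(outcome, 0.0) - cost
    return linear / n_trials, convex / n_trials

for vol in (0.5, 1.0, 2.0):
    lin, con = simulate(vol)
    print("volatility %.1f: linear %+.3f   convex %+.3f" % (vol, lin, con))
```

The committed strategy averages out to about zero no matter how noisy things get, while the option-like one does better the more volatile the draws become. That gap is the convexity bias, and it's why "harvesting randomness" is more than a turn of phrase.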

Meanwhile, a more teleological (that is, aimed and coherent) approach is damaged under these same conditions. Uncertainty and randomness mess up the timelines and complicate the decision trees, and it just gets worse and worse as things go on. It is, by these terms, fragile.

Taleb ends up with seven rules that he suggests can guide decision making under these conditions. I'll add my own comments to these in the context of drug research.

(1) Under some conditions, you'd do better to improve the payoff ratio than to try to increase your knowledge about what you're looking for. One way to do that is to lower the cost-per-experiment, so that a relatively fixed payoff then is larger in comparison. The drug industry has realized this, naturally: our payoffs are (in most cases) somewhat out of our control, although the marketing department tries as hard as possible. But our costs per experiment range from "not cheap" to "potentially catastrophic" as you go from early research to Phase III. Everyone's been trying to bring down the costs of later-stage R&D for just these reasons.

(2) A corollary is that you're better off with as many trials as possible. Research payoffs, as Taleb points out, are very nonlinear indeed, with occasional huge winners accounting for a disproportionate share of the pool. If we can't predict these - and we can't - we need to make our nets as wide as possible. This one, too, is appreciated in the drug business, but it's a constant struggle on some scales. In the wide view, this is why the startup culture here in the US is so important, because it means that a wider variety of ideas are being tried out. And it's also, in my view, why so much M&A activity has been harmful to the intellectual ecosystem of our business - different approaches have been swallowed up, and they disappear as companies decide, internally, on the winners.

And inside an individual company, portfolio management of this kind is appreciated, but there's a limit to how many projects you can keep going. Spread yourself too thin, and nothing will really have a chance of working. Staying close to that line - enough projects to pick up something, but not so many as to starve them all - is a full-time job.

(3) You need to keep your "optionality" as strong as possible over as long a time as possible - that is, you need to be able to hit a reset button and try something else. Taleb says that plans ". . .need to stay flexible with frequent ways out, and counter to intuition, be very short term, in order to properly capture the long term. Mathematically, five sequential one-year options are vastly more valuable than a single five-year option." I might add, though, that they're usually priced accordingly (and as Taleb himself well knows, looking for those moments when they're not priced quite correctly is another full-time job).

(4) This one is called "Nonnarrative Research", which means the practice of investing with people who have a history of being able to do this sort of thing, regardless of their specific plans. And "this sort of thing" generally means a lot of that third recommendation above, being able to switch plans quickly and opportunistically. The history of many startup companies will show that their eventual success often didn't bear as much relation to their initial business plan as you might think, which means that "sticking to a plan", as a standalone virtue, is overrated.

At any rate, the recommendation here is not to buy into the story just because it's a good story. I might draw the connection here with target-based drug discovery, which is all about good stories.

(5) Theory comes out of practice, rather than practice coming out of theory. Ex post facto histories, Taleb says, often work the story around to something that looks more sensible, but his claim is that in many fields, "tinkering" has led to more breakthroughs than attempts to lay down new theory. His reference is to this book, which I haven't read, but is now on my list.

(6) There's no built-in payoff for complexity (or for making things complex). "In academia," though, he says, "there is". Don't, in other words, be afraid of what look like simple technologies or innovations. They may, in fact, be valuable, but have been ignored because of this bias towards the trickier-looking stuff. What this reminds me of is what Philip Larkin said he learned by reading Thomas Hardy: never be afraid of the obvious.

(7) Don't be afraid of negative results, or paying for them. The whole idea of optionality is finding out what doesn't work, and ideally finding that out in great big swaths, so we can narrow down to where the things that actually work might be hiding. Finding new ways to generate negative results more quickly and cheaply, which can mean new ways to recognize them earlier, is very valuable indeed.

Taleb finishes off by saying that people have criticized such proposals as the equivalent of buying lottery tickets. But lottery tickets, he notes, are terribly overpriced, because people are willing to overpay for a shot at a big payoff on long odds. But lotteries have a fixed upper bound, whereas R&D's upper bound is completely unknown. And Taleb gets back to his financial-crisis background by noting that the history of banking and finance shows the folly of betting against long shots ("What are the odds of this strategy suddenly going wrong?"), and that in this sense, research is a form of reverse banking.

Well, those of you out there who've heard the talk I've been giving in various venues (and in slightly different versions) the last few months may recognize that point, because I have a slide that basically says that drug research is the inverse of Wall Street. In finance, you try to lay off risk, hedge against it, amortize it, and go for the steady payoff strategies that (nonetheless) once in a while blow up spectacularly and terribly. Whereas in drug research, risk is the entire point of our business (a fact that makes some of the business-trained people very uncomfortable). We fail most of the time, but once in a while have a spectacular result in a good direction. Wall Street goes short risk; we have to go long.

I've been meaning to get my talk up on YouTube or the like; and this should force me to finally get that done. Perhaps this weekend, or over the Thanksgiving break, I can put it together. I think it fits in well with what Taleb has to say.

Comments (27) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

October 30, 2012

JQ1: Giving Up a Fortune?

Email This Entry

Posted by Derek

The Atlantic is out with a list of "Brave Thinkers", and one of them is Jay Bradner at Harvard Medical School. He's on there for JQ1, a small-molecule bromodomain ligand that was reported in 2010. (I note, in passing, that once again nomenclature has come to the opposite of our rescue, since bromodomains have absolutely nothing to do with bromine, in contrast to 98% of all the other words that begin with "bromo-")

These sorts of compounds have been very much in the news recently, as part of the whole multiyear surge in epigenetic research. Drug companies, naturally, are looking to the epigenetic targets that might be amenable to small-molecule intervention, and bromodomains seem to qualify (well, some of them do, anyway).

At any rate, JQ1 is a perfectly reasonable probe compound for bromodomain studies, but it got a lot of press a couple of months ago as a potential male contraceptive. I found all that wildly premature - a compound like this one surely sets off all kinds of effects in vivo, and disruption of spermatogenesis is only one of them. Note (PDF) that it hits a variety of bromodomain subtypes, and we only have the foggiest notion of what most of these are doing in real living systems.

The Atlantic, for its part, makes much of Bradner's publishing JQ1 instead of patenting it:

The monopoly on developing the molecule that Bradner walked away from would likely have been worth a fortune (last year, the median value for U.S.-based biotech companies was $370 million). Now four companies are building on his discovery—which delights Bradner, who this year released four new molecules. “For years, drug discovery has been a dark art performed behind closed doors with the shades pulled,” he says. “I would be greatly satisfied if the example of this research contributed to a change in the culture of drug discovery.”

But as Chemjobber rightly says, the idea that Bradner walked away from a fortune is ridiculous. JQ1 is not a drug, nor is it ever likely to become a drug. It has inspired research programs to find drugs, but they likely won't look much (or anything) like JQ1, and they'll do different things (for one, they'll almost surely be more selective). In fact, chasing after that sort of selectivity is one of the things that Bradner's own research group appears to be doing - and quite rightly - while his employer (Dana-Farber) is filing patent applications on JQ1 derivatives. Quite rightly.

Patents work differently in small-molecule drug research than most people seem to think. (You can argue, in fact, that it's one of the areas where the system works most like it was designed to, as opposed to often-abominable patent efforts in software, interface design, business methods, and the like). People who've never had to work with them have ideas about patents being dark, hidden boxes of secrets, but one of the key things about a patent is disclosure. You have to tell people what your invention is, what it's good for, and how to replicate it, or you don't have a valid patent.

Admittedly, there are patent applications that do not make all of these steps easy - a case in point would be the ones from Exelixis. (I wrote here about my onetime attempts to figure out the structures of some of their lead compounds from their patent filings. Not long ago I had a chance to speak with someone who was there at the time, and he was happy to hear that I'd come up short, saying that this had been exactly the plan). But at the same time, all their molecules were in there, along with all the details of how to make them. And the claims of the patents detailed exactly why they were interested in such compounds, and what they planned to do with them as drugs. You could learn a lot about what Exelixis was up to; it was just that finding out the exact structure of the clinical candidate was tricky. A patent application on JQ1 would have actually ended up disclosing most (or all) of what the publication did.

I'm not criticizing Prof. Bradner and his research group here. He's been doing excellent work in this area, and his papers are a pleasure to read. But the idea that Harvard Medical School and Dana-Farber would walk away from a pharma fortune is laughable.

Comments (33) + TrackBacks (0) | Category: Cancer | Chemical Biology | Drug Development | Patents and IP

October 17, 2012

Zafgen's Epoxide Adventure

Email This Entry

Posted by Derek

Zafgen is a startup in the Boston area that's working on a novel weight-loss drug called beloranib. Their initial idea was that they were inhibiting angiogenesis in adipose tissue, through inhibition of methionine aminopeptidase-2. But closer study showed that while the compound was indeed causing significant weight loss in animal models, it wasn't through that mechanism. Blood vessel formation wasn't affected, but the current thinking is that Met-AP2 inhibition is affecting fatty acid synthesis and causing more usage of lipid stores.

But when they say "novel", they do mean it. Behold one of the more unlikely-looking drugs to make it through Phase I:
Beloranib.png
Natural-product experts in the audience might experience a flash of recognition. That's a derivative of fumagillin, a compound from Aspergillus that's been kicking around for many years now. And its structure brings up a larger point about reactive groups in drug molecules, the kind that form covalent bonds with their targets.

I wrote about covalent drugs here a few years ago, and the entire concept has been making a comeback. (If anyone was unsure about that, Celgene's purchase of Avila was the convincer). Those links address the usual pros and cons of the idea: on the plus side, slow off rates are often beneficial in drug mechanisms, and you don't get much slower than covalency. On the minus side, you have to worry about selectivity even more, since you really don't want to go labeling across the living proteome. You have the mechanisms of the off-target proteins to worry about once you shut them down, and you also have the ever-present fear of setting off an immune response if the tagged protein ends up looking sufficiently alien.

I'm not aware of any published mechanistic studies of beloranib, but it is surely another one of this class, with those epoxides. (Looks like it's thought to go after a histidine residue, by analogy to fumagillin's activity against the same enzyme). But here's another thing to take in: epoxides are not as bad as most people think they are. We organic chemists see them and think that they're just vibrating with reactivity, but as electrophiles, they're not as hot as they look.

That's been demonstrated by several papers from the Cravatt labs at Scripps. (He still is at Scripps, right? You need a scorecard these days). In this work, they showed that some simple epoxides, when exposed to entire proteomes, really didn't label many targets at all compared to the other electrophiles on their list. And here, in an earlier paper, they looked at fumagillin-inspired spiroepoxide probes specifically, and found an inhibitor of phosphoglycerate mutase 1. But a follow-up SAR study of that structure showed that it was very picky indeed - you had to have everything lined up right for the epoxide to react, and very close analogs had no effect. Taken together, the strong implication is that epoxides can be quite selective, and thus can be drugs. You still want to be careful, because the toxicology literature is still rather vocal on the subject, but if you're in the less reactive/more structurally complex/more selective part of that compound space, you might be OK. We'll see if Zafgen is.

Comments (21) + TrackBacks (0) | Category: Chemical Biology | Diabetes and Obesity | Drug Development

October 11, 2012

IGFR Therapies Wipe Out. And They're Not Alone.

Email This Entry

Posted by Derek

Here's a look at something that doesn't make many headlines: the apparent failure of an entire class of potential drugs. The insulin-like growth factor 1 receptor (IGF-1R) has been targeted for years now, from a number of different angles. There have been several antibodies tried against it, and companies have also tried small molecule approaches such as inhibiting the associated receptor kinase. (I was on such a project myself a few years back). So far, nothing has worked out.

And as that review shows, this was a very reasonable-sounding idea. Other growth factor receptors have been successful cancer targets (notably EGFR), and there was evidence of IGFR over-expression in several widespread cancer types (and evidence from mouse models that inhibiting it would have the desired effect). The rationale here was as solid as anything we have, but reality has had other ideas:

It is hardly surprising that even some of the field's pioneers are now pessimistic. “In the case of IGF-1R, one can protest that proper studies have not yet been carried out,” writes Renato Baserga, from the department of Cancer Biology, Thomas Jefferson University in Philadelphia. (J. Cell. Physiol., doi:10.1002/jcp.24217). A pioneer in IGF-1 research, Baserga goes on to list some avenues that may still be promising, such as targeting the receptor to prevent metastases in colorectal cancer patients. But in the end, he surmises: “These excuses are poor excuses, [they are] an attempt to reinvigorate a procedure that has failed.” Saltz agrees. “This may be the end of the story,” he says. “At one point, there were more than ten companies developing these drugs; now this may be the last one that gets put on the shelf.”

But, except for articles like these in journals like Nature Biotechnology, or mentions on web sites like this one, no one really hears about this sort of thing. We've talked about this phenomenon before; there's a substantial list of drug targets that looked very promising, got a lot of attention for years, but never delivered any sort of drug at all. Negative results don't make for much of a headline in the popular press, especially when the story develops over a multi-year period.

I think it would be worthwhile for people to hear about this, though. I once talked with someone who was quite anxious about an upcoming plane trip; they were worried on safety grounds. It occurred to me that if there were a small speaker on this person's desk announcing all the flights that had landed safely around the country (or around the world), a few days of that might actually have an effect. Hundreds, thousands of announcements, over and over: "Flight XXX has landed safely in Omaha. Flight YYY has landed safely in Seoul. Flight ZZZ has landed safely in Amsterdam. . ." Such a speaker system wouldn't shut up for long during any given day, that's for sure, and it would emphasize the sheer volume of successful air travel that takes place each day, over and over.

On the other hand, almost all drug research programs crash somewhere along the way, or never even make it off the ground in the first place. In this field, actually getting a plane together, getting it into the air, and guiding it to a landing at the FDA only happens once in a rather long while, which is why there are plenty of people out there in early research who've never worked on anything that's made it to market. A list of all the programs that failed would be instructive, and might get across how difficult finding a drug really is, but no one's going to be able to put one of those together. Companies don't even announce the vast majority of their preclinical failures; they're below everyone else's limit of detection. I can tell you for sure that most of the non-delivering programs I've worked on have never seen daylight of any sort. They just quietly disappeared.

Comments (10) + TrackBacks (0) | Category: Cancer | Drug Development | Drug Industry History

September 24, 2012

The One-Stop CRO

Email This Entry

Posted by Derek

C&E News has a good article (http://cen.acs.org/articles/90/i39/One-Stop-Shops-Emerge-Drug.html) out on the so-called "one-stop shop" contract research organizations in pharma - these are the Covances and WuXis of the world, who can take on all sorts of preclinical (and clinical) jobs for you under one umbrella.

The old debate over one-stop shopping has, however, become more nuanced in the current pharmaceutical industry environment. Service firms and their customers agree that much of the decision making comes down to where to outsource workhorse chemistry and where to outsource frontline science. Sources agree that a market still exists for boutique CROs that focus on one node along the discovery/development continuum. And some drug firms say they are working with more than one full-service vendor, negating the supposed advantage of one-stop shopping.


There's more of this sort of thing around than ever, of course, but the merits of the whole idea are still being debated. There's no question that these companies can extend the reach of an organization that doesn't have all these specialties itself, but that doesn't mean that things can't get messed up, either.

Not every drug firm is scaling down internal research. Sonia Pawlak, manager of strategic outsourcing in chemical development at Gilead Sciences, says drug companies with fully developed R&D operations will likely not see much advantage in working with a one-stop-shop contractor. . .Geographical proximity to a supplier is important to Gilead, Pawlak adds, questioning whether linking research and manufacturing assets across different continents saves the customer time.

I'm used to looking at these companies from the buying end. When you consider the whole CRO world from the other direction, though, you see a vision of de-risked pharma. These people are going to get paid, whether the preclinical program works out or not, whether the clinical trials work or not, whether the eventual drug is approved or not. It's a contract business.

But they're also never going to get paid more than what is in that contract - they will share in no windfalls, get pieces of no blockbusters. So eventually, you end up with two halves of the whole drug R&D business: the drug companies that do little or no outsourcing (along with the small R&D discovery companies that outsource everything they can) are the part that takes the big risks and goes for the big victories, while the CROs are the part that takes on (comparatively) no risk in exchange for a smaller guaranteed payout.

Comments (11) + TrackBacks (0) | Category: Drug Development

September 21, 2012

TransCelerate: What Is It, Exactly?

Email This Entry

Posted by Derek

A group of big pharma companies has announced that they're setting up a joint venture, TransCelerate, to try to address common precompetitive drug development problems. But that covers a broad area, and this collaboration is more narrowly focused:

Members of TransCelerate have identified clinical study execution as the initiative's initial area of focus. Five projects have been selected by the group for funding and development, including: development of a shared user interface for investigator site portals; mutual recognition of study site qualification and training; development of risk-based site monitoring approach and standards; development of clinical data standards; and establishment of a comparator drug supply model.

Now, that paragraph is hard to get through, I have to say. I understand what they're getting at, and these are all worthy objectives, but I think it could be boiled down to saying "We're going to try not to duplicate each other's work so much when we're setting up clinical trials and finding places to run them. They cost so much already that it's silly for us all to spend money doing the same things that have to be done every time." And other than this, details are few. The initiative will be headquartered in Philadelphia, but that seems to be about it so far.

But it won't get at the fundamental problems in drug research. Our clinical failure rate of around 90% has very little to do with the factors that TransCelerate is addressing - what they're trying to do is make that failure rate less of a financial burden. That's certainly worth taking on, in lieu of figuring out why our drugs crash and burn so often. That one is a much tougher problem, as is easily proven by the fact that there are billions of dollars waiting to be picked up for even partial solutions to it.

Comments (18) + TrackBacks (0) | Category: Clinical Trials | Drug Development

September 18, 2012

Going After the Big Cyclic Ones

Email This Entry

Posted by Derek

I wrote last year about macrocyclic compounds and their potential as drugs. Now BioCentury has a review of the companies working in this area, and there are more of them than I thought. Ensemble and Aileron are two that come to mind (if you count "stapled peptides" as macrocycles, and I think you should). But there are also Bicycle, Encycle, Lanthio, Oncodesign, Pepscan, PeptiDream, Polyphor, Protagonist, and Tranzyme. These companies have a lot of different approaches. Many of them (but not all) are using cyclic peptides, but there are different ways of linking these, different sorts of amino acids you can use in them, and so on. And the non-peptidic approaches have an even wider variety. So I've no doubt that there's room in this area for all these companies - but I also have no doubt that not all these approaches are going to work equally well. And we're just barely getting to the outer fringes of sorting that out:

While much of the excitement over macrocycles is due to their potential to disrupt intracellular protein-protein interactions, every currently disclosed lead program in the space targets an extracellular protein. This reality reflects the challenge of developing a potent and cell-penetrant macrocyclic compound.

Tranzyme and Polyphor are the only companies with macrocyclic compounds in the clinic. Polyphor’s lead compound is POL6326, a conformationally constrained peptide that antagonizes CXC chemokine receptor 4 (CXCR4; NPY3R). It is in Phase II testing to treat multiple myeloma (MM) using autologous transplantation of hematopoietic stem cells.

Tranzyme’s lead compound is TZP-102, an orally administered ghrelin receptor agonist in Phase IIb testing to treat diabetic gastroparesis.

Two weeks ago, Aileron announced it hopes to start clinical development of its lead internally developed program in 2013. The compound, ALRN-5281, targets the growth hormone-releasing hormone (GHRH) receptor.

Early days, then. It's understandable that the first attempts in this area will come via extracellular-acting, iv-administered agents - those are the lowest bars to clear for a new technology. But if this area is going to live up to its potential, it'll have to go much further along than that. We're going to have to learn a lot more about cellular permeability, which is a very large side effect (a "positive externality", as the economists say) of pushing the frontiers back like this: you figure these things out because you have to.

Comments (9) + TrackBacks (0) | Category: Drug Development | Pharmacokinetics

September 10, 2012

Geron, And The Risk of Cancer Therapies

Email This Entry

Posted by Derek

Geron's telomerase inhibitor compound, imetelstat, showed a lot of interesting results in vitro, and has been in Phase II trials all this year. Until now. The company announced this morning that the interim results of their breast-cancer trial are so unpromising that it's been halted, and that lung cancer data aren't looking good, either. The company's stock has been cratering in premarket trading, and this stock analyst will now have some thinking to do, as will the people who followed his advice last week.

I'm sorry to see the first telomerase inhibitor perform so poorly; we need all the mechanisms we can get in oncology. And this is terrible news for Geron, since they'd put all their money down on this therapeutic area. But this is drug discovery; this is research: a lot of good, sensible, promising ideas just don't work.

That phrase comes to mind after reading this article from the Telegraph about some Swedish research into cancer therapy. It's written in a breathless style - here, see for yourself:

Yet as things stand, Ad5[CgA-E1A-miR122]PTD – to give it the full gush of its most up-to-date scientific name – is never going to be tested to see if it might also save humans. Since 2010 it has been kept in a bedsit-sized mini freezer in a busy lobby outside Prof Essand's office, gathering frost. ('Would you like to see?' He raises his laptop computer and turns, so its camera picks out a table-top Electrolux next to the lab's main corridor.)
Two hundred metres away is the Uppsala University Hospital, a European Centre of Excellence in Neuroendocrine Tumours. Patients fly in from all over the world to be seen here, especially from America, where treatment for certain types of cancer lags five years behind Europe. Yet even when these sufferers have nothing else to hope for, have only months left to live, wave platinum credit cards and are prepared to sign papers agreeing to try anything, to hell with the side-effects, the oncologists are not permitted – would find themselves behind bars if they tried – to race down the corridors and snatch the solution out of Prof Essand's freezer.

(By the way, does anyone have anything to substantiate that "five years behind Europe" claim? I don't.) To be sure, Prof. Essand tries to make plain to the reporter (Alexander Masters) that this viral therapy has only been tried in animals, that a lot of things work in animals that don't work in man, and so on. But given Masters' attitude towards medical research, there's only so much that you can do:

. . .Quacks provide a very useful service to medical tyros such as myself, because they read all the best journals the day they appear and by the end of the week have turned the results into potions and tinctures. It's like Tommy Lee Jones in Men in Black reading the National Enquirer to find out what aliens are up to, because that's the only paper trashy enough to print the truth. Keep an eye on what the quacks are saying, and you have an idea of what might be promising at the Wild West frontier of medicine. . .

I have to say, in my experience, that this is completely wrong. Keep an eye on what the quacks are saying, and you have an idea of what might have been popular in 1932. Or 1954. Quacks seize onto an idea and never, ever, let it go, despite any and all evidence, so quackery is an interminable museum of ancient junk. New junk is added all the time, though, one has to admit. You might get some cutting-edge science, if your idea of cutting-edge is an advertisement in one of those SkyMall catalogs you get on airplanes. A string of trendy buzzwords super-glued together does not tell you where science is heading.

But Masters means well with this piece. He wants to see Essand's therapy tried out in the clinic, and he wants to help raise money to do that (see the end of the article, which shows how to donate to a fund at Uppsala). I'm fine with that - as far as I can tell, longer shots than this one get into the clinic, so why not? But I'd warn people that their money, as with the rest of the money we put into this business, is very much at risk. If crowdsourcing can get some ideas a toehold in the clinical world, I'm all for it, but it would be a good thing in general if people realized the odds. It would also be a good idea if more people realized how much money would be needed later on, if things start to look promising. No one's going to crowdsource a Phase III trial, I think. . . .

Comments (12) + TrackBacks (0) | Category: Cancer | Clinical Trials | Drug Development

September 6, 2012

Accelerated Approval And Its Discontents

Email This Entry

Posted by Derek

This may sound a little odd coming from someone in the drug industry, but I have a lot of sympathy for the FDA. I'm not saying that I always agree with them, or that I think that they're doing exactly what we need them to do all the time. But I would hate to be the person that would have to decide how they should do things differently. And I think that no matter what, the agency is going to have a lot of people with reasons to complain.

These thoughts are prompted by this article in JAMA on whether or not drug safety is being compromised by the growing number of "Priority Review" drug approvals. There are three examples set out in detail: Caprelsa (vandetanib) for thyroid cancer, Gilenya (fingolimod) for multiple sclerosis, and the anticoagulant Pradaxa (dabigatran). In each of these accelerated cases, safety has turned out to be more of a concern than some people expected, and the authors of this paper are asking if the benefits have been worth the risks.

Pharmalot has a good summary of the paper, along with a reply from the FDA. Their position is that various forms of accelerated approval have been around for quite a few years now, and that the agency is committed to post-approval monitoring in these cases. What they don't say - but it is, I think, true - is that there is no way to have accelerated approvals without occasional compromises in drug safety. Can't be done. You have to try to balance these things on a drug-by-drug basis: how much the new medication might benefit people without other good options, versus how many people it might hurt instead. And those are very hard calls, which are made with less data than you would have under non-accelerated conditions. If these three examples are indeed problematic drugs that made it through the system, no one should be surprised at all. Given the number of accelerated reviews over the years, there have to be some like this. In fact, this goes to show you that the accelerated review process is not, in fact, a sham. If everything that passed through it turned out to be just as clean as things that went through the normal approval process, that would be convincing evidence that the whole thing was just window dressing.

If that's true - and as I said, I certainly believe it is - then the question is "Should there be such a thing as accelerated approval at all?" If you decide that the answer to that is "Yes", then the follow-up is "Is the risk-reward slider set to the right place, or are we letting a few too many things through?" This is the point the authors are making, I'd say: that we are indeed letting a few too many things through, and we need to move the settings back a bit. But here comes an even trickier question: if you do that, how far back do you go before the whole accelerated approval process is not worth the effort any more? (If you try to make it so that nothing problematic makes it through at all, you've certainly crossed into that territory, to my way of thinking). So if three recent examples like these represent an unacceptable number (and perhaps they do), what is acceptable? Two? One? Those numbers, but over a longer period of time?

And if so, how are you going to do that without tugging on the other end of the process, helping patients who are waiting for new medications? No, these are very, very hard questions, and no matter how you answer them, someone will be angry with you. I have, as I say, a lot of sympathy for the FDA.

Comments (7) + TrackBacks (0) | Category: Drug Development | Regulatory Affairs | Toxicology

September 4, 2012

A New Malaria Compound

Email This Entry

Posted by Derek

There have been many headlines in recent days about a potential malaria cure. I'm not sure what set these off at this time, since the paper describing the work came out back in the spring, but it's certainly worth a look.

This all came out of the Medicines for Malaria Venture, a nonprofit group that has been working with various industrial and academic groups in many areas of malaria research. This is funded through a wide range of donors (corporations, foundations, international agencies), and work has taken place all over the world. In this case (PDF), things began with a collection of about 36,000 compounds (biased towards kinase inhibitor scaffolds) from BioFocus in the UK. These were screened (high-throughput phenotypic readout) at the Eskitis Institute in Australia, and a series of compounds was identified for structure-activity studies. This phase of the work was a three-way collaboration between a chemistry team at the University of Cape Town (led by Prof. Kelly Chibale), biology assay teams at the Swiss Tropical and Public Health Institute, and pharmacokinetics at the Center for Drug Candidate Optimization at Monash University in Australia.
MMV%20compound%2015.png
An extensive SAR workup on the lead series identified some metabolically labile parts of the molecule over on that left-hand side pyridine. These could fortunately be changed without impairing the efficacy against the malaria parasites. The sulfonyl group seems to be required, as does the aminopyridine. These efforts led to the compound shown, MMV390048, which has good blood levels, passes in vitro safety tests, and is curative in a Plasmodium berghei mouse model at a single dose of 30 mg/kg. That's a very promising compound, from the looks of it, since that's better than the existing antimalarials can do. It's also active against drug-resistant strains, as well it might be (see below). Last month the MMV selected it for clinical development.

So how does this compound work? The medicinal chemists in the audience will have looked at that structure and said "kinase inhibitor", and that has to be where to put your money. That, in fact, appears to have been the entire motivation to screen the BioFocus collection. Kinase targets in Plasmodium have been getting attention for several years now; the parasite has a number of enzymes in this class, and they're different enough from human kinases to make attractive targets. (To that point, I have not been able to find results of this latest compound's profile when run against a panel of human kinases, although you'd think that this has surely been done by now). Importantly, none of the existing antimalarials work through such mechanisms, so the parasites have not had a chance to work up any resistance.

But resistance will come. It always does. The best hope for the kinase-based inhibitors is that they'll hit several malaria enzymes at once, which gives the organisms a bigger evolutionary barrier to jump over. The question is whether you can do that without hitting anything bad in the human kinome, but for the relatively short duration of acute malaria treatment, you should be able to get away with quite a bit. Throwing this compound and the existing antimalarials at the parasites simultaneously will really give them something to occupy themselves with.

I'll follow the development of this compound with interest. It's just about to hit the really hard part of drug research - human beings in the clinic. This is where we have our wonderful 90% or so failure rates, although those figures are generally better for anti-infectives, as far as I can tell. Best of luck to everyone involved. I hope it works.

Comments (27) + TrackBacks (0) | Category: Drug Development | Infectious Diseases

August 31, 2012

Eli Lilly's Drumbeat of Bad News

Email This Entry

Posted by Derek

Eli Lilly has been getting shelled with bad news recently. There was the not-that-encouraging-at-all failure of its Alzheimer's antibody solanezumab to meet any of its clinical endpoints. But that's the good news, since (at least according to the company) it showed some signs of something in some patients.

We can't say that about pomaglumetad methionil (LY2140023), their metabotropic glutamate receptor ligand for schizophrenia, which is being halted. The first large trial of the compound failed to meet its endpoint, and an interim analysis showed that the drug was unlikely to have a chance of making its endpoints in the second trial. It will now disappear, as will the money spent on it so far. (The first drug project I ever worked on was a backup for an antipsychotic with a novel mechanism, which also failed to do a damned thing in the clinic, and which experience perhaps gave me some of the ideas I have now about drug research).

This compound is an oral prodrug of LY404039, which has a rather unusual structure. The New York Times did a story about the drug's development a few years ago, which honestly makes rather sad reading in light of the current news. It was once thought to have great promise. Note the cynical statement in that last link about how it really doesn't matter if the compound works or not - but you know what? It did matter in the end. This was the first compound of its type, an attempt at a real innovation through a new mechanism to treat mental illness, just the sort of thing that some people will tell you that the drug industry never gets around to doing.

And just to round things off, Lilly announced the results of a head-to-head trial of its anticoagulant drug Effient versus (now generic) Plavix in acute coronary syndrome. This is the sort of trial that critics of the drug industry keep saying never gets run, by the way. But this one was, because Plavix is the thing to beat in that field - and Effient didn't beat it, although there might have been an edge in long-term followup.

Anticoagulants are a tough field - there are a lot of patients, a lot of money to be made, and a lot of room (in theory) for improvement over the existing agents. But just beating heparin is hard enough, without the additional challenge of beating cheap Plavix. It's a large enough patient population, though, that more than one drug is needed because of different responses.

There have been a lot of critics of Lilly's research strategy over the years, and a lot of shareholders have been (and are) yelling for the CEO's head. But from where I sit, it looks like the company has been taking a lot of good shots. They've had a big push in Alzheimer's, for example. Their gamma-secretase inhibitor, which failed in terrible fashion, was a first of its kind. Someone had to be the first to try this mechanism out; it's been a goal of Alzheimer's research for over twenty years now. Solanezumab was a tougher call, given the difficulties that Elan (and Wyeth/Pfizer, J&J, and so on) have had with that approach over the years. But immunology is a black box, different antibodies do different things in different people, and Lilly's not the only company trying the same thing. And they've been doggedly pursuing beta-secretase as well. These, like them or not, are still some of the best ideas that anyone has for Alzheimer's therapy. And any kind of win in that area would be a huge event - I think that Lilly deserves credit for having the nerve to go after such a tough area, because I can tell you that I've been avoiding it ever since I worked on it in the 1990s.

But what would I have spent the money on instead? It's not like there are any low-risk ideas crowding each other for attention. Lilly's portfolio is not a crazy or stupid one - it's not all wild ideas, but it's not all full of attempts to play it safe, either. It looks like the sort of thing any big (and highly competent) drug research organization could have ended up with. The odds are still very much against any drug making it through the clinic, which means that having three (or four, or five) in a row go bad on you is not an unusual event at all. Just a horribly unprofitable one.

Comments (26) + TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials | Drug Development | Drug Industry History | The Central Nervous System

August 29, 2012

How Did the Big Deals of 2007 Work Out?

Email This Entry

Posted by Derek

Startup biopharma companies: they've gotta raise money, right? And the more money, the better, right? Not so right, according to this post by venture capitalist Bruce Booth. Companies need money, for sure, but above a certain threshold there's no correlation with success, either for the company's research portfolio or its early stage investors. (I might add that the same holds true for larger drug companies as well, for somewhat different reasons. Perhaps Pfizer's strategy over the last twenty years has had one (and maybe only one) net positive effect: it's proven that you cannot humungous your way to success in this business. And yes, since you ask, that's the last time I plan to use "humungous" as a verb for a while).

There's also a fascinating look back at FierceBiotech's 2007 "Top Deals", to see what became of the ten largest financing rounds on the list. Some of them have worked out, and some of them most definitely haven't: 4 of the ten were near-total losses. One's around break-even, two are "works in progress" but could come through, and three have provided at least 2x returns. (Read his post to attach names to these!) And as Booth shows, that's pretty much what you'd expect from the distribution over the entire biotech industry, including all the wild-eyed stuff and the riskiest small fry. Going with the biggest, most lucratively financed companies bought you, in this case, no extra security at all.

A note about those returns: one of the winners on the list is described as having paid out "modest 2x returns" to the investors. That's the sort of quote that inspires outrage among the clueless, because (of course) a 100% profit is rather above the market returns for the last five years. But the risk/reward ratio has not been repealed. You could have gotten those market returns by doing nothing, just by parking the cash in a couple of index funds and sitting back. Investing in startup companies requires a lot more work, because you're taking on a lot more risk.

It was not clear which of those ten big deals in 2007 would pay out, to put it mildly. In fact, if you take Booth's figures so far, an equal investment in each of the top seven companies on the list in 2007 would leave you looking at a slight net loss to date, and that includes one company that would have paid you back at about 3x to 4x. Number eight was the big winner on the list (5x, if you got out at the perfect peak, and good luck with that), and number 9 is the 2x return (while #10 is ongoing, but a likely loss). As any venture investor knows, you're looking at a significant risk of losing your entire investment whenever you back a startup, so you'd better (a) back more than one and (b) do an awful lot of thinking about which ones those are. This is a job for the deeply pocketed.
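
To make that arithmetic concrete, here's a minimal sketch (in Python) of the top-seven calculation. The return multiples are hypothetical, chosen only to be consistent with the rough description above (one 3x-4x winner, several near-total losses, the rest middling) - they are not the actual figures from Booth's post.

# Hypothetical return multiples for an equal-weight bet on the seven largest
# 2007 financings - illustrative numbers only, not real data
multiples = [3.5, 0.0, 0.1, 0.0, 1.0, 0.9, 0.1]

stake = 1.0                                   # equal dollar amount into each company
invested = stake * len(multiples)
returned = sum(stake * m for m in multiples)

print(f"Invested {invested:.1f}, got back {returned:.1f}")
print(f"Overall multiple: {returned / invested:.2f}x")   # about 0.8x - a slight net loss, despite the 3.5x winner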

And when you think about it, a very similar situation obtains inside a given drug company. The big difference is that you don't have the option of not playing the game - something always has to be done. There are always projects going, some of which look more promising than others, some of which will cost more to prosecute than others, and some of which are aimed at different markets than others. You might be in a situation where there are several that look like they could be taken on, but your development organization can't handle so many. What to do? Partner something, park something that can wait (if anything can)? Or you might have the reverse problem, of not enough programs that look like they might work. Do you push the best of a bad lot forward and hope for the best? If not, do you still pay your development people even if they have nothing to develop right now, in the hopes that they soon will?

Which of these clinical programs of yours have the most risk? The biggest potential? Have you balanced those properly? You're sure to lose your entire investment on the majority - the great majority - of them, so choose as wisely as you can. The ones that make it through are going to have to pay for all the others, because if they don't, everyone's out of a job.

This whole process, of accumulating capital and risking it on new ventures, is important enough that we've named an entire economic system for it. It's a high-wire act. Too cautious, and you might not keep up enough to survive. Too risky, and you could lose too much. They do focus one's attention, such prospects, and the thought that other companies are out there trying to get a step on you helps keep you moving, too. It's not a pretty system, but it isn't supposed to be. It's supposed to work.

Comments (1) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

August 23, 2012

Pharma: Geniuses or Con Men?

Email This Entry

Posted by Derek

So here's a comment to this morning's post on stock buybacks, referring both to it and my replies to Donald Light et al. last week. I've added links:

Did you not spend two entire posts last week telling readers how only pharma "knows" how to do drug research and that we should "trust" them and their business model. Now you seem to say that they are either incompetent or conmen looking for a quick buck. So what is it? Does pharma (as it exists today) have a good business model or are they conmen/charlatans out for money? Do they "know" what they are doing? Or are they faking competence?

False dichotomy. My posts on the Donald Light business were mostly to demonstrate that his ideas of how the drug industry works are wrong. I was not trying to prove that the industry itself is doing everything right.

That's because it most certainly isn't. But it is the only biopharma industry we have, and before someone comes along with a scheme to completely rework it, one should ask whether that's a good idea. In this very context, the following quote from Chesterton has been brought up, and it's very much worth keeping in mind:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."

This paradox rests on the most elementary common sense. The gate or fence did not grow there. It was not set up by somnambulists who built it in their sleep. It is highly improbable that it was put there by escaped lunatics who were for some reason loose in the street. Some person had some reason for thinking it would be a good thing for somebody. And until we know what the reason was, we really cannot judge whether the reason was reasonable. It is extremely probable that we have overlooked some whole aspect of the question, if something set up by human beings like ourselves seems to be entirely meaningless and mysterious. There are reformers who get over this difficulty by assuming that all their fathers were fools; but if that be so, we can only say that folly appears to be a hereditary disease. But the truth is that nobody has any business to destroy a social institution until he has really seen it as an historical institution. If he knows how it arose, and what purposes it was supposed to serve, he may really be able to say that they were bad purposes, that they have since become bad purposes, or that they are purposes which are no longer served. But if he simply stares at the thing as a senseless monstrosity that has somehow sprung up in his path, it is he and not the traditionalist who is suffering from an illusion.

The drug industry did not arise out of random processes; it looks the way it does now because of a long, long series of decisions. Because we live in a capitalist system, many of these decisions were made to answer the question "Which way would make more money?" That is not guaranteed to give you the best outcome. But neither is it, as some people seem to think, a guarantee of the worst one. Insofar as the need for new and effective drugs is coupled to the ability to make money by doing so, I think the engine works about as well as anything could. Where these interests decouple (tropical diseases, for one), we need some other means.

My problem with stock buybacks is that I think that executives are looking at that same question ("Which way would make more money?") and answering it incorrectly. But under current market conditions, there are many values of "wrong". In the long run, I think (as does Bruce Booth) that it would be more profitable, both for individual companies and for the industry as a whole, to invest more in research. In fact, I think that's the only thing that's going to get us out of the problems that we're in. We need to have more reliable, less expensive ways to discover and develop drugs, and if we're not going to find those by doing research on how to make them happen, then we must be waiting for aliens to land and tell us.

But that long run is uncertain, and may well be too long for many investors. Telling the shareholders that Eventually Things Will Be Better, We Think, Although We're Not Sure How Just Yet will not reassure them, especially in this market. Buying back shares, on the other hand, will.

Comments (22) + TrackBacks (0) | Category: Business and Markets | Drug Development

August 21, 2012

Genentech's Big Worry: Roche?

Email This Entry

Posted by Derek

There's no telling if this is true - it's part of a lawsuit. But a former Genentech employee is claiming that the company rushed trials of its PI3K inhibitor. And why? Worries about their partner:

The suit alleges that the Pi3 Kinase team was guilty of "illegal and unethical conduct" by skirting established scientific and ethical standards required of drug researchers. Juliet Kniley claims she complained in 2008 and then was sidelined in 2009 with a demotion after being instructed to push ahead on the study. And she says she was told twice that Roche would "take this molecule away from us" if they saw her proposed timelines.

Genentech denies the allegations. But you have to wonder if there's still a window here into the relationship between the two companies. . .

Comments (24) + TrackBacks (0) | Category: Drug Development

August 17, 2012

Good Forum for a Response on Drug Innovation?

Email This Entry

Posted by Derek

I wanted to mention that a version of my first post on the Light/Lexchin article is now up over at the Discover magazine site. And if you've been following the comments to that one and to Light's response here, you'll note that readers here have found a number of problems with the original paper's analysis. I've found a few of my own, and I expect there are more.

The British Medical Journal has advised me that they consider a letter to the editor to be the appropriate forum for a response to one of their published articles. I don't think publishing this one did them much credit, but what's done is done. I'm still shopping for a venue for a detailed response on my part - I've had a couple of much-appreciated offers, but I'd like to continue to see what options are out there to get this out to the widest possible audience.

Comments (23) + TrackBacks (0) | Category: Drug Development | Drug Prices

August 15, 2012

A Quick Tour Through Drug Development Reality

Email This Entry

Posted by Derek

I wanted to let people know that I'm working on a long, detailed reply to Donald Light's take on drug research, but that I'm also looking at a few other publication venues for it. More on this as it develops.

But in trying to understand his worldview (and Marcia Angell's, et al.), I think I've hit on at least one fundamental misconception that these people have. All of them seem to think that the key step in drug discovery is target ID - once you've got a molecular target, you're pretty much home free, and all that was done by NIH money, etc., etc. It seems that these people have a very odd idea about high-throughput screening: they seem to think that we screen our vast collections of molecules and out pops a drug.

Of course, out is what a drug does not pop, if you follow my meaning. What pops out are hits, some of which are not what they say on the label any more. And some of the remaining ones just don't reproduce when you run the same experiment again. And even some of the ones that do reproduce are showing up as hits not because they're affecting your target, but because they're hosing up your assay by some other means. Once you've cleared all that underbrush out, you can start to talk about leads.

Those lead molecules are not created equal, either. Some of them are more potent than others, but the more potent ones might be of much higher molecular weight (and thus not as ligand efficient). Or they might be compounds from another project and already known to hit a target that you don't want to hit. Once you pick out the ones that you actually want to do some chemistry on, you may find, as you start to test new molecules in the series, that some of them have more tractable structure-activity relationships than others. There are singletons out there, or near-singletons: compounds that have some activity as they stand, but for which every change in structure represents a step down. The only way to find that out is to test analogs. You might have some more in your files, or you might be able to buy some from the catalogs. But in many cases, you'll have to make them yourself, and a significant number of those compounds you make will be dead ends. You need to know which ones, though, so that's valuable information.
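
For readers who haven't run into the term, "ligand efficiency" is usually taken as the binding free energy per heavy (non-hydrogen) atom. Here's a quick back-of-the-envelope sketch of that calculation - the potencies and atom counts are made up purely for illustration, and the IC50 is being treated as a stand-in for a true binding constant:

import math

def ligand_efficiency(ic50_nM, heavy_atoms, temp_K=298.0):
    # Approximate ligand efficiency in kcal/mol per heavy atom: -RT*ln(Kd) / N_heavy
    R = 1.987e-3                                     # gas constant, kcal/(mol*K)
    delta_g = R * temp_K * math.log(ic50_nM * 1e-9)  # negative for sub-molar affinity
    return -delta_g / heavy_atoms

# Two hypothetical screening hits: a potent but large one, and a weaker but smaller one
print(round(ligand_efficiency(50.0, 40), 2))    # ~0.25 kcal/mol per heavy atom
print(round(ligand_efficiency(2000.0, 20), 2))  # ~0.39 - the weaker hit is the more efficient starting point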

Now you're all the way up to lead series territory, a set of compounds that look like they can be progressed to be more potent and more selective. As medicinal chemists know, though, there's more to life. You need to see how these compounds act on real cells, and in real animals. Do they attain reasonable blood levels? Why or why not? What kinds of metabolites do they produce - are those going to cause trouble? What sort of toxicity do you see at higher doses, or longer-running ones? Is that related to your mechanism of action (sorry to hear it!), or something off-target to do with that particular structure? Can you work your way out of that problem with more new compound variations without losing all of what you've been building in so far? Prepare to go merrily chasing down some blind alleys while you work all this stuff out; the lights are turned off inside the whole maze, and the only illumination is what you can bring yourself.

Now let's assume that you've made it far enough to narrow down to one single compound, the clinical candidate. The fun begins! How about formulations - can this compound be whipped up into a solid form that resembles a real drug that people can put in their mouths, leave on their medicine cabinet shelves, and stock in their warehouses and pharmacies? Can you make enough of the compound to get to that stage, reliably? Most of the time the chemistry has to change at that point, and you'd better hope that some tiny new impurities from the new route aren't going to pop up and be important. You'd really better hope that some new solid form (polymorph) of your substance doesn't get discovered during that new route, because some of those are bricks and their advent is nearly impossible to predict.

Hey, now it's time to go to the clinic. Break out the checkbook, because the money spent here is going to make the preclinical expenses look like roundoff errors. Real human beings are going to take your compound, and guess what? Of all the compounds (the few, the proud) that actually get this far, all the way up to some volunteer's tongue. . .well, a bit over ninety per cent of those are going to fail in trials. Good luck!
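
That ninety-odd per cent figure is just phase-by-phase attrition multiplied out. Here's a minimal sketch, using round-number transition probabilities of the sort often quoted for the industry as a whole - these are illustrative assumptions, not figures from any particular dataset:

# Illustrative clinical-phase success rates (assumptions, not measured data)
phase_success = [
    ("Phase I   -> Phase II", 0.55),
    ("Phase II  -> Phase III", 0.35),
    ("Phase III -> filing", 0.55),
    ("Filing    -> approval", 0.90),
]

p = 1.0
for step, prob in phase_success:
    p *= prob
    print(f"{step}: cumulative survival = {p:.3f}")

print(f"Roughly {p:.1%} of compounds entering Phase I reach approval")  # about 9.5% - the flip side of that >90% failure rate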

While you're nervously checking the clinical results (blood levels and tolerability in Phase I), you have more questions to ask. Do you have good commercial suppliers for all the starting materials, and the right manufacturing processes in place to make the drug, formulate it, and package it? High time you thought about that stuff; your compound is about to go into the first sick humans it's ever seen, in Phase II. You finally get to find out if that target, that mechanism, actually works in people. And if it does (congratulations!), then comes the prize. You get to spend the real money in Phase III: lots and lots of patients, all sorts of patients, in what's supposed to be a real-world shakedown. Prepare to shell out more than you've spent in the whole process to date, because Phase III trials will empty your pockets for sure.

Is your compound one of the five or ten out of a hundred that makes it through Phase III? Enjoy the sensation, because most medicinal chemists experience that only once in their careers, if that. Now you're only a year or two away from getting your drug through the FDA and seeing if it will succeed or fail on the market. And good luck there, too. Contrary to what you might read, not all drugs earn back their costs, so the ones that do had better earn out big-time.

There. That wasn't so easy, was it? And I know that I've left things out, too. The point of all this is that most people have no idea of all these steps - what they're like, how long they can take, that they even exist. It wouldn't surprise me if many people imagine drug discovery, when they imagine it at all, to be the reach-in-the-pile-and-find-a-drug process that I mentioned in the second paragraph. Everything else is figuring out what color to make the package and how much to overcharge for it.

That's why I started this blog back in 2002 - because I was spending all my time on a fascinating, tricky, important job that no one seemed to know anything about. All these details consume the lives and careers of vast numbers of researchers - it's what I've been doing since 1989 - and I wanted, still want, to let people know that we exist.

In the meantime, for the Donald Lights of the world, the Marcia Angells, and the people who repeat their numbers despite apparently knowing nothing about how drugs actually get developed - well, here are some more details for you. The readers of this site with experience in the field will be able to tell you if I haven't described it pretty much as it is. It's not like I and others haven't tried to tell you before.

Comments (60) + TrackBacks (0) | Category: Drug Development | Drug Prices

August 13, 2012

Donald Light Responds on Drug Innovation and Costs

Email This Entry

Posted by Derek

Here's a response from Prof. Light to my post the other day attacking his positions on drug research. I've taken it out of that comments thread to highlight it - he no longer has to wonder if I'll let people here read what he has to say.

I'll have a response as well, but that'll most likely be up tomorrow - I actually have a very busy day ahead of me in the lab, working on a target that (as far as any of us in my group can tell) no one has ever attacked, for a disease for which (as far as any of us in my group can tell) no one has ever found a therapy. And no, I am not making that up.

It's hard to respond to so many sarcastic and baiting trashings by Dr. Lowe and some of his fan club, but let me try. I wonder if Dr. Lowe allows his followers to read what I write here without cutting and editing.

First, let me clarify some of the mis-representations about the new BMJ article that claims the innovation crisis is a myth. While the pharmaceutical industry and its global network of journalists have been writing that the industry has been in real trouble because innovation has been dropping, all those articles and figures are based on the decline of new molecules approved since a sharp spike. FDA figures make it clear that the so-called crisis has been simply a return to the long-term average. In fact, in recent years, companies have been getting above-average approvals for new molecules. Is there any reasonable argument with these FDA figures? I see none from Dr. Lowe or in the 15 pages of comments.

Second, the reported costs of R&D have been rising sharply, and we do not go into these; but here are a couple of points. We note that the big picture, total additional investments in R&D (which are self-reported from closely held figures) over the past 15 years were matched by a six times greater increase in revenues. We can all guess various reasons why, but surely a 6-fold return is not a crisis or "unsustainable." In fact, it's evidence that companies know what they are doing.

Another point from international observers is that the costs of clinical trials in the U.S. are much higher than in equally affluent countries and much higher than they need to be, because everyone seems to make money the higher they are in the U.S. market. I have not looked into this but I think it would be interesting to see in what ways costly clinical trials are a boon for several of the stakeholders.

Third, regarding that infamously low cost of R&D that Dr. Lowe and readers like to slam, consider this: The low estimate is based on the same costs of R&D reported by companies (which are self-reported from closely held figures) to their leading policy research center as were used to estimate that the average cost is $1.3 bn (and soon to be raised again). Doesn't that make you curious enough to want to find out how we show what inflators were used to ramp the reported costs up, which we use to do the same in reverse? Would it be unfair to ask you to actually read how we took this inflationary estimate apart? Or is it easier just to say our estimate is "idiotic" and "absurd"? How about reading the whole argument at www.pharmamyths.net and then discussing its merits?

Our estimate is for net, median corporate cost of D(evelopment) for that same set of drugs from the 1990s that the health economists supported by the industry used to ramp up the high estimate. Net, because taxpayer subsidies which the industry has fought hard to expand pay for about 44% of gross R&D costs. Median, because a few costly cases which are always featured raise the average artificially. Corporate, because a lot of R(esearch) and some D is paid for by others - governments, foundations, institutes. We don't include an estimate for R(esearch) because no one knows what it is and it varies so much from a chance discovery that costs almost nothing to years and decades of research, failures, dead ends, new angles, before finally an effective drug is discovered.

So it's an unknown and highly variable R plus a more knowable estimate of net, median, corporate costs. Even then, companies never show their books, and they never compare their costs of R&D to revenues and profits. They just keep telling us their unverifiable costs of R&D are astronomical.

We make clear that neither we nor anyone else knows either the average gross cost or the net, median costs of R&D because major companies have made sure we cannot. Further, the "average cost of R&D" estimate began in 1976 as a lobbying strategy to come up with an artificial number that could be used to wow Congressmen. It's worked wonderfully, mythic as it may be.

Current layoffs need to be considered (as do most things) from a 10-year perspective. A lot of industry observers have commented on companies being "bloated" and adding too many hires. Besides trimming back to earlier numbers, the big companies increasingly realize (it has taken them years) that it's smarter to let thousands of biotechs and research teams try to find good new drugs, rather than doing it in-house. To regard those layoffs as an abandonment of research misconstrues the corporate strategies.

Fourth, we never use "me-too." We speak of minor variations, and we say it's clinically valuable to have 3-4 in a given therapeutic class, but marginal gains fall quite low after that.

Fifth, our main point about innovation is that current criteria for approval and incentives strongly reward companies doing exactly what they are doing, developing scores of minor variations to fill their sales lines and market for good profits. We don't see any conspiracy here, only rational economic behavior by smart businessmen.

But while all new drug products are better than placebo or not much worse than a comparator, often against surrogate end points, most of those prove to be little better than last year's "better" drugs, or the years before. . . You can read detailed assessments by independent teams at several sites. Of course companies are delighted when new drugs are really better against clinical outcomes; but meantime we cite evidence that 80 percent of additional pharmaceutical costs go to buying newly patented minor variations. The rewards to do anything to get another cancer drug approved are so great that independent reviewers find few of them help patients much, and the area is corrupted by conflict-of-interest marketing.

So we conclude there is a "hidden business model" behind the much-touted business model of spending billions on R&D to discover breakthrough drugs that greatly improve health, which works fine until the "patent cliff" sends the company crashing to the canyon floor. The heroic tale is true to some extent and sometimes; but the hidden business model is to develop minor variations and make solid profits from them. That sounds like rational economic behavior to me.
The trouble is, all these drugs are under-tested for risks of harm, and all drugs are toxic to one degree or another. My book, The Risks of Prescription Drugs, assembles evidence that there is an epidemic of harmful side effects, largely from hundreds of drugs with few or no advantages to offset their risks of harm.

Is that what we want? My neighbors want clinically better drugs. They think the FDA approves clinically better drugs and don't realize that's far from the case. Most folks think "innovation" means clinically superior, but it doesn't. Most new molecules do not prove to be clinically superior. The term "innovation" is used vaguely to signal better drugs for patients; but while many new drugs are technically innovative, they do not help patients much. The false rhetoric of "innovative" and "innovation" needs to be replaced by what we want and mean: "clinically superior drugs."

If we want clinically better drugs, why don't we ask for them and pay according to added value - no more if no better, and a lot more if substantially better? Instead, standards for testing effectiveness and risk of harms are being lowered, and - guess what - that will reward still more minor variations by rational economic executives, not more truly superior "innovative" drugs.

I hope you find some of these points worthwhile and interesting. I'm trying to reply to 20 single-spaced pages of largely inaccurate criticism, often with no reasoned explanation for a given slur or dismissal. I hope we can do better than that. I thought the comments by Matt #27 and John Wayne #45 were particularly interesting.

Donald W. Light

Comments (71) + TrackBacks (0) | Category: "Me Too" Drugs | Drug Development | Drug Prices

August 9, 2012

Getting Drug Research Really, Really Wrong

Email This Entry

Posted by Derek

The British Medical Journal says that the "widely touted innovation crisis in pharmaceuticals is a myth". The British Medical Journal is wrong.

There, that's about as direct as I can make it. But allow me to go into more detail, because that's not the only thing they're wrong about. This is a new article entitled "Pharmaceutical research and development: what do we get for all that money?", and it's by Joel Lexchin (York University) and Donald Light of UMDNJ. And that last name should be enough to tell you where this is all coming from, because Prof. Light is the man who's publicly attached his name to an estimate that developing a new drug costs about $43 million.

I'm generally careful, when I bring up that figure around people who actually develop drugs, not to do so when they're in the middle of drinking coffee or working with anything fragile, because it always provokes startled expressions and sudden laughter. These posts go into some detail about how ludicrous that number is, but for now, I'll just note that it's hard to see how anyone who seriously advances that estimate can be taken seriously. But here we are again.

Light and Lexchin's article makes much of Bernard Munos' work (which we talked about here), which shows a relatively constant rate of new drug discovery. They should go back and look at his graph, because they might notice that the slope of the line in recent years has not kept up with the historical rate. And they completely leave out one of the other key points that Munos makes: that even if the rate of discovery were to have remained linear, the costs associated with it sure as hell haven't. No, it's all a conspiracy:

"Meanwhile, telling "innovation crisis" stories to politicians and the press serves as a ploy, a strategy to attract a range of government protections from free market, generic competition."

Ah, that must be why the industry has laid off thousands and thousands of people over the last few years: it's all a ploy to gain sympathy. We tell everyone else how hard it is to discover drugs, but when we're sure that there are no reporters or politicians around, we high-five each other at how successful our deception has been. Because that's our secret, according to Light and Lexchin. It's apparently not any harder to find something new and worthwhile, but we'd rather just sit on our rears and crank out "me-too" medications for the big bucks:

"This is the real innovation crisis: pharmaceutical research and development turns out mostly minor variations on existing drugs, and most new drugs are not superior on clinical measures. Although a steady stream of significantly superior drugs enlarges the medicine chest from which millions benefit, medicines have also produced an epidemic of serious adverse reactions that have added to national healthcare costs".

So let me get this straight: according to these folks, we mostly just make "minor variations", but the few really new drugs that come out aren't so great either, because of their "epidemic" of serious side effects. Let me advance an alternate set of explanations, one that I call, for lack of a better word, "reality". For one thing, "me-too" drugs are not identical, and their benefits are often overlooked by people who do not understand medicine. There are overcrowded therapeutic areas, but they're not common. The reason that some new drugs make only small advances on existing therapies is not because we like it that way, and it's especially not because we planned it that way. This happens because we try to make big advances, and we fail. Then we take what we can get.

No therapeutic area illustrates this better than oncology. Every new target in that field has come in with high hopes that this time we'll have something that really does the job. Angiogenesis inhibitors. Kinase inhibitors. Cell cycle disruptors. Microtubules, proteasomes, apoptosis, DNA repair, metabolic disruption of the Warburg effect. It goes on and on and on, and you know what? None of them work as well as we want them to. We take them into the clinic, give them to terrified people who have little hope left, and we watch as we provide them with, what? A few months of extra life? Was that what we were shooting for all along? Do we grin and shake each other's hands when the results come in? "Another incremental advance! Rock and roll!"

Of course not. We're disappointed, and we're pissed off. But we don't know enough about cancer (yet) to do better, and cancer turns out to be a very hard condition to treat. It should also be noted that the financial incentives are there to discover something that really does pull people back from the edge of the grave, so you'd think that we money-grubbing, public-deceiving, expense-padding mercenaries might be attracted by that prospect. Apparently not.

The same goes for Alzheimer's disease. Just how much money has the industry spent over the last quarter of a century on Alzheimer's? I worked on it twenty years ago, and God knows that never came to anything. Look at the steady march, march, march of failure in the clinic - and keep in mind that these failures tend to come late in the game, during Phase III. If you suggest to anyone in the business that you can run an Alzheimer's Phase III program and bring the whole thing in for $43 million, you'll be invited to stop wasting everyone's time. Bapineuzumab's trials have surely cost several times that, and Pfizer/J&J are still pressing on. And before that you had Elan working on active immunization, which is still going on, and you have Lilly's other antibody, which is still going on, and Genentech's (which is still going on). No one has high hopes for any of these, but we're still burning piles of money to try to find something. And what about the secretase inhibitors? How much time and effort has gone into beta- and gamma-secretase? What did the folks at Lilly think when they took their inhibitor way into Phase III, only to find out that it made Alzheimer's slightly worse instead of helping anyone? Didn't they realize that Professors Light and Lexchin were on to them? That they'd seen through the veil and figured out the real strategy of making tiny improvements on the existing drugs that attack the causes of Alzheimer's? What existing drugs that attack the causes of Alzheimer's are they talking about?

Honestly, I have trouble writing about this sort of thing, because I get too furious to be coherent. I've been doing this sort of work since 1989, and I have spent the great majority of my time working on diseases for which no good therapies existed. The rest of the time has been spent on new mechanisms, new classes of drugs that should (or should have) worked differently than the existing therapies. I cannot recall a time when I have worked on a real "me-too" drug of the sort that Light and Lexchin seem to think the industry spends all its time on.

That's because of yet another factor they have not considered: simultaneous development. Take a look at that paragraph above, where I mentioned all those Alzheimer's therapies. Let's be wildly, crazily optimistic and pretend that bapineuzumab manages to eke out some sort of efficacy against Alzheimer's (which, by the way, would put it right into that "no real medical advance" category that Light and Lexchin make so much of). And let's throw caution out the third-floor window and pretend that Lilly's solanezumab actually does something, too. Not much - there's a limit to how optimistic a person can be without pharmacological assistance - but something, some actual efficacy. Now here's what you have to remember: according to people like the authors of this article, whichever of these antibodies makes it through second is a "me-too" drug that offers only an incremental advance, if anything. Even though all this Alzheimer's work was started on a risk basis, in several different companies, with different antibodies developed in different ways, with no clue as to who (if anyone) might come out on top.

All right, now we get to another topic that articles like this latest one are simply not complete without. That's right, say it together: "Drug companies spend a lot more on marketing than they do on research!" Let's ignore, for the sake of argument, the large number of smaller companies that spend all of their money on R&D and none on marketing, because they have nothing to market yet. Let's even ignore the fact that over the years, the percentage of money being spent on drug R&D has actually been going up. No, let's instead go over this in a way that even professors at UMDNJ and York can understand:

Company X spends, let's say, $10 a year on research. (We're lopping off a lot of zeros to make this easier). It has no revenues from selling drugs yet, and is burning through its cash while it tries to get its first one onto the market. It succeeds, and the new drug will bring in $100 a year for the first two or three years, before the competition catches up with some of the incremental me-toos that everyone will switch to for mysterious reasons that apparently have nothing to do with anything working better. But I digress; let's get back to the key point. That $100-a-year figure assumes that the company spends $30 a year on marketing (advertising, promotion, patient awareness, brand-building, all that stuff). If the company does not spend all that time and effort, the new drug will only bring in $60 a year, but that's pure profit. (We're going to ignore all the other costs, assuming that they're the same between the two cases).

So the company can bring in $60 a year by doing no promotion, or it can bring in $70 a year after accounting for the expenses of marketing. The company will, of course, choose the latter. "But," you're saying, "what if all that marketing expense doesn't raise sales from $60 up to $100 a year?" Ah, then you are doing it wrong. The whole point, the raison d'etre of the marketing department, is to bring in more money than it spends. Marketing deals with the profitable side of the business; their job is to maximize those profits. If they spend more than the extra profits they generate, well, it's time to fire them, isn't it?
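
Since we're down to toy arithmetic anyway, here's that comparison as a minimal Python sketch - every number in it is one of the made-up figures from the paragraphs above, not real data:

    # Toy numbers from the example above (illustrative only, zeros lopped off).
    research_spend = 10           # annual R&D spend
    revenue_no_marketing = 60     # annual revenue with zero promotion
    revenue_with_marketing = 100  # annual revenue with a promotion campaign
    marketing_spend = 30          # annual cost of that campaign

    # Net contribution from the drug in each scenario (all other costs assumed equal).
    net_without = revenue_no_marketing                   # 60
    net_with = revenue_with_marketing - marketing_spend  # 70

    print("No marketing:  ", net_without)   # 60
    print("With marketing:", net_with)      # 70
    # The marketing spend earns its keep only while the extra revenue it generates
    # (here 100 - 60 = 40) exceeds its cost (30). If it doesn't, see above: time to
    # fire the marketing department.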

R&D, on the other hand, is not the profitable side of the business. Far from it. We are black holes of finance: huge sums of money spiral in beyond our event horizons, emitting piteous cries and futile streams of braking radiation, and are never seen again. The point is, these are totally different parts of the company, doing totally different things. Complaining that the marketing budget is bigger than the R&D budget is like complaining that a car's passenger compartment is bigger than its gas tank, or that a ship's sail is bigger than its rudder.

OK, I've spent about enough time on this for one morning; I feel like I need a shower. Let's get on to the part where Light and Lexchin recommend what we should all be doing instead:

What can be done to change the business model of the pharmaceutical industry to focus on more cost effective, safer medicines? The first step should be to stop approving so many new drugs of little therapeutic value. . .We should also fully fund the EMA and other regulatory agencies with public funds, rather than relying on industry generated user fees, to end industry’s capture of its regulator. Finally, we should consider new ways of rewarding innovation directly, such as through the large cash prizes envisioned in US Senate Bill 1137, rather than through the high prices generated by patent protection. The bill proposes the collection of several billion dollars a year from all federal and non-federal health reimbursement and insurance programmes, and a committee would award prizes in proportion to how well new drugs fulfilled unmet clinical needs and constituted real therapeutic gains. Without patents new drugs are immediately open to generic competition, lowering prices, while at the same time innovators are rewarded quickly to innovate again. This approach would save countries billions in healthcare costs and produce real gains in people’s health.

One problem I have with this is that the health insurance industry would probably object to having "several billion dollars a year" collected from it. And that "several" would not mean "two or three", for sure. But even if we extract that cash somehow - an extraction that would surely raise health insurance costs as it got passed along - we now find ourselves depending on a committee that will determine the worth of each new drug. Will these people determine that when the drug is approved, or will they need to wait a few years to see how it does in the real world? If the drug under- or overperforms, does the reward get adjusted accordingly? How, exactly, do we decide how much a diabetes drug is worth compared to one for multiple sclerosis, or TB? What about a drug that doesn't help many people, but helps them tremendously, versus a drug that's taken by a lot of people, but has only milder improvements for them? What if a drug is worth a lot more to people in one demographic versus another? And what happens as various advocacy groups lobby to get their diseases moved further up the list of important ones that deserve higher prizes and more incentives?

These will have to be some very, very wise and prudent people on this committee. You certainly wouldn't want anyone who's ever been involved with the drug industry on there, no indeed. And you wouldn't want any politicians - why, they might use that influential position to do who knows what. No, you'd want honest, intelligent, reliable people, who know a tremendous amount about medical care and pharmaceuticals, but have no financial or personal interests involved. I'm sure there are plenty of them out there, somewhere. And when we find them, why stop with drugs? Why not set up committees to determine the true worth of the other vital things that people in this country need each day - food, transportation, consumer goods? Surely this model can be extended; it all sounds so rational. I doubt if anything like it has ever been tried before, and it's certainly a lot better than the grubby business of deciding prices and values based on what people will pay for things (what do they know, anyway, compared to a panel of dispassionate experts?)

Enough. I should mention that when Prof. Light's earlier figure for drug expense came out, I had a brief correspondence with him, and I invited him to come to this site and try out his reasoning on people who develop drugs for a living. Communication seemed to dry up after that, I have to report. But that offer is still open. Reading his publications makes me think that he (and his co-authors) have never actually spoken with anyone who does this work or has any actual experience with it. Come on down, I say! We're real people, just like you. OK, we're more evil, fine. But otherwise. . .

Comments (74) + TrackBacks (0) | Category: "Me Too" Drugs | Business and Markets | Cancer | Drug Development | Drug Industry History | Drug Prices | The Central Nervous System | Why Everyone Loves Us

July 16, 2012

AstraZeneca Admits It Spent Too Much Money

Email This Entry

Posted by Derek

Looks like AstraZeneca's internal numbers agree with Matthew Herper's. The company was talking about its current R&D late last week, and this comment stands out:

Discovery head Mene Pangalos told reporters on Thursday that mistakes had been made in the past by encouraging quantity over quality in early drug selection.

"If you looked at our output in terms of numbers of candidates entering the clinic, we were one of the most productive companies in the world, dollar for dollar. If you rated us by how many drugs we launched, we were one of the least successful," he said.

Yep, sending compounds to the clinic is easy - you just declare them to be Clinical Candidates, and the job is done. Getting them through the clinic, now, that's harder, because at that point you're encountering things that can't be rah-rah-ed. Viruses and bacteria, neurons and receptors and tumor cells, they don't care so much about your goals statement and your Corporate Commitment to Excellence. In the end, that's one of the things I like most about research: the real world has the last laugh.

The news aggregator Biospace has a particularly misleading headline on all this: "AstraZeneca Claims Neuroscience Shake-Up is Paying Off ; May Advance at Least 8 Drugs to Final Tests by 2015". I can't find anyone from AZ putting it in quite those terms, fortunately. That would be like saying that my decision, back in Boston, to cut costs by not filling my gas tank is paying off as I approach Philadelphia.

Comments (27) + TrackBacks (0) | Category: Business and Markets | Clinical Trials | Drug Development

June 26, 2012

The Next Five Years in the Drug Industry

Email This Entry

Posted by Derek

Nature Reviews Drug Discovery has an article on the current state of drug development, looking at what's expected to be launched from 2012 to 2016. There's a lot of interesting information, but this is the sentence that brought me up short: "the global pipeline has stopped growing". The total number of known projects in the drug industry (preclinical to Phase III) now appears to have peaked in 2009, at just over 7700. It's now down to 7400, and the biggest declines are in the early stages, so the trend is going to continue for a while.

But before we all hit the panic button, it looks like this is a somewhat artificial decline, since it was based on an artificial peak. In 2006, the benchmark year for the 2007-2011 cohort of launched drugs, there were only about 6100 projects going. I'm not sure what led to the rise over the next three years after that, but we're still running higher. So while I can't say that it's healthy that the number of projects has been declining, we may be largely looking at some sort of artifact in the data. Worth keeping an eye on.

And the authors go on to say that this larger number of new projects, compared to the previous five-year period, should in fact lead to a slight rise in the number of new drugs approved, even if you assume that the success rates drop off a bit. They're guessing 30 to 35 launches per year, well above the post-2000 average. Peak sales for these new products, though, are probably not going to match the historical highs, so that needs to be taken into account.

More data: the coming cohort of new drugs is expected to be a bit more profitable, and a bit more heavily weighted towards small molecules rather than biologics. Two-thirds of the revenues from this coming group are expected to be from drugs that are already in some sort of partnership arrangement, and you'd have to think that this number will increase further for the later-blooming candidates. The go-it-alone blockbuster compound really does seem to be a relative rarity - the complexity and cost of large clinical trials, and the worldwide regulatory and marketing landscape have seen to that.

As for therapeutic area, oncology has the highest number of compounds in development (26% of them as of 2011). It's to the point that the authors wonder if there's an "oncology bubble" on the way, since there are between 2 and 3 compounds chasing each major oncology target. Personally, I think that these compounds are probably still varied enough to make places for themselves, considering the wildly heterogeneous nature of the market. But it's going to be a messy process, figuring out what compounds are useful for which cases.

So in the near term, overall, it looks like things are going to hold together. Past that five-year mark, though, predictions get fuzzier, and the ten-year situation is impossible to forecast at all. That, in fact, is going to be up to those of us doing early research. The shape we're in by that time will be determined, perhaps, by what we go out into the labs and do today. I have a tool compound to work up, to validate (I hope) an early assay, and another project to pay attention to this afternoon. 2022 is happening now.

Update: here are John LaMattina's thoughts on this analysis, asking about some things that may not have been taken into account.

Comments (16) + TrackBacks (0) | Category: Business and Markets | Drug Development

June 13, 2012

Live By The Bricks, Die By The Bricks

Email This Entry

Posted by Derek

I wanted to highlight a couple of recent examples from the literature to show what happens (all too often) when you start to optimize med-chem compounds. The earlier phases of a project tend to drive on potency and selectivity, and the usual way to get these things is to add more stuff to your structures. Then as you start to produce compounds that make it past those important cutoffs, your focus turns more to pharmacokinetics and metabolism, and sometimes you find you've made your life rather difficult. It's an old trap, and a well-known one, but that doesn't stop people from sticking a leg into it.

Take a look at these two structures from ACS Chemical Biology. The starting structure is a pretty generic-looking kinase inhibitor, and as the graphic to its left shows, it does indeed hit a whole slew of kinases. These authors extended the structure out to another loop of their desired target, c-Src, and as you can see, they now have a much more selective compound.
[Image: kinase%20inhibitor.png]
But at such a price! Four more aromatic rings, including the dread biphenyl, and only one sp3 carbon in the lot. The compound now tips the scales at MW 555, and looks about as soluble as the Chrysler Building. To be fair, this is an academic group, which means that they're presumably after a tool compound. That's a phrase that's used to excuse a lot of sins, but in this case they do have cellular assay data, which means that despite this compound's properties, it's managing to do something. Update: see this comment from the author on this very point. Be warned, though, if you're in drug discovery and you follow this strategy. Adding four flat rings and running up the molecular weight might work for you, but most of the time it will only lead to trouble - pharmacokinetics, metabolic clearance, toxicity, formulation.

My second example is from a drug discovery group (Janssen). They report work on a series of gamma-secretase modulators (GSMs) for Alzheimer's. You can tell from the paper that they had quite a wild ride with these things - for one, the activity in their mouse model didn't seem to correlate at all with the concentration of the compounds in the brain. Looking at those structures, though, you have to think that trouble is lurking, and so it is.
[Image: secretase.png]

"In all chemical classes, the high potency was accompanied by high lipophilicity (in general, cLogP >5) and a TPSA [topological polar surface area] below 75 Å, explaining the good brain penetration. However, the majority of compounds also suffered from hERG binding with IC50s below 1 μM, CyP inhibition and low solubility, particularly at pH = 7.4 (data not shown). These unfavorable ADME properties can likely be attributed to the combination of high lipophilicity and low TPSA.

That they can. By the time they got to that compound 44, some of these problems had been solved (hERG, CyP). But it's still a very hard-to-dose compound (they seem to have gone with a pretty aggressive suspension formulation) and it's still a greasy brick, despite its impressive in vivo activity. And that's my point. Working this way exposes you to one thing after another. Making greasy bricks often leads to potent in vitro assay numbers, but they're harder to get going in vivo. And if you get them to work in the animals, you often face PK and metabolic problems. And if you manage to work your way around those, you run a much higher risk of nonspecific toxicity. So guess what happened here? You have to go to the very end of the paper to find out:

As many of the GSMs described to date, the series detailed in this paper, including 44a, is suffering from suboptimal physicochemical properties: low solubility, high lipophilicity, and high aromaticity. For 44a, this has translated into signs of liver toxicity after dosing in dog at 20 mg/kg. Further optimization of the drug-like properties of this series is ongoing.

Back to the drawing board, in other words. I wish them luck, but I wonder how much of this structure is going to have to be ripped up and redone in order to get something cleaner?
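
None of this is hard to spot up front, either. For anyone who wants to run the same kind of back-of-the-envelope property check on their own structures, here's a minimal sketch using the open-source RDKit toolkit. The cutoffs are just the rough rules of thumb discussed above (molecular weight, aromatic ring count, fraction sp3, cLogP, TPSA), not criteria from either paper, and the example SMILES is a deliberately greasy stand-in, not one of the actual compounds:

    # A rough "greasy brick" check with RDKit (open-source cheminformatics toolkit).
    # The thresholds are illustrative rules of thumb, not criteria from either paper.
    from rdkit import Chem
    from rdkit.Chem import Crippen, Descriptors, Lipinski

    def brick_warnings(smiles):
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return ["could not parse SMILES"]
        warnings = []
        if Descriptors.MolWt(mol) > 500:
            warnings.append("MW %.0f > 500" % Descriptors.MolWt(mol))
        if Lipinski.NumAromaticRings(mol) >= 4:
            warnings.append("%d aromatic rings" % Lipinski.NumAromaticRings(mol))
        if Lipinski.FractionCSP3(mol) < 0.2:
            warnings.append("Fsp3 %.2f - very flat" % Lipinski.FractionCSP3(mol))
        if Crippen.MolLogP(mol) > 5:
            warnings.append("cLogP %.1f > 5" % Crippen.MolLogP(mol))
        if Descriptors.TPSA(mol) < 75:
            warnings.append("TPSA %.0f below 75 - brain-penetrant, but watch hERG and CYPs" % Descriptors.TPSA(mol))
        return warnings

    # p-Quaterphenyl as a flat, lipophilic stand-in structure, not a real drug candidate:
    print(brick_warnings("c1ccc(-c2ccc(-c3ccc(-c4ccccc4)cc3)cc2)cc1"))

None of these flags kills a compound by itself - the Janssen compound had impressive in vivo activity, after all - but the more of them that fire, the more of the downstream grief described above you should expect.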

Comments (39) + TrackBacks (0) | Category: Alzheimer's Disease | Cancer | Drug Development | Pharmacokinetics | Toxicology

June 12, 2012

Predicting Toxicology

Email This Entry

Posted by Derek

One of the major worries during a clinical trial is toxicity, naturally. There are thousands of reasons a compound might cause problems, and you can be sure that we don't have a good handle on most of them. We screen for what we know about (such as hERG channels for cardiovascular trouble), and we watch closely for signs of everything else. But when slow-building, low-incidence toxicity takes your compound out late in the clinic, it's always very painful indeed.

Anything that helps to clarify that part of the business is big news, and potentially worth a lot. But advances in clinical toxicology come on very slowly, because the only thing worse than not knowing what you'll find is thinking that you know and being wrong. A new paper in Nature highlights just this problem. The authors have a structural-similarity algorithm to try to test new compounds against known toxicities in previously tested compounds, which (as you can imagine) is an approach that's been tried in many different forms over the years. So how does this one fare?

To test their computational approach, Lounkine et al. used it to estimate the binding affinities of a comprehensive set of 656 approved drugs for 73 biological targets. They identified 1,644 possible drug–target interactions, of which 403 were already recorded in ChEMBL, a publicly available database of biologically active compounds. However, because the authors had used this database as a training set for their model, these predictions were not really indicative of the model's effectiveness, and so were not considered further.

A further 348 of the remaining 1,241 predictions were found in other databases (which the authors hadn't used as training sets), leaving 893 predictions, 694 of which were then tested experimentally. The authors found that 151 of these predicted drug–target interactions were genuine. So, of the 1,241 predictions not in ChEMBL, 499 were true; 543 were false; and 199 remain to be tested. Many of the newly discovered drug–target interactions would not have been predicted using conventional computational methods that calculate the strength of drug–target binding interactions based on the structures of the ligand and of the target's binding site.

Now, some of their predictions have turned out to be surprising and accurate. Their technique identified chlorotrianisene, for example, as a COX-1 inhibitor, and tests show that it seems to be, which wasn't known at all. The classic antihistamine diphenhydramine turns out to be active at the serotonin transporter. It's also interesting to see what known drugs light up the side effect assays the worst. Looking at their figures, it would seem that the topical antiseptic chlorhexidine (a membrane disruptor) is active all over the place. Another guanidine-containing compound, tegaserod, is also high up the list. Other promiscuous compounds are the old antipsychotic fluspirilene and the semisynthetic antibiotic rifaximin. (That last one illustrates one of the problems with this approach, which the authors take care to point out: toxicity depends on exposure. The dose makes the poison, and all that. Rifaximin is very poorly absorbed, and it would take very unusual dosing, like with a power drill, to get it to hit targets in a place like the central nervous system, even if this technique flags them).

The biggest problem with this whole approach is also highlighted by the authors, to their credit. You can see from those figures above that about half of the potentially toxic interactions it finds aren't real, and you can be sure that there are plenty of false negatives, too. So this is nowhere near ready to replace real-world testing; nothing is. But where it could be useful is in pointing out things to test with real-world assays, activities that you probably hadn't considered at all.
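
To make the general idea concrete: at its simplest, this kind of prediction amounts to "your new compound looks a lot like the known ligands of target X, so go test it against X." Here's a bare-bones sketch of that logic with RDKit fingerprints - the reference ligands and the Tanimoto cutoff are placeholders for illustration, and the published method uses a far more elaborate statistical model than this:

    # Bare-bones similarity flagging with RDKit Morgan fingerprints.
    # The reference ligands and the 0.6 Tanimoto cutoff are placeholders, purely
    # for illustration; the actual published approach is much more sophisticated.
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    def fingerprint(smiles):
        mol = Chem.MolFromSmiles(smiles)
        return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

    # Hypothetical known ligands of some off-target you'd rather not hit:
    offtarget_ligands = {
        "ligand_A": "CCN(CC)CCOC(=O)c1ccccc1",   # made-up amino ester
        "ligand_B": "CN1CCC(CC1)Oc1ccccc1",      # made-up aryl ether amine
    }
    reference_fps = {name: fingerprint(s) for name, s in offtarget_ligands.items()}

    def flag_candidate(candidate_smiles, cutoff=0.6):
        cand_fp = fingerprint(candidate_smiles)
        flags = []
        for name, ref_fp in reference_fps.items():
            sim = DataStructs.TanimotoSimilarity(cand_fp, ref_fp)
            print(name, round(sim, 2))
            if sim >= cutoff:
                flags.append(name)   # goes on the "check this in a real assay" list
        return flags

    print(flag_candidate("CCN(CC)CCOC(=O)c1ccc(N)cc1"))  # candidate resembling ligand_A

Anything that comes back flagged is a candidate for a real-world assay - which is exactly the "pointing out things to test" role described above, no more and no less.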

But the downside of that is that you could end up chasing meaningless stuff that does nothing but put the fear into you and delay your compound's development. That split, "stupid delay versus crucial red flag", is at the heart of clinical toxicology, and is the reason it's so hard to make solid progress in this area. So much is riding on these decisions: you could walk away from a compound, never developing one that would go on to clear billions of dollars and help untold numbers of patients. Or you could green-light something that would go on to chew up hundreds of millions of dollars of development costs (and even more in opportunity costs, considering what you could have been working on instead), or even worse, one that makes it onto the market and has to be withdrawn in a blizzard of lawsuits. It brings on a cautious attitude.

Comments (21) + TrackBacks (0) | Category: Drug Development | In Silico | Toxicology

June 5, 2012

Merck Finds Its Phase II Candidates For Sale on the Internet

Email This Entry

Posted by Derek

Via Pharmalot, it appears that a former WuXi employee helped himself to samples of two Merck Phase II clinical candidates that were under evaluation. The samples were then offered for sale.

Here's a link to a Google Translate version of a Chinese news report. It looks like gram quantities were involved, along with NMR spectra, with the compounds being provided to a middleman. It's not clear who bought them from him, but the article gives the impression that someone did, was satisfied with the transaction, and wanted more. But in the meantime, Merck did pick up on an offer made by this middleman to sell one of the compounds online, and immediately went after him, which unraveled the whole scheme. (The machine translation is pretty rocky, but I did appreciate that an idiom came through: it mentions that having these valuable samples in an unlocked cabinet was like putting fish in front of a cat).

I would think that this kind of thing is just the nightmare that WuXi's management fears - and if it isn't, it should be. The cost advantage to doing business with them (and other offshore contract houses) is still real, but not as large as it used to be. Stories like this can close that price gap pretty quickly.

Comments (45) + TrackBacks (0) | Category: Business and Markets | Drug Development | The Dark Side

June 4, 2012

Scaling Up Artemisinin

Email This Entry

Posted by Derek

A recent article in Science illustrates a number of points about drug development and scale-up. It's about artemisinin, the antimalarial. Peter Seeberger, a German professor of chemistry (Max Planck-Potsdam), has worked out what looks like a good set of conditions for a key synthetic step (dihydroartemisinic acid to artemisinin), and would like to see these used on large scale to bring the cost of the drug down.

That sounds like a reasonably simple story, but it isn't. Here are a few of the complications:

But Seeberger's method has yet to prove its mettle. It needs to be scaled up, and he can't say how much prices would come down if it worked. Using it in a large facility would require a massive investment, and so far, nobody has stepped up to the plate. What's more, pharma giant Sanofi will open a brand-new facility later this year to make artemisinin therapies based on Amyris's technology: yeast cells that produce a precursor of the drug. Although Seeberger says his discovery would complement that process, Sanofi says it's too late now to adopt it.

The usual route has been to extract artemisinin from its source, Artemisia annua. That's been quite a boom-and-bust cycle over the years, and the price has never really been steady (or particularly low, either). Amyris worked for some years to engineer yeast to produce artemisinic acid, which can then be extracted and converted into the final drug, and this is what's now being scaled up with Sanofi-Aventis.

That process also uses a photochemical oxidation, but in batch mode. I'm a big fan of flow chemistry, and I've done some flow photochemistry myself, and I can agree that when it's optimized, it can be a great improvement over such batch conditions. Seeberger's method looks promising, but Sanofi isn't ready to retool to use it when they have their current conditions worked out. Things seem to be at an impasse:

But what will happen with Seeberger's discovery is still unclear. Sanofi's plant is about to open, and the company isn't going to bet on an entirely new technique that has yet to prove that it can be scaled up. In an e-mail to Science, the company calls Seeberger's solution “a clever approach,” but says that “so far the competitivity of this technique has not been demonstrated.”

The ideal solution would be if other companies adopt the combination of Amyris's yeast cells and Seeberger's method, [Michigan supply-chain expert] Yadav says; “then, the price for the drugs could go down significantly.” But a spokesperson for OneWorld Health, the nonprofit pharmaceutical company that has backed Sanofi's project, says there are no plans to make the yeast cells available to any other party.

Seeberger himself is trying to make something happen:

On 19 April, Seeberger invited interested parties to a meeting in Berlin to explore the options. They included representatives of Artemisia growers and extractors, pharmaceutical companies GlaxoSmithKline and Boehringer Ingelheim, as well as the Clinton Foundation, UNITAID, and the German Agency for International Cooperation. (The Bill and Melinda Gates Foundation canceled at the last minute.) None of the funders wanted to discuss the meeting with Science. Seeberger says he was asked many critical questions—“But then the next day, my phone did not stop ringing.” He is now in discussions with several interested parties, he says.

As I say, I like his chemistry. But I can sympathize with the Sanofi people as well. Retooling a working production route is not something you undertake lightly, and the Seeberger chemistry will doubtless need some engineering along the way to reach its potential. The best solution seems to me to be basically what's happening: Sanofi cranks out the drug using its current process, which should help a great deal with the supply in the short term. Meanwhile, Seeberger tries to get his process ready for the big time, with the help of an industrial partner. I wish him luck, and I hope things don't stall out along the way. More on all this as it develops over the next few months.

Comments (22) + TrackBacks (0) | Category: Drug Development

May 24, 2012

An Oral Insulin Pill?

Email This Entry

Posted by Derek

Bloomberg has an article on Novo Nordisk and their huge ongoing effort to come up with an orally available form of insulin. That's been a dream for a long time now, but it's always been thought to be very close to impossible. The reasons for this are well known: your gut will treat a big protein like insulin pretty much like it treats a hamburger. It'll get digested, chopped into its constituent amino acids, and absorbed as non-medicinally-active bits which are used as raw material once inside the body. That's what digestion is. The gut wall specifically guards against letting large biomolecules through intact.

So you're up against a lot of defenses when you try to make something like oral insulin. Modifying the protein itself to make it more permeable and stable will be a big part of it, and formulating the pill to escape the worst of the gut environments will be another. Even then, you have to wonder about patient-to-patient variability in digestion, intestinal flora, and so on. The dosing is probably going to have to be pretty strict with respect to meals (and the content of those meals).

But insulin dosing is always going to be strict, because there's a narrow window to work in. That's one of the factors that's helped to sink so many other alternative-dosing schemes for it, most famously Pfizer's Exubera. The body's response to insulin is brittle in the extreme. If you take twice as much antihistamine as you should, you may feel funny. If you take twice as much insulin as you should, you're going to be on the floor, and you may stay there.

So I salute Novo Nordisk for trying this. The rewards will be huge if they get it to work, but it's a long way from working just yet.

Comments (32) + TrackBacks (0) | Category: Diabetes and Obesity | Drug Development | Pharmacokinetics

May 23, 2012

Another Vote Against Rhodanines

Email This Entry

Posted by Derek

For those of you who've had to explain to colleagues (in biology or chemistry) why you're not enthusiastic about the rhodanine compounds that came out of your high-throughput screening effort, there's now another paper to point them to.

The biological activity of compounds possessing a rhodanine moiety should be considered very critically despite the convincing data obtained in biological assays. In addition to the lack of selectivity, unusual structure–activity relationship profiles and safety and specificity problems mean that rhodanines are generally not optimizable.

That's well put, I think, although this has been a subject of debate. I would apply the same language to the other "PAINS" mentioned in the Baell and Holloway paper, which brought together a number of motifs that have set off alarm bells over the years. These structures are guilty until proven innocent. If you have a high-value target and feel that it's worth the time and trouble to prove them so, that may well be the right decision. But if you have something else to advance, you're better off doing so. As I've said here before, ars longa, pecunia brevis.
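
Flagging these things before anyone gets attached to them is a straightforward substructure search. Here's a minimal sketch with RDKit; the SMARTS pattern below is one way to describe the parent rhodanine core, and the Baell and Holloway paper has a much longer list of alarm-bell substructures (RDKit also ships a prebuilt PAINS filter catalog derived from it):

    # Flagging rhodanine-containing hits with a substructure search (RDKit).
    from rdkit import Chem

    # One way to write the rhodanine core (2-thioxo-1,3-thiazolidin-4-one) as SMARTS;
    # it matches the parent ring with or without the usual 5-arylidene decoration.
    RHODANINE = Chem.MolFromSmarts("O=C1CSC(=S)N1")

    def is_rhodanine(smiles):
        mol = Chem.MolFromSmiles(smiles)
        return mol is not None and mol.HasSubstructMatch(RHODANINE)

    # A made-up 5-benzylidene rhodanine of the sort that lights up HTS readouts:
    print(is_rhodanine("O=C1NC(=S)SC1=Cc1ccccc1"))   # True
    print(is_rhodanine("CCOC(=O)c1ccccc1"))          # False - an ordinary ester
    # For broader triage of a screening deck, rdkit.Chem.FilterCatalog provides
    # ready-made PAINS filters based on the Baell and Holloway substructures.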

Comments (3) + TrackBacks (0) | Category: Drug Assays | Drug Development

May 22, 2012

The NIH's Drug Repurposing Initiative: Will It Be a Waste?

Email This Entry

Posted by Derek

The NIH's attempt to repurpose shelved development compounds and other older drugs is underway:

The National Institutes of Health (NIH) today announced a new plan for boosting drug development: It has reached a deal with three major pharmaceutical companies to share abandoned experimental drugs with academic researchers so they can look for new uses. NIH is putting up $20 million for grants to study the drugs.

"The goal is simple: to see whether we can teach old drugs new tricks," said Health and Human Services Secretary Kathleen Sebelius at a press conference today that included officials from Pfizer, AstraZeneca, and Eli Lilly. These companies will give researchers access to two dozen compounds that passed through safety studies but didn't make it beyond mid-stage clinical trials. They shelved the drugs either because they didn't work well enough on the disease for which they were developed or because a business decision sidelined them.

There are plenty more where those came from, and I certainly wish people luck finding uses for them. But I've no idea what the chances for success might be. On the one hand, having a compound that's passed all the preclinical stages of development and has then been into humans is no small thing. On that ever-present other hand, though, randomly throwing these compounds against unrelated diseases is unlikely to give you anything (there aren't enough of them to do that). My best guess is that they have a shot in closely related disease fields - but then again, testing widely might show us that there are diseases that we didn't realize were related to each other.

John LaMattina is skeptical:

Well, the NIH has recently expanded the remit of NCATS. NCATS will now be testing drugs that have been shelved by the pharmaceutical industry for other potential uses. The motivation for this is simple. They believe that these once promising but failed compounds could have other uses that the inventor companies haven’t yet identified. I’d like to reiterate the view of Dr. Vagelos – it’s fairy time again.

My views on this sort of initiative, which goes by a variety of names – “drug repurposing,” “drug repositioning,” “reusable drugs” – have been previously discussed in my blog. I do hope that people can have success in this type of work. But I believe successes are going to be rare.

The big question is, rare enough to count the money and time as wasted, or not? I guess we'll find out. Overall, I'd rather start with a compound that I know does what I want it to do, and then try to turn it into a drug (phenotypic screening). Starting with a compound that you know is a drug, but doesn't necessarily do what you want it to, is going to be tricky.

Comments (33) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | Drug Development | Drug Industry History

May 21, 2012

A New Way to Kill Amoebas, From An Old Drug

Email This Entry

Posted by Derek

Here's a good example of phenotypic screening coming through with something interesting and worthwhile: a screen against Entamoeba histolytica, the protozoan that causes amoebic dysentery and kills tens of thousands of people every year. (Press coverage here).

It wasn't easy. The organism is an anaerobe, which is a bad fit for most robotic equipment, and engineering a decent readout for the assay wasn't straightforward, either. They did have a good positive control, though - the nitroimidazole drug metronidazole, which is the only agent approved currently against the parasite (and to which it's becoming resistant). A screen of nearly a thousand known drugs and bioactive compounds showed eleven hits, of which one (auranofin) was much more active than metronidazole itself.

Auranofin's an old arthritis drug. It's a believable result, because the compound has also been shown to have activity against trypanosomes, Leishmania parasites, and Plasmodium malaria parasites. This broad-spectrum activity makes some sense when you realize that the drug's main function is to serve as a delivery vehicle for elemental gold, whose activity in arthritis is well-documented but largely unexplained. (That activity is also the basis for persistent theories that arthritis may have an infectious-disease component).

The target in this case may well be arsenite-inducible RNA-associated protein (AIRAP), which was strongly induced by drug treatment. The paper notes that arsenite and auranofin are both known inhibitors of thioredoxin reductase, which strongly suggests that this is the mechanistic target here. The organism's anaerobic lifestyle fits in with that; this enzyme would presumably be its main (perhaps only) path for scavenging reactive oxygen species. It has a number of important cysteine residues, which are very plausible candidates for binding to a metal like gold. And sure enough, auranofin (and two analogs) are potent inhibitors of the purified amoeba enzyme.

The paper takes the story all the way to animal models, where auranofin completely outperforms metronidazole. The FDA has now given it orphan-drug status for amebiasis, and the way appears clear for a completely new therapeutic option in this disease. Congratulations to all involved; this is excellent work.

Comments (10) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | Drug Development | Infectious Diseases

A Molecular Craigslist?

Email This Entry

Posted by Derek

Mat Todd at the University of Sydney (whose open-source drug discovery work on schistosomiasis I wrote about here) has an interesting chemical suggestion. His lab is also involved in antimalarial work (here's an update, for those interested, and I hope to post about this effort more specifically). He's wondering about whether there's room for a "Molecular Craigslist" for efforts like these:

Imagine there is a group somewhere with expertise in making these kinds of compounds, and who might want to make some analogs as part of a student project, in return for collaboration and co-authorship? What about a Uni lab which might be interested in making these compounds as part of an undergrad lab course?

Wouldn’t it be good if we could post the structure of a molecule somewhere and have people bid on providing it? i.e. anyone can bid – commercial suppliers, donators, students?

Is there anything like this? Well, databases like Zinc and Pubchem can help in identifying commercial suppliers and papers/patents where groups have made related compounds, but there’s no tendering process where people can post molecules they want. Science Exchange has, I think, commercial suppliers, but not a facility to allow people to donate (I may be wrong), or people to volunteer to make compounds (rather than be listed as generic suppliers). Presumably the same goes for eMolecules, and Molport?

Is there a niche here for a light client that permits the process I’m talking about? Paste your Smiles, post the molecule, specifying a purpose (optional), timeframe, amount, type of analytical data needed, and let the bidding commence?

The closest thing I can think of is Innocentive, which might be pretty close to what he's talking about. It's reasonably chemistry-focused as well. Any thoughts out there?
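
Just to make the suggestion concrete, the data model for such a light client wouldn't need to be much - something along these lines, where every field name and value is hypothetical and purely illustrative:

    # A sketch of what a "molecular Craigslist" posting might store. Every field
    # name and value here is hypothetical - this is not an existing service or API.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MoleculeRequest:
        smiles: str                        # structure wanted, pasted as SMILES
        purpose: Optional[str] = None      # optional, e.g. "antimalarial analog series"
        amount_mg: float = 100.0           # quantity needed
        timeframe: str = "3 months"
        analytical_data: tuple = ("1H NMR", "LC-MS")   # characterization required

    @dataclass
    class Bid:
        bidder: str           # commercial supplier, donating lab, or student group
        price_usd: float      # 0.0 for an offer to donate or collaborate
        note: str = ""        # e.g. "undergrad project, co-authorship requested"

    request = MoleculeRequest(smiles="CC(=O)Nc1ccc(O)cc1", purpose="assay standard")
    bids = [Bid("University lab X", 0.0, "in return for co-authorship"),
            Bid("Supplier Y", 450.0)]
    print(min(bids, key=lambda b: b.price_usd).bidder)   # pick the most attractive offer

The software side, in other words, looks like the easy part; getting enough people to post and to bid is the harder one.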

Comments (19) + TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | Drug Development | Infectious Diseases

May 17, 2012

A Preventative Trial for Alzheimer's: The Right Experiment

Email This Entry

Posted by Derek

Alzheimer's disease is in the news, as the first major preventative drug trial gets underway. I salute the people who have made this happen, because we're bound to learn a lot from the attempt, even while I fear the chances for success are not that good.

A preventative trial for Alzheimer's would, under normal circumstances, be a nightmarish undertaking. The disease is quite variable and comes on slowly, and it's proven very difficult to predict who might start to show symptoms as they age. You'd be looking at dosing a very large number of people (thousands, even tens of thousands?) for a very long time (years, maybe a decade or two?) in order to have a chance at statistical significance. And you would, in the course of things, be giving a lot of drug to a lot of people who (in the end) would have turned out not to need it. No, it's no surprise that no one's gone that route.

But there's a way out of that impasse: find a population with some sort of amyloid-pathway mutation. Now you know exactly who will come down with symptoms, and (unfortunately) you also know that they're going to come down with them earlier and more quickly as well. There are several of these around the world; the "Swedish" and "Dutch" mutations are probably the most famous. There's a Colombian mutation too, with a well-defined patient population that's been studied for years, and that's where this new study will take place.

About 300 people will be given an experimental antibody therapy to amyloid protein, crenezumab. This was developed by AC Immune in Switzerland and licensed to Genentech, and is one of many amyloid-targeted antibodies that have come along over the years. (The best-known is bapineuzumab, currently in Phase III). Genentech (Roche) will be putting up the majority of the money for the trial ($65 million, with $16 million from the NIH and $15 million in private foundation money). Just in passing, weren't some people trying to convince everyone a year ago that it only costs $43 million total to develop a new drug? Har, har.

100 people with the mutation will get the antibody every two weeks, and 100 more will get placebo. There are also 100 non-carriers mixed in, who will all get placebo, because some carriers have indicated that they don't want to know their status. Everyone will go through a continuing battery of cognitive and psychological tests, as well as brain imaging and a great deal of blood work, which (if we're lucky) could furnish tips towards clinical biomarkers for future trials.

So overall, I think that this trial is an excellent idea, and I very much hope that a lot of useful information comes out of it. But I've no firm hopes that it will pan out therapeutically. This will be a direct test of the amyloid hypothesis for Alzheimer's, and although there's a tremendous amount of evidence for that line of thought, there's a lot against it as well. Anyone who really thinks they know what will happen in this situation hasn't thought hard enough about it. But that's the best kind of experiment, isn't it?

Comments (18) + TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials | Drug Development

May 11, 2012

Competitive Intelligence: Too Much or Too Little?

Email This Entry

Posted by Derek

Drug companies are very attuned to competitive intelligence. There's a lot of information sloshing around out there, and you'd be wise to pay attention to it. Publications in journals are probably the least of it - by the time something gets written up for publication from inside a pharma company, it's either about to be on the drugstore shelves or it never will be at all. Patents are far more essential, and if you're going to watch anything, you should watch the patent applications in your field.

But there's more. Meetings are a big source of disclosure, as witness the Wall Street frenzies around ASCO and the like. Talks and posters release information that won't show up in the literature for a long time (if indeed it ever does). And there are plenty of other avenues. The question is, though, how much time and money do you want to spend on this sort of thing?

There are commercial services (such as Integrity) that monitor companies, compounds, and therapeutic areas in this fashion, and they're happy to sell you their services, which are not cheap. But figuring out the cost/benefit ratio isn't easy. My guess is that these things, while useful, can be thought of as insurance. You're paying to make sure that nothing big happens that you're unaware of (or unaware of in enough time).

So here's a question for the readership: has competitive intelligence ever made a big difference for you? Positive and negative results both welcome; "I'm so glad we found out about X" versus "I really wish we'd known about Y". Any thoughts?

Comments (16) + TrackBacks (0) | Category: Drug Development

May 3, 2012

A Long-Delayed COX2 Issue Gets Settled - For $450 Million?

Email This Entry

Posted by Derek

Has the last shot been fired, very quietly, in the COX-2 discovery wars? Here's the background, in which some readers of this site have probably participated at various times. Once it was worked out that the nonsteroidal antiinflammatory drugs (aspirin, ibuprofen et al.) were inhibitors of the enzyme cyclooxygenase, it began to seem likely that there were other forms of the enzyme as well. But for a while, no one could put their hands on one. That changed in the early 1990s, when Harvey Herschman at UCLA reported the mouse COX2 gene. The human analog was discovered right on the heels of that one, with priority usually given to Dan Simmons of BYU, with Donald Young of the University of Rochester there at very nearly the same time.

The Rochester story is one that many readers will be familiar with. The university, famously, obtained a patent for compounds that exerted a therapeutic effect through inhibition of COX-2, without specifying what compounds those might be. They did not, in fact, have any, nor did they give any hints about what they'd look like, and this is what sank them in the end when the university lost its case against Searle (and its patent) for not fulfilling the "written description" requirement.

But there was legal action on the BYU end of things, too. Simmons and the university filed suit several years ago, saying that Simmons had entered into a contract with Monsanto in 1991 to discover COX2 inhibitors. The suit claimed that Monsanto had (wrongly) advised Simmons not to file for a patent on his discoveries, and had also reversed course, terminating the deal to concentrate on the company's internal efforts instead once it had obtained what it needed from the Simmons work.

That takes us to the tangled origin of the COX2 chemical matter. The progenitor compound is generally taken to be DuP-697, which was discovered and investigated before the COX-2 enzyme was even characterized. The compound had a strong antiinflammatory profile which was nonetheless different from the NSAIDs, which led to strong suspicions that it was indeed acting through the putative "other cyclooxygenase". And so it proved, once the enzyme was discovered - and a look at its structure next to the marketed drugs shows what a robust structural series it turned out to found.

One big difference between the BYU case and the Rochester case was that Simmons did indeed have a contract, and it was breach of contract that formed the basis for the suit. The legal maneuverings have been going on for several years now. But now Pfizer has issued a press release saying that they have reached "an amicable settlement on confidential terms". The only real detail given is that they're going to establish the Dan Simmons Chair at BYU in recognition of his work.

But there may be more to it than that. Pfizer has also reported taking a $450 million charge against earnings related to this whole matter, which certainly makes one think of Latin sayings, among them post hoc, ergo propter hoc and especially quid pro quo. We may not ever get the full details, since part of the deal would presumably include not releasing them. But it looks like a substantial sum has changed hands.

Comments (12) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Patents and IP

April 30, 2012

India's First Drug Isn't India's First Drug

Email This Entry

Posted by Derek

There have been a number of headlines the last few days about Ranbaxy's Synriam, an antimalarial that's being touted as the first new drug developed inside the Indian pharma industry (and Ranbaxy as the first Indian company to do it).

But that's not quite true, as this post from The Allotrope makes clear. (Its author, Akshat Rathi, found one of my posts when he started digging into the story). Yes, Synriam is a mixture of a known antimalarial (piperaquine) and arterolane. And arterolane was definitely not discovered in India. It was part of a joint effort from the US, UK, Australia, and Switzerland, coordinated by the Swiss-based Medicines for Malaria Venture.

Ranbaxy did take on the late-stage development of this drug combination, after MMV backed out due to not-so-impressive performance in the clinic. As Rathi puts it:

Although Synriam does not qualify as ‘India’s first new drug’ (because none of its active ingredients were wholly developed in India), Ranbaxy deserves credit for being the first Indian pharmaceutical company to launch an NCE before it was launched anywhere else in the world.

And that's something that not many countries have done. I just wish that Ranbaxy were a little more honest about that in their press release.

Comments (8) + TrackBacks (0) | Category: Drug Development | Infectious Diseases

April 12, 2012

A Federation of Independent Researchers?

Email This Entry

Posted by Derek

I've had an interesting e-mail from a reader who wants to be signed as "Mrs. McGreevy", and it's comprehensive enough that I'm going to reproduce it in full below.

As everyone but the editorial board of C&E News has noticed, jobs in chemistry are few and far between right now. I found your post on virtual biotechs inspiring, but it doesn't look like anyone has found a good solution for how to support these small firefly businesses until they find their wings, so to speak. Lots of editorials, lots of meetings, lots of rueful headshaking, no real road map forward for unemployed scientists.

I haven't seen this proposed anywhere else, so I'm asking you and your readership if this idea would fly:

What about a voluntary association of independent research scientists?

I'm thinking about charging a small membership fee (for non-profit administration and hard costs) and using group buying power for the practical real-world support a virtual biotech would need:

1. Group rates on health and life insurance.

How many would-be entrepreneurs are stuck in a job they hate because of the health care plan, or even worse, are unemployed or underemployed and uninsurable, quietly draining their savings accounts and praying no one gets really sick? I have no idea how this would work across state lines, or if it is even possible, but would it hurt to find out? Is anyone else looking?

2. Group rates on access to journals and library services.

This is something I do know a bit about. My M.S. is in library science, and I worked in the Chemistry Library in a large research institution for years during grad school. What if there were one centralized virtual library to which unaffiliated researchers across the country could log in for ejournal access? What if one place could buy and house the print media that start-ups would need to access every so often, and provide a librarian to look things up-- it's not like everyone needs their own print copy of the Canada & US Drug Development Industry & Outsourcing Guide 2012 at $150 a pop. (But if 350 people paid $1 a year for a $350/yr online subscription . . . )

Yes, some of you could go to university libraries and look these things up and print off articles to read at home, but some of you can't. You're probably violating some sort of terms of service agreement the library and publisher worked out anyway. It's not like anyone is likely to bust you unless you print out stacks and stacks of papers, but still. It's one more hassle for a small company to deal with, and everyone will have to re-invent the wheel and waste time and energy negotiating access on their own.

3. How about an online community for support and networking-- places for blogs, reviews, questions, answers, exchanges of best practices, or even just encouragement for that gut-wrenching feeling of going out on your own as a new entrepreneur?

4. What sort of support for grantwriting is out there? Is there a hole that needs to be filled?

5. How about a place to advertise your consulting services or CRO, or even bid for a contract? Virtual RFP posting?

6. Would group buying power help negotiate rates with CROs? How about rates for HTS libraries, for those of you who haven't given up on it completely?

Is there a need for this sort of thing? Would anyone use it if it were available? How much would an unaffiliated researcher be willing to pay for the services? Does anyone out there have an idea of what sort of costs are involved, and what sort of critical mass it would take to achieve the group buying power needed to make this possible?

I'd be happy to spark a discussion on what a virtual biotech company needs besides a spare bedroom and a broadband connection, even if the consensus opinion is that the OP is an ill-informed twit with an idea that will never fly. What do you need to get a virtual biotech started? How do we make it happen? There are thousands of unemployed lab scientists, and I refuse to believe that the only guy making a living these days from a small independently-funded lab is Bryan Cranston.

A very worthy topic indeed, and one whose time looks to have come. Thoughts on how to make such a thing happen?

Comments (59) + TrackBacks (0) | Category: Business and Markets | Drug Development | General Scientific News | The Scientific Literature

April 4, 2012

The Artificial Intelligence Economy?

Email This Entry

Posted by Derek

Now here's something that might be about to remake the economy, or (on the other robotic hand) it might not be ready to do that just yet. And it might be able to help us out in drug R&D, or it might turn out to be mostly beside the point. What the heck am I talking about, you ask? The so-called "Artificial Intelligence Economy". As Adam Ozimek says, things are looking a little more futuristic lately.

He's talking about things like driverless cars and quadrotors, and Tyler Cowen adds the examples of things like Apple's Siri and IBM's Watson, as part of a wider point about American exports:

First, artificial intelligence and computing power are the future, or even the present, for much of manufacturing. It’s not just the robots; look at the hundreds of computers and software-driven devices embedded in a new car. Factory floors these days are nearly empty of people because software-driven machines are doing most of the work. The factory has been reinvented as a quiet place. There is now a joke that “a modern textile mill employs only a man and a dog—the man to feed the dog, and the dog to keep the man away from the machines.”

The next steps in the artificial intelligence revolution, as manifested most publicly through systems like Deep Blue, Watson and Siri, will revolutionize production in one sector after another. Computing power solves more problems each year, including manufacturing problems.

Two MIT professors have written a book called Race Against the Machine about all this, and it appears to be sort of a response to Cowen's earlier book The Great Stagnation. (Here's an article of theirs in The Atlantic making their case).

One of the export-economy factors that it (and Cowen) bring up is that automation makes a country's wages (and labor costs in general) less of a factor in exports, once you get past the capital expenditure. And as the size of that expenditure comes down, it becomes easier to make that leap. One thing that means, of course, is that less-skilled workers find it harder to fit in. Here's another Atlantic article, from the print magazine, which looked at an auto-parts manufacturer with a factory in South Carolina (the whole thing is well worth reading):

Before the rise of computer-run machines, factories needed people at every step of production, from the most routine to the most complex. The Gildemeister (machine), for example, automatically performs a series of operations that previously would have required several machines—each with its own operator. It’s relatively easy to train a newcomer to run a simple, single-step machine. Newcomers with no training could start out working the simplest and then gradually learn others. Eventually, with that on-the-job training, some workers could become higher-paid supervisors, overseeing the entire operation. This kind of knowledge could be acquired only on the job; few people went to school to learn how to work in a factory.
Today, the Gildemeisters and their ilk eliminate the need for many of those machines and, therefore, the workers who ran them. Skilled workers now are required only to do what computers can’t do (at least not yet): use their human judgment.

But as that article shows, more than half the workers in that particular factory are, in fact, rather unskilled, and they make a lot more than their Chinese counterparts do. What keeps them employed? That calculation on what it would take to replace them with a machine. The article focuses on one of those workers in particular, named Maddie:

It feels cruel to point out all the Level-2 concepts Maddie doesn’t know, although Maddie is quite open about these shortcomings. She doesn’t know the computer-programming language that runs the machines she operates; in fact, she was surprised to learn they are run by a specialized computer language. She doesn’t know trigonometry or calculus, and she’s never studied the properties of cutting tools or metals. She doesn’t know how to maintain a tolerance of 0.25 microns, or what tolerance means in this context, or what a micron is.

Tony explains that Maddie has a job for two reasons. First, when it comes to making fuel injectors, the company saves money and minimizes product damage by having both the precision and non-precision work done in the same place. Even if Mexican or Chinese workers could do Maddie’s job more cheaply, shipping fragile, half-finished parts to another country for processing would make no sense. Second, Maddie is cheaper than a machine. It would be easy to buy a robotic arm that could take injector bodies and caps from a tray and place them precisely in a laser welder. Yet Standard would have to invest about $100,000 on the arm and a conveyance machine to bring parts to the welder and send them on to the next station. As is common in factories, Standard invests only in machinery that will earn back its cost within two years. For Tony, it’s simple: Maddie makes less in two years than the machine would cost, so her job is safe—for now. If the robotic machines become a little cheaper, or if demand for fuel injectors goes up and Standard starts running three shifts, then investing in those robots might make sense.

At this point, some similarities to the drug discovery business will be occurring to readers of this blog, along with some differences. The automation angle isn't as important, or not yet. While pharma most definitely has a manufacturing component (and how), the research end of the business doesn't resemble it very much, despite numerous attempts by earnest consultants and managers to make it so. From an auto-parts standpoint, there's little or no standardization at all in drug R&D. Every new drug is like a completely new part that no one's ever built before; we're not turning out fuel injectors or alternators. Everyone knows how a car works. Making a fundamental change in that plan is a monumental challenge, so the auto-parts business is mostly about making small variations on known components to the standards of a given customer. But in pharma - discovery pharma, not the generic companies - we're wrenching new stuff right out of thin air, or trying to.

So you'd think that we wouldn't be feeling the low-wage competitive pressure so much, but as the last ten years have shown, we certainly are. Outsourcing has come up many a time around here, and the very fact that it exists shows that not all of drug research is quite as bespoke as we might think. (Remember, the first wave of outsourcing, which is still very much a part of the business, was the move to send the routine methyl-ethyl-butyl-futile analoging out somewhere cheaper). And this takes us, eventually, to the Pfizer-style split between drug designers (high-wage folks over here) and the drug synthesizers (low-wage folks over there). Unfortunately, I think that you have to go the full reductio ad absurdum route to get that far, but Pfizer's going to find out for us if that's an accurate reading.

What these economists are also talking about is, I'd say, the next step beyond Moore's Law: once we have all this processing power, how do we use it? The first wave of computation-driven change happened because of the easy answers to that question: we had a lot of number-crunching that was being done by hand, or very slowly by some route, and we now had machines that could do what we wanted to do more quickly. This newer wave, if wave it is, will be driven more by software taking advantage of the hardware power that we've been able to produce.

The first wave didn't revolutionize drug discovery in the way that some people were hoping for. Sheer brute force computational ability is of limited use in drug discovery, unfortunately, but that's not always going to be the case, especially as we slowly learn how to apply it. If we really are starting to get better at computational pattern recognition and decision-making algorithms, where could that have an impact?

It's important to avoid what I've termed the "Andy Grove fallacy" in thinking about all this. I think that it is a result of applying first-computational-wave thinking too indiscriminately to drug discovery, which means treating it too much like a well-worked-out human-designed engineering process. Which it certainly isn't. But this second-wave stuff might be more useful.

I can think of a few areas: in early drug discovery, we could use help teasing patterns out of large piles of structure-activity relationship data. I know that there are (and have been) several attempts at doing this, but it's going to be interesting to see if we can do it better. I would love to be able to dump a big pile of structures and assay data points into a program and have it say the equivalent of "Hey, it looks like an electron-withdrawing group in the piperidine series might be really good, because of its conformational similarity to the initial lead series, but no one's ever gotten back around to making one of those because everyone got side-tracked by the potency of the chiral amides".
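
Even something crude along those lines would be a start. Here's a minimal sketch of what I mean - a toy pass over a made-up SAR table that just flags series/substituent combinations that look potent but are barely sampled. The column names, the numbers, and the pandas approach are all invented for illustration; nobody's actual system looks like this:

```python
import pandas as pd

# Toy SAR table - in real life this would come out of the project database.
sar = pd.DataFrame({
    "series":      ["piperidine", "piperidine", "chiral amide", "chiral amide", "piperidine"],
    "substituent": ["EWG", "EDG", "EWG", "EDG", "EWG"],
    "pIC50":       [7.9, 6.1, 8.3, 8.1, 8.0],
})

# Average potency and example count for each series/substituent combination.
summary = (sar.groupby(["series", "substituent"])["pIC50"]
              .agg(mean_pIC50="mean", n="count")
              .reset_index())

# Flag the cells that look potent but have hardly been explored - the
# "no one ever got back around to making one of those" cases.
underexplored = summary[(summary["mean_pIC50"] >= 7.5) & (summary["n"] <= 2)]
print(underexplored.sort_values("mean_pIC50", ascending=False))
```

None of that is any smarter than a pivot table, of course - the interesting part is whether real pattern-recognition software can supply the "conformational similarity to the initial lead series" reasoning on top of it.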

Software that chews through stacks of PK and metabolic stability data would be worth having, too, because there sure is a lot of it. There are correlations in there that we really need to know about, that could have direct relevance to clinical trials, but I worry that we're still missing some of them. And clinical trial data itself is the most obvious place for software that can dig through huge piles of numbers, because those are the biggest we've got. From my perspective, though, it's almost too late for insights at that point; you've already been spending the big money just to get the numbers themselves. But insights into human toxicology from all that clinical data, that stuff could be gold. I worry that it's been like the concentration of gold in seawater, though: really there, but not practical to extract. Could we change that?

All this makes me actually a bit hopeful about experiments like this one that I described here recently. Our ignorance about medicine and human biochemistry is truly spectacular, and we need all the help we can get in understanding it. There have to be a lot of important things out there that we just don't understand, or haven't even realized the existence of. That lack of knowledge is what gives me hope, actually. If we'd already learned what there is to know about discovering drugs, and were already doing the best job that could be done, well, we'd be in a hell of a fix, wouldn't we? But we don't know much, we're not doing it as well as we could, and that provides us with a possible way out of the fix we're in.

So I want to see as much progress as possible in the current pattern-recognition and data-correlation driven artificial intelligence field. We discovery scientists are not going to automate ourselves out of business so quickly as factory workers, because our work is still so hypothesis-driven and hard to define. (For a dissenting view, with relevance to this whole discussion, see here). It's the expense of applying the scientific method to human health that's squeezing us all, instead, and if there's some help available in that department, then let's have it as soon as possible.

Comments (32) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History | In Silico | Pharmacokinetics | Toxicology

March 28, 2012

Winning Ugly and Failing Gracefully

Email This Entry

Posted by Derek

A recent discussion with colleagues turned around the question: "Would you rather succeed ugly or fail gracefully?" In drug discovery terms, that could be rephrased "Would you rather get a compound through the clinic after wrestling with a marginal structure, worrying about tox, having to fix the formulation three times, and so on, or would you rather work on something that everyone agrees is a solid target, with good chemical matter, SAR that makes sense, leading to a potent, selective, clean compound that dies anyway in Phase II?"

I vote for option number one, if those are my choices. But here's the question at the heart of a lot of the debates about preclinical criteria: do more programs like that die, or do more programs like option number two die? I tend to think that way back early in the process, when you're still picking leads, you're better off with non-ugly chemical matter. We're only going to make it bigger and greasier, so start with as pretty a molecule as you can. But as things go on, and as you get closer to the clinic, you have to face up to the fact that no matter how you got there, no one really knows what's going to happen once you're in humans. You don't really know if your mechanism is correct (Phase II), and you sure don't know if you're going to see some sort of funny tox or long-term effect (Phase III). The chances of those are still higher if your compound is exceptionally greasy, so I think that everyone can agree that (other things being equal) you're better off with a lower logP. But what else can you trust? Not much.

The important thing is getting into the clinic, because that's where all the big questions are answered. And it's also where the big money is spent, so you have to be careful, on the other side of the equation, and not just shove all kinds of things into humans. You're going to run out of time and cash, most likely, before something works. But if you kill everything off before it gets that far, you're going to run out of both of those, too, for sure. You're going to have to take some shots at some point, and those will probably be with compounds that are less than ideal. A drug is a biologically active chemical compound that has things wrong with it.

There's another component to that "fail gracefully" idea, though, and it's a less honorable one. In a large organization, it can be to a person's advantage to make sure that everything's being done in the approved way, even if that leads off the cliff eventually. At least that way you can't be blamed, right? So you might not think that an inhibitor of Target X is such a great idea, but the committee that proposes new targets does, so you keep your head down. And you may wonder about the way the SAR is being prosecuted, but the official criteria say that you have to have at least so much potency and at least so much selectivity, so you do what you have to to make the cutoffs. And on it goes. In the end, you deliver a putative clinical candidate that may not have much of a chance at all, but that's not your department, because all the boxes got checked. More to the point, all the boxes were widely seen to be checked. So if it fails, well, it's just one of those things. Everyone did everything right, everyone met the departmental goals: what else can you do?

This gets back to the post the other day on unlikely-looking drug structures. There are a lot of them; I'll put together a gallery soon. But I think it's important to look these things over, and to realize that every one of them is out there on the market. They're on the pharmacy shelves because someone had the nerve to take them into the clinic, because someone was willing to win with an ugly compound. Looking at them, I realize that I would have crossed off billions of dollars just because I didn't feel comfortable with these structures, which makes me wonder if I haven't been overvaluing my opinion in these matters. You can't get a drug on the market without offending someone, and it may be you.

Comments (36) + TrackBacks (0) | Category: Drug Development | Life in the Drug Labs

March 27, 2012

Virtual Biotech, Like It or Not

Email This Entry

Posted by Derek

We've all been hearing for a while about "virtual biotechs". The term usually refers to a company with only a handful of employees and no real laboratory space of its own. All the work is contracted out. That means that what's left back at the tiny headquarters (which in a couple of cases is as small as one person's spare bedroom) is the IP. What else could it be? There's hardly any physical property at all. It's as pure a split as you can get between intellectual property (ideas, skills, actual patents) and everything else. Here's a 2010 look at the field in San Diego, and here's a more recent look from Xconomy. (I last wrote about the topic here).

Obviously, this gets easier to do earlier in the whole drug development process, where less money is involved. That said, there are difficulties at both ends. A large number of these stories seem to involve people who were at a larger company when it ran out of money, but still had some projects worth looking at. The rest of the cases seem to come out of academia. In other words, the ideas themselves (the key part of the whole business) were generated somewhere with more infrastructure and funding. Trying to get one of these off the ground otherwise would be a real bootstrapping problem.

And at the other end of the process, getting something all the way through the clinic like this also seems unlikely. The usual end point is licensing out to someone with more resources, as this piece from Xconomy makes clear:

In the meantime, one biotech model gaining traction is the single asset, infrastructure-lite, development model, which deploys modest amounts of capital to develop a single compound to an early clinical data package which can be partnered with pharma. The asset resides within an LLC, and following the license transaction, the LLC is wound down and distributes the upfront, milestone and royalty payments to the LLC members on a pro rata basis. The key to success in this model is choosing the appropriate asset/indication – one where it is possible to get to a clinical data package on limited capital. This approach excludes many molecules and indications often favored by biotech, and tends to drive towards clinical studies using biomarkers – directly in line with one of pharma’s favored strategies.

This is a much different model, of course, than the "We're going to have an IPO and become our own drug company!" one. But the chances of that happening have been dwindling over the years, and the current funding environment makes it harder than ever, Verastem aside. It's even a rough environment to get acquired in. So licensing is the more common path, and (as this FierceBiotech story says), that's bound to have an effect on the composition of the industry. People aren't holding on to assets for as long as they used to, and they're trying to get by with as little of their own money as they can. Will we end up with a "field of fireflies" model, with dozens, hundreds of tiny companies flickering on and off? What will the business look like after another ten years of this - better, or worse?

Comments (25) + TrackBacks (0) | Category: Business and Markets | Chemical News | Drug Development | Drug Industry History

March 26, 2012

What's the Ugliest Drug? Or The Ugliest Drug Candidate?

Email This Entry

Posted by Derek

I was having one of those "drug-like properties" discussions with colleagues the other day. Admittedly, if you're not in drug discovery yourself, you probably don't have that one very often, but even for us, you'd think that a lot of the issues would be pretty settled by now. Not so.

While everyone broadly agrees that compounds shouldn't be too large or too greasy, where one draws the line is always up for debate. And the arguments get especially fraught in the earlier stages of a project, when you're still deciding on what chemical series to work on. One point of view (the one I subscribe to) says that almost every time, the medicinal chemistry process is going to make your compound larger and greasier, so you'd better start on the smaller and leaner side to give everyone room to work in. But sometimes, Potency Rules, at least for some people and in some organizations, and there's a lead which might be stretching some definitions but is just too active to ignore. (That way, in my experience, lies heartbreak, but there are people who've made successes out of it).

We've argued these same questions here before, more than once. What I'm wondering today is, what's the least drug-like drug that's made it? It's dangerous to ask that question, in a way, because it gives some people what they see as a free pass to pursue ugly chemical matter - after all, Drug Z made it, so why not this one? (That, to my mind, ignores the ars longa, vita brevis aspect: since there's an extra one-in-a-thousand factor with some compounds, given the long odds already, why would you make them even longer?)

But I think it's still worth asking the question, if we can think of what extenuating circumstances made some of these drugs successful. "Sure, your molecular weight isn't as high as Drug Z, which is on the market, but do you have Drug Z's active transport/distribution profile/PK numbers in mice? If not, just why do you think you're going to be so lucky?"

Antibiotics are surely going to make up some of the top ten candidates - some of those structures are just bizarre. There's a fairly recent oncology drug that I think deserves a mention for its structure, too. Anyone have a weirder example of a marketed drug?

What's still making its way through the clinic can be even stranger-looking. Some of the odder candidates I've seen recently have been for the hepatitis C proteins NS5A and NS5B. Bristol-Myers Squibb has disclosed some eye-openers, such as BMS-790052. (To be fair, that target seems to really like chemical matter like this, and the compound, last I heard, was moving along through the clinic.)

And yesterday, as Carmen Drahl reported from the ACS meeting in San Diego, the company disclosed the structure of BMS-791325, a compound targeting NS5B. That's a pretty big one, too - the series it came from started out reasonably, then became not particularly small, and now seems to have really bulked up, and for the usual reasons - potency and selectivity. But overall, it's a clear example of the sort of "compound bloat" that overtakes projects as they move on.

So, nominations are open for three categories: Ugliest Marketed Drug, Ugliest Current Clinical Candidate, and Ugliest Failed Clinical Candidate. Let's see how bad it gets!

Comments (58) + TrackBacks (0) | Category: Drug Development | Drug Industry History

March 19, 2012

Dealing with the Data

Email This Entry

Posted by Derek

So how do we deal with the piles of data? A reader sent along this question, and it's worth thinking about. Drug research - even the preclinical kind - generates an awful lot of information. The other day, it was pointed out that one of our projects, if you expanded everything out, would be displayed on a spreadsheet with compounds running down the left, and over two hundred columns stretching across the page. Not all of those are populated for every compound, by any means, especially the newer ones. But compounds that stay in the screening collection tend to accumulate a lot of data with time, and there are hundreds of thousands (or millions) of compounds in a good-sized screening collection. How do we keep track of it all?

Most larger companies have some sort of proprietary software for the job (or jobs). The idea is that you can enter a structure (or substructure) of a compound and find out the project it was made for, every assay that's been run on it, all its spectral data and physical properties (experimental and calculated), every batch that's been made or bought (and from whom and from where, with notebook and catalog references), and the bar code of every vial or bottle of it that's running around the labs. You obviously don't want all of those every time, so you need to be able to define your queries over a wide range, setting a few common ones as defaults and customizing them for individual projects while they're running.

Displaying all this data isn't trivial, either. The good old fashioned spreadsheet is perfectly useful, but you're going to need the ability to plot and chart in all sorts of ways to actually see what's going on in a big project. How does human microsomal stability relate to the logP of the right-hand side chain in the pyrimidinyl-series compounds with molecular weight under 425? And how do those numbers compare to the dog microsomes? And how do either of those compare to the blood levels in the whole animal, keeping in mind that you've been using two different dosing vehicles along the way? To visualize these kinds of questions - perfectly reasonable ones, let me tell you - you'll need all the help you can get.
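
For what it's worth, the querying-and-plotting part of that example is the easy bit once everything lives in one table. Here's a minimal sketch against a toy table standing in for the real export - every column name and number below is invented for illustration; the hard part is getting two hundred columns populated and trustworthy in the first place:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Toy stand-in for the real project table: one row per compound, many assay columns.
df = pd.DataFrame({
    "series":                 ["pyrimidinyl", "pyrimidinyl", "pyrimidinyl", "oxazole"],
    "mol_weight":             [398, 412, 441, 380],
    "side_chain_logP":        [1.2, 2.4, 3.1, 2.0],
    "human_microsome_t_half": [42, 25, 11, 30],   # minutes
    "dog_microsome_t_half":   [55, 31, 18, 44],
})

# The question from above: pyrimidinyl series, molecular weight under 425.
subset = df[(df["series"] == "pyrimidinyl") & (df["mol_weight"] < 425)]

fig, axes = plt.subplots(1, 2, sharey=True, figsize=(9, 4))
for ax, col in zip(axes, ["human_microsome_t_half", "dog_microsome_t_half"]):
    ax.scatter(subset["side_chain_logP"], subset[col])
    ax.set_xlabel("right-hand side chain logP")
    ax.set_title(col)
axes[0].set_ylabel("microsomal t1/2 (min)")
plt.tight_layout()
plt.show()
```

That answers the first two questions in a few lines; the dosing-vehicle caveat on the in vivo numbers is exactly the sort of thing that never makes it into a flat file, which is why the in-house systems earn their keep.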

You run into the problem of any large, multifunctional program, though: if it can do everything, it may not do any one thing very well. Or there may be a way to do whatever you want, if only you can memorize the magic spell that will make it happen. If it's one of those programs that you have to use constantly or run the risk of totally forgetting how it goes, there will be trouble.

So what's been the experience out there? In-house home-built software? Adaptations of commercial packages? How does a smaller company afford to do what it needs to do? Comments welcome. . .

Comments (66) + TrackBacks (0) | Category: Drug Assays | Drug Development | Life in the Drug Labs

March 16, 2012

Merck's CALIBR Venture

Email This Entry

Posted by Derek

So the news is that Merck is now going to start its own nonprofit drug research institute in San Diego: CALIBR, the California Institute for Biomedical Research. It'll be run by Peter Schultz of Scripps, and they're planning to hire about 150 scientists (which is good news, anyway, since the biomedical employment picture out in the San Diego area has been grim).

Unlike the Centers for Therapeutic Innovation that Pfizer, a pharmaceutical company based in New York, has established in collaboration with specific academic medical centres around the country, Calibr will not be associated with any particular institution. (Schultz, however, will remain at Scripps.) Instead, academics from around the world can submit research proposals, which will then be reviewed by a scientific advisory board, says Kim. The institute itself will be overseen by a board of directors that includes venture capitalists. Calibr will not have a specific therapeutic focus.

Merck, meanwhile, will have the option of an exclusive licence on any proteins or small-molecule therapeutics to emerge. . .

They're putting up $90 million over the next 7 years, which isn't a huge amount. It's not clear if they have any other sources of funding - they say that they'll "access" such, but I have to wonder, since that would presumably complicate the IP for Merck. It's also not clear what they'll be working on out there; the press release is, well, a press release. The general thrust is translational research, a roomy category, and they'll be taking proposals from academic labs who would like to use their facilities and expertise.

So is this mainly a way for Merck to do more academic collaborations without the possible complications (for universities) of dealing directly with a drug company? Will it preferentially take on high-risk, high-reward projects? There's too little to go on yet. Worth watching with interest as it gets going - and if any readers find themselves interviewing there, please report back!

Comments (47) + TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | Drug Development

March 15, 2012

Not Quite So Accelerated, Says PhRMA

Email This Entry

Posted by Derek

We've spent a lot of time here talking about provisional approval of drugs, most specifically Avastin (when its approval for metastatic breast cancer was pulled). But the idea isn't to put drugs on the market that have to be taken back; it's to get them out more quickly in case they actually work.

There's legislation (the TREAT Act) that is attempting to extend the range of provisional approvals. But according to this column by Avik Roy, an earlier version of the bill went much further: it would have authorized new approval pathways for the first drugs to treat specific subpopulations of an existing disease, nonresponders to existing therapies, compounds with demonstrable improvements in safety or efficacy, or (in general) compounds that "otherwise satisfy an unmet medical need". As with the existing accelerated approval process, drugs under these categories could (after negotiation with the FDA) be provisionally marketed after Phase II results, if those were convincing enough, with possible revocation after Phase III results came in.

Unlike the various proposals to put compounds on the market after Phase I (which I fear would be an invitation to game the system), this one strikes me as aggressive but sensible. It would, ideally, encourage companies to run more robust Phase II trials in the hopes of going straight to the market, and it would allow really outstanding drugs a chance to start earning back their R&D costs much earlier. As long as everyone understood that Phase III trials are no slam dunk any more (if they ever were), and that some of these drugs would turn out not to be as good as they looked, I think that on balance, everyone would come out ahead.

According to Roy, this version of the bill had (as you'd expect) attracted strong backers and strong opponents. On the "pro" side was BIO, the biotech industry group, which is no surprise. On the "anti" side, the FDA itself wasn't ready for this big a change, which isn't much of a shock, either. (To be fair to them, this would increase their workload substantially - you'd really want to couple a reform like this with more people on their end). And there were advocacy groups that worried that this new regulatory regime would water down drug safety requirements too much. The article doesn't name any groups, but anyone who's observed the industry can fill in some likely names.

But there was another big group opposing the change: PhRMA. Yes, the trade organization for the large drug companies. Opinions vary as to the reason. The official explanations are that they, too, were concerned for patient safety, and they wanted the PDUFA legislation renewed as is, without these extra provisions (a "bird in the hand" argument). But Roy's piece advances a less charitable thesis:

Sen. Hagan’s proposal would have been devastating to the big pharma R&D oligopoly. If small biotech companies could get their drugs tentatively approved after inexpensive phase II studies, they would have far less need to partner those drugs with big pharma. They could keep the upside themselves and attract far more interest from investors. Big pharma, on the other hand, would be without its largest source for innovative new medicines: the small biotech farm team.

I'd like to be able to doubt this reasoning more than I do. . .

Comments (19) + TrackBacks (0) | Category: Drug Development | Regulatory Affairs

March 14, 2012

The Blackian Demon of Drug Discovery

Email This Entry

Posted by Derek

There's an on-line appendix to that Nature Reviews Drug Discovery article that I've been writing about, and I don't think that many people have read it yet. Jack Scannell, one of the authors, sent along a note about it, and he's interested to see what the readership here makes of it.

It gets to the point that came up in the comments to this post, about the order that you do your screening assays in (see #55 and #56). Do you run everything through a binding assay first, or do you run things through a phenotypic assay first and then try to figure out how they bind? More generally, with either sort of assay, is it better to do a large random screen first off, or is it better to do iterative rounds of SAR from a smaller data set? (I'm distinguishing those two because phenotypic assays provide very different sorts of data density than do focused binding assays).

Statistically, there's actually a pretty big difference there. I'll quote from the appendix:

Imagine that you know all of the 600,000 or so words in the English language and that you are asked to guess an English word written in a sealed envelope. You are offered two search strategies. The first is the familiar ‘20 questions’ game. You can ask a series of questions. You are provided with a "yes" or "no" answer to each, and you win if you guess the word in the envelope having asked 20 questions or fewer. The second strategy is a brute force method. You get 20,000 guesses, but you only get a "yes" or "no" once you have made all 20,000 guesses. So which is more likely to succeed, 20 questions or 20,000 guesses?

A skilled player should usually succeed with 20 questions (since 600,000 is less than 2^20) but would fail nearly 97% of the time with "only" 20,000 guesses.

Our view is that the old iterative method of drug discovery was more like 20 questions, while HTS of a static compound library is more like 20,000 guesses. With the iterative approach, the characteristics of each molecule could be measured on several dimensions (for example, potency, toxicity, ADME). This led to multidimensional structure–activity relationships, which in turn meant that each new generation of candidates tended to be better than the previous generation. In conventional HTS, on the other hand, search is focused on a small and pre-defined part of chemical space, with potency alone as the dominant factor for molecular selection.
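
The arithmetic behind those numbers is worth a quick check - a back-of-the-envelope sketch using the figures from the quote:

```python
import math

words = 600_000          # size of the search space in the thought experiment
blind_guesses = 20_000   # guesses made with no feedback until the very end

# The answer carries about 19.2 bits of information, so 20 well-chosen
# yes/no questions are (just) enough to pin down any single word.
print(math.log2(words))          # ~19.19
print(2**20 >= words)            # True: 1,048,576 > 600,000

# 20,000 distinct blind guesses cover only a sliver of the space.
print(f"failure rate: {1 - blind_guesses / words:.1%}")   # ~96.7%
```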

Aha, you say, but the game of twenty questions is equivalent to running perfect experiments each time: "Is the word a noun? Does it have more than five letters?" and so on. Each question carves up the 600,000 word set flawlessly and iteratively, and you never have to backtrack. Good experimental design aspires to that, but it's a hard standard to reach. Too often, we get answers that would correspond to "Well, it can be used like a noun on Tuesdays, but if it's more than five letters, then that switches to Wednesday, unless it starts with a vowel".

The authors try to address this multi-dimensionality with a thought experiment. Imagine chemical SAR space - huge number of points, large number of parameters needed to describe each point.

Imagine we have two search strategies to find the single best molecule in this space. One is a brute force search, which assays a molecule and then simply steps to the next molecule, and so exhaustively searches the entire space. We call this "super-HTS". The other, which we call the “Blackian demon” (in reference to the “Darwinian demon”, which is used sometimes to reflect ideal performance in evolutionary thought experiments, and in tribute to James Black, often acknowledged as one of the most successful drug discoverers), is equivalent to an omniscient drug designer who can assay a molecule, and then make a single chemical modification to step it one position through chemical space, and who can then assay the new molecule, modify it again, and so on. The Blackian demon can make only one step at a time, to a nearest neighbour molecule, but it always steps in the right direction; towards the best molecule in the space. . .

The number of steps for the Blackian demon follows from simple geometry. If you have a d dimensional space with n nodes in the space, and – for simplicity – these are arranged in a neat line, square, cube, or hypercube, you can traverse the entire space, from corner to corner with d x (n^(1/d)-1) steps. This is because each edge is n^(1/d) nodes in length, and there are d dimensions. . .When the search space is high dimensional (as is chemical space) and there is a very large number of nodes (as is the case for drug-like molecules), the Blackian demon is many orders of magnitude more efficient than super-HTS. For example, in a 10 dimensional space with 10^40 molecules, the Blackian demon can search the entire space in 10^5 steps (or less), while the brute force method requires 10^40 steps.
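
The scaling in that last sentence is also easy to verify - another quick sketch, just plugging the quote's numbers into its own formula:

```python
def demon_steps(n_molecules: float, dimensions: int) -> float:
    """Corner-to-corner walk on a d-dimensional grid: d * (n**(1/d) - 1)."""
    nodes_per_edge = n_molecules ** (1.0 / dimensions)
    return dimensions * (nodes_per_edge - 1)

n, d = 1e40, 10
print(f"Blackian demon: about {demon_steps(n, d):.0e} steps")   # ~1e+05
print(f"super-HTS:      about {n:.0e} assays")                  # 1e+40
```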

These are idealized cases, needless to say. One problem is that none of us are exactly Blackian demons - what if you don't always make the right step to the next molecule? What if your iteration only gives one out of ten molecules that get better, or one out of a hundred? I'd be interested to see how that affects the mathematical argument.

And there's another conceptual problem: for many points in chemical space, the numbers are even much more sparse. One assumption with this thought experiment (correct me if I'm wrong) is that there actually is a better node to move to each time. But for any drug target, there are huge regions of flat, dead, inactive, un-assayable chemical space. If you started off in one of those, you could iterate until your hair fell out and never get out of the hole. And that leads to another objection to the ground rules of this exercise: no one tries to optimize by random HTS. It's only used to get starting points for medicinal chemists to work on, to make sure that they're not starting in one of those "dead zones". Thoughts?

Comments (45) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

March 12, 2012

The Brute Force Bias

Email This Entry

Posted by Derek

I wanted to return to that Nature Reviews Drug Discovery article I blogged about the other day. There's one reason the authors advance for our problems that I thought was particularly well stated: what they call the "basic research/brute force" bias.

The ‘basic research–brute force’ bias is the tendency to overestimate the ability of advances in basic research (particularly in molecular biology) and brute force screening methods (embodied in the first few steps of the standard discovery and preclinical research process) to increase the probability that a molecule will be safe and effective in clinical trials. We suspect that this has been the intellectual basis for a move away from older and perhaps more productive methods for identifying drug candidates. . .

I think that this is definitely a problem, and it's a habit of thinking that almost everyone in the drug research business has, to some extent. The evidence that there's something lacking has been piling up. As the authors say, given all the advances over the past thirty years or so, we really should have seen more of an effect in the signal/noise of clinical trials: we should have had higher success rates in Phase II and Phase III as we understood more about what was going on. But that hasn't happened.

So how can some parts of a process improve dramatically, yet important measures of overall performance remain flat or decline? There are several possible explanations, but it seems reasonable to wonder whether companies industrialized the wrong set of activities. At first sight, R&D was more efficient several decades ago, when many research activities that are today regarded as critical (for example, the derivation of genomics-based drug targets and HTS) had not been invented, and when other activities (for example, clinical science, animal-based screens and iterative medicinal chemistry) dominated.

This gets us back to a topic that's come up around here several times: whether the entire target-based molecular-biology-driven style of drug discovery (which has been the norm since roughly the early 1980s) has been a dead end. Personally, I tend to think of it in terms of hubris and nemesis. We convinced ourselves that we were smarter than we really were.

The NRDD piece has several reasons for this development, which also ring true. Even in the 1980s, there were fears that the pace of drug discovery was slowing, and a new approach was welcome. A second reason is a really huge one: biology itself has been on a reductionist binge for a long time now. And why not? The entire idea of molecular biology has been incredibly fruitful. But we may be asking more of it than it can deliver.

. . .the ‘basic research–brute force’ bias matched the scientific zeitgeist, particularly as the older approaches for early-stage drug R&D seemed to be yielding less. What might be called 'molecular reductionism' has become the dominant stream in biology in general, and not just in the drug industry. "Since the 1970s, nearly all avenues of biomedical research have led to the gene". Genetics and molecular biology are seen as providing the 'best' and most fundamental ways of understanding biological systems, and subsequently intervening in them. The intellectual challenges of reductionism and its necessary synthesis (the '-omics') appear to be more attractive to many biomedical scientists than the messy empiricism of the older approaches.

And a final reason for this mode of research taking over - and it's another big one - is that it matched the worldview of many managers and investors. This all looked like putting R&D on a more scientific, more industrial, and more manageable footing. Why wouldn't managers be attracted to something that looked like it valued their skills? And why wouldn't investors be attracted to something that looked as if it could deliver more predictable success and more consistent earnings? R&D will give you gray hairs; anything that looks like taming it will find an audience.

And that's how we find ourselves here:

. . .much of the pharmaceutical industry's R&D is now based on the idea that high-affinity binding to a single biological target linked to a disease will lead to medical benefit in humans. However, if the causal link between single targets and disease states is weaker than commonly thought, or if drugs rarely act on a single target, one can understand why the molecules that have been delivered by this research strategy into clinical development may not necessarily be more likely to succeed than those in earlier periods.

That first sentence is a bit terrifying. You read it, and part of you thinks "Well, yeah, of course", because that is such a fundamental assumption of almost all our work. But what if it's wrong? Or just not right enough?

Comments (64) + TrackBacks (0) | Category: Drug Development | Drug Industry History

March 6, 2012

Drug Discovery for Physicists

Email This Entry

Posted by Derek

There's a good post over at the Curious Wavefunction on the differences between drug discovery and the more rigorous sciences. I particularly liked this line:

The goal of many physicists was, and still is, to find three laws that account for at least 99% of the universe. But the situation in drug discovery is more akin to the situation in finance described by the physicist-turned-financial modeler Emanuel Derman; we drug hunters would consider ourselves lucky to find 99 laws that describe 3% of the drug discovery universe.

That's one of the things that you get used to in this field, but when you step back, it's remarkable: so much of what we do remains relentlessly empirical. I don't just mean finding a hit in a screening assay. It goes all the way through the process, and the further you go, the more empirical it gets. Cell assays surprise you compared to enzyme preps, and animals are a totally different thing than cells. Human clinical trials are the ultimate in empirical data-gathering: there's no other way to see if a drug is truly safe (or effective) in humans other than giving it to a whole big group of humans. We do all sorts of assays to avoid getting to that stage, or to feel more confident when we're about to make it there, but there's no substitute for actually doing it.

There's a large point about reductionism to be made, too:

Part of the reason drug discovery can be challenging to physicists is because they are steeped in a culture of reductionism. Reductionism is the great legacy of twentieth-century physics, but while it worked spectacularly well for particle physics it doesn't quite work for drug design. A physicist may see the human body or even a protein-drug system as a complex machine whose workings we can completely understand once we break it down into its constituent parts. But the chemical and biological systems that drug discoverers deal with are classic examples of emergent phenomena. A network of proteins displays properties that are not obvious from the behavior of the individual proteins. . .

And there we have one of the big underlying issues that needs to be faced by the hardware engineers, software programmers, and others who come in asking why we can't be as productive as they are. There's not a lot of algorithmic compressibility in this business. Whether they know it or not, many other scientists and engineers are living in worlds where they're used to it being there when they need it. But you won't find much here.

Comments (22) + TrackBacks (0) | Category: Drug Assays | Drug Development

February 22, 2012

Scaling Up a Strange Dinitro Compound (And Others)

Email This Entry

Posted by Derek

I wrote here about a very unusual dinitro compound that's in the clinic in oncology. Now there's a synthetic chemistry follow-up, in the form of a paper in Organic Process R&D.
[Image: structure of the gem-dinitroazetidine compound]

It's safe to say that most process and scale-up chemists are never going to have to worry about making a gem-dinitroazetidine - or, for that matter, a gem-dinitroanything. But the issues involved are the same ones that come up over and over again. See if this rings any bells:

Gram quantities of (3) for initial anticancer screening were originally prepared by an unoptimized approach that was not suitable for scale-up and failed to address specific hazards of the reaction intermediates and coproducts. The success of (3) in preclinical studies prompted the need for a safe, reliable, and scalable synthesis to provide larger supplies of the active pharmaceutical ingredient (API) for further investigation and eventual clinical trials.

Yep, it's when you need large, reliable batches of something that the inadequacies of your chemistry really stand out. The kinds of chemistry that people like me do, back in the discovery labs, often have to be junked. They're fine for making 100 mg of something to put in the archives - and tell me, when was the last time you put as much as 100 milligrams of a new compound into the archives? But there are usually plenty of weak points as you try to go to gram, then hundreds of grams, then kilos and up. Among them are:

(1) Exothermic chemistry. Excess heat is easy to shed from a 25-mL round-bottom flask. Heat is not so easily lost from larger vessels, though, and the number of chemists who have had to discover this the hard way is beyond counting. The world is very different when everything in the flask is no longer just 1 cm away from a cold glass wall.

(2) Stirring. This can be a pain even on the small scale, so imagine what a headache it is by the kilo. Gooey precipitates, thick milkshake-like reactions, lumps of crud - what's inconvenient when small can turn into a disaster later on, because poor stirring leads to localized heating (see above), incomplete reactions, side products, and more.

(3) Purification. Just run it down a column? Not so fast, chief. Where, exactly, do you find the columns to run kilos of material across? And the pumps to force the stuff through? And the wherewithal to dispose of all that solid-phase stuff once you've turned it all those colors and it can't be used again? And the time and money to evaporate all that solvent that you're using? No, the scale-up people will go a long way to avoid chromatography. Precipitations and crystallizations are the way to go, if at all possible.

(4) Reproducibility. All of these factors influence this part. One of the most important things about a good chemical process is that it works the same flippin' way every single time. As has been said before around here, a route that generates 97% yield most of the time, but with an occasional mysterious 20% flop, is useless. Worse than useless. Squeezing the mystery out of the synthesis is the whole point of process chemistry: you want to know what the side products are, why they form, and how to control every variable.

(5) Oh yeah: cost. Cost-of-goods is rarely a deal-breaker in drug research, but that's partly because people are paying attention to it. In the med-chem labs, we think nothing of using exotic reagents that the single commercial supplier marks up to the sky. That will not fly on scale. Cutting out three steps with a reagent that isn't obtainable in quantity doesn't help the scale-up people one bit. (The good news is that some of these things turn out to be available when someone really wants them - the free market in action).

There are other factors, but those are some of the main ones. It's a different world, and it involves thinking about things that a discovery chemist just never thinks about. (Does your product tend to create a fine dust on handling? The sort that might fill a room and explode with static electricity sparks? Can your reaction mixture be pumped through a pipe as a slurry, or not? And so on.) It looks as if the dinitro compound has made it through this gauntlet successfully, but every day, there's someone at some drug company worrying about the next candidate.

Comments (19) + TrackBacks (0) | Category: Drug Development | Life in the Drug Labs

February 10, 2012

The Terrifying Cost of a New Drug

Email This Entry

Posted by Derek

Matthew Herper at Forbes has a very interesting column, building on some data from Bernard Munos (whose work on drug development will be familiar to readers of this blog). What he and his colleague Scott DeCarlo have done is conceptually simple: they've gone back over the last 15 years of financial statements from a bunch of major drug companies, and they've looked at how many drugs each company has gotten approved.

Over that long a span, things should even out a bit. There will be some spending that won't show up in the count - money spent, before the window, on drugs that got approved during the earlier part of that span - but (on the back end) there's spending in there on drugs that haven't made it to market yet, too. What do the numbers look like? Hideous. Appalling. Unsustainable.

AstraZeneca, for example, got 5 drugs on the market during this time span, the worst performance on this list, and thus spent nearly $12 billion per drug. No wonder they're in the shape they're in. GSK, Sanofi, Roche, and Pfizer all spent in the range of $8 billion per approved drug. Amgen did things the cheapest by this measure, with 9 drugs approved at about $3.7 billion per drug.

Now, there are several things to keep in mind about these numbers. First - and I know that I'm going to hear about this from some people - you might assume that different companies are putting different things under the banner of R&D for accounting purposes. But there's a limit to how much of that you can do. Remember, there's a separate sales and marketing budget, too, of course, and people never get tired of pointing out that it's even larger than the R&D one. So how inflated can these figures be? Second, how can these numbers jibe with the 800-million-per-new-drug (recently revised to $1 billion), much less with the $43 million per new drug figure (from Light and Warburton) that was making the rounds a few months ago?

Well, I tried to dispose of that last figure at the time. It's nonsense, and if it were true, people would be lining up to start drug companies (and other people would be throwing money at them to help). Meanwhile, the drug companies that already exist wouldn't be frantically firing thousands of people and selling their lab equipment at auction. Which they are. But what about that other estimate, the Tufts/diMasi one? What's the difference?

As Herper rightly says, the biggest factor is failure. The Tufts estimate is for the costs racked up by one drug making it through. But looking at the whole R&D spend, you can see how money is being spent for all the stuff that doesn't get through. And as I and many of the other readers of this blog can testify, there's an awful lot of it. I'm now in my 23rd year of working in this industry, and nothing I've touched has ever made it to market yet. If someone wins $500 from a dollar slot machine, the proper way to figure the costs is to see how many dollars, total, they had to pump into the thing before they won - not just to figure that they spent $1 to win. (Unless, of course, they just sat down, and in this business we don't exactly have that option).

No, these figures really show you why the drug business is in the shape it's in. Look at those numbers, and look at how much a successful drug brings in, and you can see that these things don't always do a very good job of adding up. That's with the expenses doing nothing but rising, and the success rate for drug discovery going in the other direction, too. No one should be surprised that drug prices are rising under these conditions. The surprise is that there are still people out there trying to discover drugs.

Comments (62) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices

January 26, 2012

Putting a Number on Chemical Beauty

Email This Entry

Posted by Derek

There's a new paper out in Nature Chemistry called "Quantifying the Chemical Beauty of Drugs". The authors are proposing a new "desirability score" for chemical structures in drug discovery, one that's an amalgam of physical and structural scores. To their credit, they didn't decide up front which of these things should be the most important. Rather, they took eight properties over 770 well-known oral drugs, and set about figuring how much to weight each of them. (This was done, for the info-geeks among the crowd, by calculating the Shannon entropy for each possibility to maximize the information contained in the final model). Interestingly, this approach tended to give zero weight to the number of hydrogen-bond acceptors and to the polar surface area, which suggests that those two measurements are already subsumed in the other factors.
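
The general shape of such a score is easy enough to sketch out, even though the real model fits its desirability curves and weights to that 770-drug set. Here's a bare-bones illustration - the property ranges, weights, and test values below are placeholders of my own, not the authors' parameters:

import math

# Schematic of a QED-style desirability score: map each property onto a value
# between 0 and 1, then take a weighted geometric mean. Every number below
# (ranges, weights, property values) is an illustrative placeholder.
def desirability(value, low, high):
    """Crude trapezoid: 1.0 inside [low, high], decaying outside."""
    if low <= value <= high:
        return 1.0
    dist = min(abs(value - low), abs(value - high))
    return max(0.01, math.exp(-dist / (high - low)))

# (property value, preferred range, weight)
properties = [
    (550.0, (200, 500), 0.66),  # molecular weight (a bit high)
    (2.5,   (-1, 4),    0.40),  # cLogP
    (2,     (0, 5),     0.17),  # H-bond donors
    (2,     (0, 3),     0.10),  # aromatic rings, say
]

total_weight = sum(w for _, _, w in properties)
weighted_log = sum(w * math.log(desirability(v, lo, hi)) for v, (lo, hi), w in properties)
score = math.exp(weighted_log / total_weight)
print(f"QED-like score: {score:.2f}")  # 1 = ideal, 0 = thoroughly undesirable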

And that's all fine, but what does the result give us? Or, more accurately, what does it give us that we haven't had before? After all, there have been a number of such compound-rating schemes proposed before (and the authors, again to their credit, compare their new proposal with the others head-to-head). But I don't see any great advantage. The Lipinski "Rule of 5" is a pretty simple metric - too simple for many tastes - and what this gives you is a Rule of 5 with both categories smeared out towards each other to give some continuous overlap. (See the figure below, which is taken from the paper). That's certainly more in line with the real world, but in that real world, will people be willing to make decisions based on this method, or not?
[Figure from the paper: distributions of the desirability score, showing the continuous overlap between the "attractive" and "unattractive" compound sets.]
The authors go for a bigger splash with the title of the paper, which refers to an experiment they tried. They had chemists across AstraZeneca's organization assess some 17,000 compounds (200 or so for each) with a "Yes/No" answer to "Would you undertake chemistry on this compound if it were a hit?" Only about 30% of the list got a "Yes" vote, and the reasons for rejecting the others were mostly "Too complex", followed closely by "Too simple". (That last one really makes me wonder - doesn't AZ have a big fragment-based drug design effort?) Note also that this sort of experiment has been done before.

Applying their model, the mean score for the "Yes" compounds was 0.67 (s.d. 0.16), and the mean score for the "No" compounds was 0.49 (s.d. 0.23), a difference which they say was statistically significant, although that must have been a close call. Overall, I wouldn't say that this test has an especially strong correlation with medicinal chemists' ideas of structural attractiveness, but then, I'm not so sure of the usefulness of those ideas to start with. I think that the two ends of the scale are hard to argue with, but there's a great mass of compounds in the middle that people decide that they like or don't like, without being able to back up those statements with much data. (I'm as guilty as anyone here).
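
If you want to put a rough number on how much those two distributions overlap, you can do it from the reported means and standard deviations. The group sizes below are my own assumption (about 30% "Yes" out of roughly 17,000 assessments), so take this as ballpark only:

import math

# Rough Cohen's d (standardized difference in means) from the reported group
# statistics. The group sizes are assumed, not taken from the paper.
mean_yes, sd_yes, n_yes = 0.67, 0.16, 5100
mean_no,  sd_no,  n_no  = 0.49, 0.23, 11900

pooled_sd = math.sqrt(((n_yes - 1) * sd_yes**2 + (n_no - 1) * sd_no**2)
                      / (n_yes + n_no - 2))
print(f"Cohen's d ~ {(mean_yes - mean_no) / pooled_sd:.2f}")  # ~0.85: a real but overlapping separation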

The last part of the paper tries to extend the model from hit compounds to the targets that they bind to - a druggability assessment. The authors looked through the ChEMBL database, and ranked the various targets by the scores of the ligands that are associated with them. They found that their mean ligand score for all the targets in there is 0.478. For the targets of approved drugs, it's 0.492, and for the orally active ones it's 0.539 - so there seems to be a trend, although if those differences reached statistical significance, it isn't stated in the paper.

So overall, I find nothing really wrong with this paper, but nothing spectacularly right with it, either. I'd be interested in hearing other calls on it as it gets out into the community. . .

Comments (22) + TrackBacks (0) | Category: Drug Development | Drug Industry History | In Silico | Life in the Drug Labs

January 18, 2012

Fun With Epigenetics

Email This Entry

Posted by Derek

If you've been looking around the literature over the last couple of years, you'll have seen an awful lot of excitement about epigenetic mechanisms. (Here's a whole book on that very subject, for the hard core). Just do a Google search with "epigenetic" and "drug discovery" in it, any combination you like, and then stand back. Articles, reviews, conferences, vendors, journals, startups - it's all there.

Epigenetics refers to the various paths - and there are a bunch of them - to modify gene expression downstream of just the plain ol' DNA sequence. A lot of these are, as you'd imagine, involved in the way that the DNA itself is wound (and unwound) for expression. So you see enzymes that add and remove various chemical switches on the histone proteins. You have histone acetyltransferases (HATs) and histone deacetylases (HDACs), methyltransferases and demethylases, and so on. Then there are bromodomains (the binding sites for those acetylated histones) and several other mechanisms, all of which add up to plenty o' drug targets.

Or do they? There are HDAC inhibitors out there in oncology, to be sure, and oncology is where a lot of these other mechanisms are being looked at most intensively. You've got a good chance of finding aberrant protein expression levels in cancer cells, you have a lot of unmet medical need, a lot of potential different patient populations, and a greater tolerance for side effects. All of that argues for cancer as a proving ground, although it's certainly not the last word. But in any therapeutic area, people are going to have to wrestle with a lot of other issues.

Just looking over the literature can make you both enthusiastic and wary. There's an awful lot of regulatory machinery in this area, and it's for sure that it isn't there for jollies. (You'd imagine that selection pressure would operate pretty ruthlessly at the level of gene expression). And there are, of course, an awful lot of different genes whose expression has to be regulated, at different levels, in different cell types, at different phases of their development, and in response to different environmental signals. We don't understand a whole heck of a lot of the details.

So I think that there will be epigenetic drugs coming out of this burst of effort, but I don't think that they're going to exactly be the most rationally designed things we've ever seen. That's fine - we'll take drug candidates where we can get them. But as for when we're actually going to understand all these gene regulation pathways, well. . .

Comments (15) + TrackBacks (0) | Category: Biological News | Cancer | Drug Development

January 16, 2012

Biogen: A "Decimated" Pipeline?

Email This Entry

Posted by Derek

You don't want coverage like this: "Biogen CEO Tries to Refill Early-Stage Pipeline He Decimated". That would be George Scangos:

. . .Scangos and his research chief eliminated about 17 early-stage drug projects in 2010 and last year to hone the company's focus, leaving it with only about four early-stage compounds. Biogen exited oncology and cardiovascular research and is now targeting drugs to treat neurological and autoimmune conditions. . .

"We didn't want to fund projects that were unlikely to generate value," Scangos said in an interview on the sidelines of the J.P. Morgan health-care conference in San Francisco this week. . .But even if Biogen's late-stage pipeline delivers successful new drugs soon, the company needs more compounds in early-stage testing to sustain long-term growth. So it is licensing drugs from other companies. . .

The article itself (from Peter Loftus, originally in the Wall Street Journal) isn't quite as harsh as the headline. As that excerpt shows, part of the problem is that Scangos thought that the company was in some therapeutic areas that they shouldn't have been in at all, so that pipeline he's refilling isn't exactly the same one he cleared out. (And a note to the WSJ headline writers: "decimated" isn't a synonym for "got rid of a lot", although that horse, I fear, left the barn a long time ago. The mental image of decimating a pipeline isn't the sharpest vision ever conjured up by a headline, either, but I understand that these things are done on deadline.)

No, if I had to pick the biggest expensive reversal done under Biogen's new management, I'd pick the construction site a few blocks from here where they're putting up the company's new Cambridge headquarters. Those are the offices that used to be in. . .well, Cambridge, until former CEO Jim Mullen moved them out to Weston just a couple of years ago. I don't know how long it's going to take them to finish those buildings (right now, they're just past the bare-ground stage), but maybe eventually they can all work there for a few months before someone else decides to move them to Northampton, Nashua, or Novosibirsk.

Comments (20) + TrackBacks (0) | Category: Drug Development

January 12, 2012

Welcome To the Jungle! Here's Your Panther.

Email This Entry

Posted by Derek

English has no word of its own for schadenfreude, so we've had to appropriate the German one, and we're in the process of making it our own - just as we did with "kindergarten", not to mention "ketchup" and "pyjamas", among fifty zillion more. That's because the emotion is not peculiar to German culture, oh no. We can feel shameful joy at others' discomfort with the best of them - like, for example, when people start to discover from experience just how hard drug discovery really is.

John LaMattina has an example over at Drug Truths. Noting the end of a research partnership between Eli Lilly and the Indian company Zydus Cadila, he picked up on this language:

“Developing a new drug from scratch is getting more expensive due to increased regulatory scrutiny and high costs of clinical trials. Lowering costs through a partnership with an Indian drug firm was one way of speeding up the process, but the success rate has not been very high.”

And that, as he correctly notes, is no slam on the Indian companies involved, just as it won't be one on the Chinese companies when they run into the same less-than-expected returns. No, the success rate has not been very high anywhere. Going to India and China might cut your costs a bit (although that window is slowly closing as we watch), but for early-stage research, the costs are not the important factor.

Everything we do in preclinical is a roundoff error compared to a big Phase III trial, as far as direct costs go. What we early-stage types specialize in, God help us, are opportunity costs, and those don't get reported on the quarterly earnings statements. There's no GAAP way to handle the cost of going for the wrong series of lead compounds on the way to the clinic, starting a program on the wrong target entirely, or not starting one instead on something that would have actually panned out. These are the big decisions in early stage research, and they're all judgment calls based on knowledge that is always incomplete. You will not find the answers to the questions just by going to Shanghai or Bangalore. The absolute best you can hope for is to spend a bit less money while searching for them, and thus shave some dollars off what is the smallest part of your R&D budget to start with. Sound like a good deal?

Relative to the other deals on offer, it might just be worthwhile. Such is the state of things, and such are the savings that people are willing to reach for. But when you're in the part of drug discovery that depends on feeling your way into unknown territory - the crucial part - you shouldn't expect any bargains.

Comments (18) + TrackBacks (0) | Category: Business and Markets | Drug Development

January 6, 2012

Do We Believe These Things, Or Not?

Email This Entry

Posted by Derek

Some of the discussions that come up here around clinical attrition rates and compound properties prompt me to see how much we can agree on. So, are these propositions controversial, or not?

1. Too many drugs fail in clinical trials. Given the expense involved, we are having a great deal of trouble carrying on at these failure rates.

2. A significant number of these failures are due to lack of efficacy - either none at all, or not enough.

2a. Fixing efficacy failures is hard, since it seems to require deeper knowledge, case-by-case, of disease mechanisms. As it stands, we get a significant amount of this knowledge from our drug failures themselves.

2b. Better target selection without such detailed knowledge is hard to come by. Good phenotypic assays are perhaps the only shortcut, but good phenotypic assays are not easy to develop and validate.

3. Outside of efficacy, a significant number of clinical failures are also due to side effects/toxicity. These two factors (efficacy and tox) account for the great majority of compounds that drop out of the clinic.

3a. Fixing tox/side effect failures through detailed knowledge is perhaps hardest of all, since there are a huge number of possible mechanisms. There are far more ways for things to go wrong than there are for them to work correctly.

3b. But there are broad correlations between molecular structures and properties and the likelihood of toxicity. While not infallible, these correlations are strong enough to be useful, and we should be grateful for anything we can get that might diminish the possibility of later failure.

Examples of such structural features are redox-active groups like nitros and quinones, which really are associated with trouble - not invariably, but enough to make you very cautious. More broadly, high logP values are also associated with trouble in development - not as strongly, but strongly enough to be worth considering.

So, is everyone pretty much in agreement with these things? What I'm saying is that if you take a hundred aryl nitro compounds into development, versus a hundred that don't have such a group, the latter cohort of compounds will surely have a higher success rate. And if you take a hundred compounds with logP values of 1 to 3 into development, these will have a higher success rate than a hundred compounds, against the same targets, with logP of 4 to 6. Do we believe this, or not?
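
For what it's worth, flags like these take only a few lines to compute before anyone makes anything. A minimal sketch, assuming the open-source RDKit toolkit is available - the cLogP cutoff and the test molecules are purely illustrative:

from rdkit import Chem
from rdkit.Chem import Descriptors

# Flag two of the risk factors above: a nitro group (redox-active) and a high
# calculated logP. The cutoff and the test SMILES are illustrative choices only.
NITRO = Chem.MolFromSmarts("[N+](=O)[O-]")
CLOGP_CUTOFF = 4.0

for smi in ["O=[N+]([O-])c1ccccc1", "CC(=O)Nc1ccc(O)cc1"]:  # nitrobenzene, acetaminophen
    mol = Chem.MolFromSmiles(smi)
    flags = []
    if mol.HasSubstructMatch(NITRO):
        flags.append("nitro group")
    clogp = Descriptors.MolLogP(mol)
    if clogp > CLOGP_CUTOFF:
        flags.append(f"cLogP {clogp:.1f}")
    print(smi, "->", flags if flags else "no flags")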

Comments (34) + TrackBacks (0) | Category: Drug Assays | Drug Development | Toxicology

January 5, 2012

Lead-Oriented Synthesis - What Might That Be?

Email This Entry

Posted by Derek

A new paper in Angewandte Chemie tries to open another front in relations between academic and drug industry chemists. It's from several authors at GSK-Stevenage, and it proposes something they're calling "Lead-Oriented Synthesis". So what's that?

Well, the paper itself starts out as a quick tutorial on the state and practice of medicinal chemistry. That's a good plan, since Angewandte Chemie is not primarily a med-chem journal (he said with a straight face). Actually, it has the opposite reputation, as a forum where high-end academic chemistry gets showcased. So the authors start off by reminding the readership what drug discovery entails. And although we've had plenty of discussions around here about these topics, I think that most people can agree on the main points laid out:

1. Physical properties influence a drug's behavior.
2. Among those properties, logP may well be the most important single descriptor.
3. Most successful drugs have logP values between 1 and perhaps 4 or 5. Pushing the lipophilicity end of things is, generally speaking, asking for trouble.
4. Since optimization of lead compounds almost always adds molecular weight, and very frequently adds lipophilicity, lead compounds are better found in (and past) the low ends of these property ranges, to reduce the risk of making an unwieldy final compound.

As the authors take pains to say, though, there are many successful drugs that fall outside these ranges. But many of those turn out to have some special features - antibacterial compounds (for example) tend to be more polar outliers, for reasons that are still being debated. There is, though, no similar class of successful drugs that are less polar than usual, to my knowledge. If you're starting a program against a target that you have no reason to think is an outlier, and assuming you want an oral drug for it, then your chances for success do seem to be higher within the known property ranges.

So, overall, the GSK folks maintain that lead compounds for drug discovery are most desirable with logP values between -1 and 3, molecular weights from around 200 to 350, and no problematic functional groups (redox-active and so on). And I have to agree; given the choice, that's where I'd like to start, too. So why are they telling all this to the readers of Angewandte Chemie? Because these aren't the sorts of compounds that academic chemists are interested in making.
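
To make those criteria concrete, here's roughly what a lead-likeness check looks like when you actually code it up - a sketch only, again assuming RDKit, and leaving out the structural-alert filters (which are GSK's own in any case):

from rdkit import Chem
from rdkit.Chem import Descriptors

def is_lead_like(mol):
    """Rough check against the ranges in the paper: molecular weight ~200-350
    and cLogP between -1 and 3. Problematic-group filters are omitted here."""
    return (200 <= Descriptors.MolWt(mol) <= 350
            and -1 <= Descriptors.MolLogP(mol) <= 3)

# Illustrative inputs only
for smi in ["c1ccc2[nH]ccc2c1", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]:  # indole, ibuprofen
    mol = Chem.MolFromSmiles(smi)
    print(smi, "lead-like?", is_lead_like(mol))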

For example, a survey of the 2009 issues of the Journal of Organic Chemistry found about 32,700 compounds indexed with the word "preparation" in Chemical Abstracts, after organometallics, isotopically labeled compounds, and commercially available ones were stripped out. 60% of those are outside the molecular weight criteria for lead-like compounds. Over half the remainder fail cLogP, and most of the remaining ones fail the internal GSK structural filters for problematic functional groups. Overall, only about 2% of the JOC compounds from that year would be called "lead-like". A similar analysis across seven other synthetic organic journals led to almost the same results.

Looking at array/library synthesis, as reported in the Journal of Combinatorial Chemistry and from inside GSK's own labs, the authors quantify something else that most chemists suspected: the more polar structures tend to drop out as the work goes on. This "cLogP drift" seems to be due to incompatible chemistries or difficulties in isolation and purification, and this could also illustrate why many new synthetic methods aren't applied in lead-like chemical space: they don't work as well there.

So that's what underlies the call for "lead-oriented synthesis". This paper is asking for the development of robust reactions which will work across a variety of structural types, will be tolerant of polar functionalities, and will generate compounds without such potentially problematic groups as Michael acceptors, nitros, and the like. That's not so easy, when you actually try to do it, and the hope is that it's enough of a challenge to attract people who are trying to develop new chemistry.

Just getting a high-profile paper of this sort out into the literature could help, because it's something to reference in (say) grant applications, to show that the proposed research is really filling a need. Academic chemists tend, broadly, to work on what will advance or maintain their positions and careers, and if coming up with new reactions of this kind can be seen as doing that, then people will step up and try it. And the converse applies, too, and how: if there's no perceived need for it, no one will bother. That's especially true when you're talking about making molecules that are smaller than the usual big-and-complex synthetic targets, and made via harder-than-it-looks chemistry.

Thoughts from the industrial end of things? I'd be happy to see more work like this being done, although I think it's going to take more than one paper like this to get it going. That said, the intersection with popular fragment-based drug design ideas, which are already having an effect in the purely academic world of diversity-oriented synthesis, might give an extra impetus to all this.

Comments (34) + TrackBacks (0) | Category: Chemical News | Drug Assays | Drug Development | The Scientific Literature

December 22, 2011

More From Hua - A Change of Business Plans?

Email This Entry

Posted by Derek

You may remember the mention of Hua Pharmaceuticals here back in August, and the follow-up with details from the company. They're trying to in-license drugs from other companies and get them approved as quickly as possible in China. The original C&E News article made them sound wildly ambitious, while the company's own information just made them sound very ambitious.

Now we have some more information: Roche has licensed their glucokinase activator program (for diabetes) to Hua (that's a development effort I wrote about here). And that's an interesting development, because the Hua folks told me that:

"Hua Medicine intends to in-license patented drugs from the US and EU, and get them on the market and commercialized in the 4 year timeframe in China. This is about the average time it takes imported drugs (drugs that are approved and marketed in the US or EU but are coming newly into the Chinese market) to get approved by the SFDA in China."

And that's fine, but Roche's glucokinase activators haven't been approved or marketed anywhere yet. In fact, I'm not at all sure the lead compound ever even made it to Phase III, so there's a lot of expensive work to be done yet, and on a groundbreaking mechanism, too. The only thing I can say is that approval in the US for diabetes drugs has gotten a lot harder over the years - the market is pretty well-served, for one thing, and the safety requirements (particularly cardiovascular) have gotten much more stringent. Perhaps these concerns are not so pressing in China, leading to an easier development path?

Easier or not, these compounds have a lot of time and money left to be put into them, which is not the sort of program that Hua seemed to be targeting before. One wonders if there just weren't any safer bets available. At any rate, good luck to them, and to their financial backers. Some will be needed; it always is.

Comments (8) + TrackBacks (0) | Category: Business and Markets | Diabetes and Obesity | Drug Development

December 13, 2011

The Sirtuin Saga

Email This Entry

Posted by Derek

Science has a long article detailing the problems that have developed over the last few years in the whole sirtuin story. That's a process that I've been following here as well (scrolling through this category archive will give you the tale), but this is a different, more personality-driven take. The mess is big enough to warrant a long look, that's for sure:

". . .The result is mass confusion over who's right and who's wrong, and a high-stakes effort to protect reputations, research money, and one of the premier theories in the biology of aging. It's also a story of science gone sour: Several principals have dug in their heels, declined to communicate, and bitterly derided one another. . ."

As the article shows, one of the problems is that many of the players in this drama came out of the same lab (Leonard Guarente's at MIT), so there are issues even beyond the usual ones. Mentioned near the end of the article is the part of the story that I've spent more time on here, the founding of Sirtris and its acquisition by GlaxoSmithKline. It's safe to say that the jury is still out on that one - from all that anyone can tell from outside, it could still work out as a big diabetes/metabolism/oncology success story, or it could turn out to have been a costly (and arguably preventable) mistake. There are a lot of very strongly held opinions on both sides.

Overall, since I've been following this field from the beginning, I find the whole thing a good example of how tough it is to make real progress in fundamental biology. Here you have something that is (or at the very least has appeared to be) very interesting and important, studied by some very hard-working and intelligent people all over the world for years now, with expenditure of huge amounts of time, effort, and money. And just look at it. The questions of what sirtuins do, how they do it, and whether they can be the basis of therapies for human disease - and which diseases - are all still the subject of heated argument. Layers upon layers of difficulty and complexity get peeled back, but the onion looks to be as big as it ever was.

I'm going to relate this to my post the other day about the engineer's approach to biology. This sort of tangle, which differs only in degree and not in kind from many others in the field, illustrates better than anything else how far away we are from formalism. Find some people who are eager to apply modern engineering techniques to medical research, and ask them to take a crack at the sirtuins. Or the nuclear receptors. Or autoimmune disease, or schizophrenia therapies. Turn 'em loose on one of those problems, come back in a year, and see what color their remaining hair is.

Comments (9) + TrackBacks (0) | Category: Aging and Lifespan | Drug Development | Drug Industry History

December 9, 2011

Drugs, Airplanes, and Radios

Email This Entry

Posted by Derek

Wavefunction has a good post in response to this article, which speculates "If we designed airplanes the way we design drugs. . ." I think the original article is worth reading, but some - perhaps many - of its points are arguable. For example:

Every drug that fails in a clinical trial or after it reaches the market due to some adverse effect was “bad” from the day it was first drawn by the chemist. State-of-the-art in silico structure–property prediction tools are not yet able to predict every possible toxicity for new molecular structures, but they are able to predict many of them with good enough accuracy to eliminate many poor molecules prior to synthesis. This process can be done on large chemical libraries in very little time. Why would anyone design, synthesize, and test molecules that are clearly problematic, when so many others are available that can also hit the target? It would be like aerospace companies making and testing every possible rocket motor design rather than running the simulations that would have told them ahead of time that disaster or failure to meet performance specifications was inevitable for most of them.

This particular argument mixes up several important points which should remain separate. Would these simulations have predicted those adverse-effect failures the author mentions? Can they do so now, ex post facto? That would be a very useful piece of information, but in its absence I can't help but wonder if the tools he's talking about would have cheerfully passed Vioxx, or torcetrapib, or the other big failures of recent years. Another question to ask is how many currently successful drugs these tox simulations would have killed off - any numbers there?

The whole essay recalls Lazebnik's famous paper "Can A Biologist Fix A Radio?" (PDF). This is an excellent place to start if you want to explore what I've called the Andy Grove Fallacy. Lazebnik's not having any of the reasons I give for it being a fallacy - for example:

A related argument is that engineering approaches are not applicable to cells because these little wonders are fundamentally different from objects studied by engineers. What is so special about cells is not usually specified, but it is implied that real biologists feel the difference. I consider this argument as a sign of what I call the urea syndrome because of the shock that the scientific community had two hundred years ago after learning that urea can be synthesized by a chemist from inorganic materials. It was assumed that organic chemicals could only be produced by a vital force present in living organisms. Perhaps, when we describe signal transduction pathways properly, we would realize that their similarity to the radio is not superficial. . .

That paper goes on to call for biology to come up with some sort of formal language and notation to describe biochemical systems, something that would facilitate learning and discovery in the same way as circuit diagrams and the like. And that's a really interesting proposal on several levels: would that help? Is it even possible? If so, where to even start? Engineers, like the two authors of the papers I've quoted from, tend to answer "Yes", "Certainly", and "Start anywhere, because it's got to be more useful than what you people have to work with now". But I'm still not convinced.

I've talked about my reasons for this before, but let me add another one: algorithmic complexity. Fields more closely based on physics can take advantage of what's been called "the unreasonable effectiveness" of mathematics. And mathematics, and the principles of physics that can be stated in that form, give an amazingly compact and efficient description of the physical world. Maxwell's equations are a perfect example: there's classical electromagnetism for you, wrapped up into a beautiful little sculpture.
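
For the record, here's the whole sculpture, in SI differential form:

\nabla \cdot \mathbf{E} = \rho / \varepsilon_0
\nabla \cdot \mathbf{B} = 0
\nabla \times \mathbf{E} = -\,\partial \mathbf{B} / \partial t
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \,\partial \mathbf{E} / \partial t

Four lines, and all of classical electromagnetism is in there. Try writing down a kinase signaling cascade with that economy.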

But biological systems are harder to reduce - much harder. There are so many nonlinear effects, so many crazy little things that can add up to so much more than you'd ever think. Here's an example - I've been writing about this problem for years now. It's very hard to imagine compressing these things into a formalism, at least not one that would be useful enough to save anyone time or effort.

That doesn't mean it isn't worth trying. Just the fact that I have trouble picturing something doesn't mean it can't exist, that's for sure. And I'd definitely like to be wrong about this one. But where to begin?

Comments (36) + TrackBacks (0) | Category: Drug Development | Drug Industry History

December 6, 2011

Riding to the Rescue of Rhodanines

Email This Entry

Posted by Derek

There's a new paper coming to the defense of rhodanines, a class of compound that has been described as "polluting the scientific literature". Industrial drug discovery people tend to look down on them, but they show up a lot, for sure.

This new paper starts off sounding like a call to arms for rhodanine fans, but when you actually read it, I don't think that there's much ground for disagreement. (That's a phenomenon that's worth writing about sometime by itself - the disconnects between title/abstract and actual body text that occur in the scientific literature). As I see it, the people with a low opinion of rhodanines are saying "Look out! These things hit in a lot of assays, and they're very hard to develop into drugs!". And this paper, when you read the whole thing, is saying something like "Don't throw away all the rhodanines yet! They hit a lot of things, but once in a while one of them can be developed into a drug!" The argument is between people who say that elephants are big and people who say that they have trunks.

The authors prepared a good-sized assortment of rhodanines and similar heterocycles (thiohydantoins, hydantoins, thiazolidinediones) and assayed them across several enzymes. Only the ones with double-bonded sulfur (rhodanines and thiohydantoins) showed a lot of cross-enzyme potency - that group has rather unusual electronic properties, which could be a lot of the story. Here's the conclusion, which is what makes me think that we're all talking about the same thing:

We therefore think that rhodanines and related scaffolds should not be regarded as problematic or promiscuous binders per se. However, it is important to note that the intermolecular interaction profile of these scaffolds makes them prone to bind to a large number of targets with weak or moderate affinity. It may be that the observed moderate affinities of rhodanines and related compounds, e.g. in screening campaigns, has been overinterpreted in the past, and that these compounds have too easily been put forward as lead compounds for further development. We suggest that particularly strong requirements, i.e. affinity in the lower nanomolar range and proven selectivity for the target, are applied in the further assessment of rhodanines and related compounds. A generalized "condemnation" of these chemotypes, however, appears inadequate and would deprive medicinal chemists from attractive building blocks that possess a remarkably high density of intermolecular interaction points.

That's it, right there: the tendency to bind off-target, as noted by these authors, is one of the main reasons that these compounds are regarded with suspicion in the drug industry. We know that we can't test for everything, so when you have one of these structures, you're always fearful of what else it can do once it gets into an animal (or a human). Those downstream factors - stability, pharmacokinetics, toxicity - aren't even addressed in this paper, which is all about screening hits. And that's another source of the bad reputation, for industry people: too many times, people who aren't so worried about those qualities have screened commercial compound collections, come up with rhodanines, and published them as potential drug leads, when (as this paper illustrates), you have to be careful even using them as tool compounds. Given a choice, we'd just rather work on something else. . .

Comments (7) + TrackBacks (0) | Category: Drug Assays | Drug Development | The Scientific Literature

November 18, 2011

Pushing Onwards with CETP: The Big Money and the Big Risks

Email This Entry

Posted by Derek

Remember torcetrapib? Pfizer always will. The late Phase III failure of that CETP inhibitor wiped out their chances for an even bigger HDL-raising follow-up to LDL-lowering Lipitor, the world's biggest drug, and changed the future of the company in ways that are still being played out.

But CETP inhibition still makes sense, biochemically. And the market for increasing HDL levels is just as huge as it ever was, since there's still no good way to do it. Merck is pressing ahead with anacetrapib, Roche with dalcetrapib, and Lilly is out with recent data on evacetrapib. All three companies have tried to learn as much as they could from Pfizer's disaster, and are keeping a close eye on the best guesses for why it happened (a small rise in blood pressure and changes in aldosterone levels). So far, so good - but that only takes you so far. Those toxicological changes are reasonable, but they're only hypotheses for why torcetrapib showed a higher death rate in the drug treatment group than it did in the controls. And even that only takes you up to the big questions.

Which are: will raising HDL really make a difference in cardiovascular morbidity and mortality? And if so, is inhibiting CETP the right way to do it? Human lipidology is not nearly as well worked out as some people might think it is, and these are both still very open questions. But such drugs, and such trials, are the only way that we're going to find out the answers. All three companies are risking hundreds of millions of dollars (in an area that's already had one catastrophe) in an effort to find out, and (to be sure) in the hope of making billions of dollars if they're correct.

Will anyone make it through? Will they fail for tox like Pfizer did, telling us that we don't understand CETP inhibitors? Or will they make it past that problem, but not help patients as much as expected, telling us that we don't understand CETP itself, or HDL? Or will all three work as hoped, and arrive in time to split up the market ferociously, making none of them as profitable as the companies might have wanted? If you want to see what big-time drug development is like, I can't think of a better field to illustrate it.

Comments (17) + TrackBacks (0) | Category: Cardiovascular Disease | Drug Development | Toxicology

October 31, 2011

"You Guys Don’t Do Innovation. The iPad. That’s Innovative"

Email This Entry

Posted by Derek

Thoughts from Matthew Herper at Forbes about Steve Jobs, modern medicine, what innovation means, and why it can be so hard in some fields. This is relevant to this post and its precursors.

Comments (41) + TrackBacks (0) | Category: Drug Development | Who Discovers and Why

October 26, 2011

Francis Collins Speaks

Email This Entry

Posted by Derek

With all the recent talk about the NIH's translational research efforts, and the controversy about their drug screening efforts, this seems like a good time to note this interview with Francis Collins over at BioCentury TV. (It's currently the lead video, but you'll be able to find it in their "Show Guide" afterwards as well).

Collins says that they're not trying to compete with the private sector, but taking a look at the drug development process "the way an engineer would", which takes me back to this morning's post re: Andy Grove. One thing he emphasizes is that he believes that the failure rate is too high because the wrong targets are being picked, and that target validation would be a good thing to improve.

He's also beating the drum for new targets to come out of more sequencing of human genomes, but that's something I'll reserve judgment on. The second clip has some discussion of the DARPA-backed toxicology chip and some questions on repurposing existing drugs. The third clip talks about the FDA's role in all this, and tries to clarify what NIH's role would be in outlicensing any discoveries. (Collins also admits along the way that the whole NCATS proposal has needed some clarifying as well, and doesn't sound happy with some of the press coverage).

Part 5 (part 4 is just a short wrap-up) discusses the current funding environment, and then moves into ethics and conflicts of interest - other people's conflicts, I should note. Worth a lunchtime look!

Comments (16) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | Drug Development

A Note to Andy Grove

Email This Entry

Posted by Derek

Readers will recall my occasional pieces on Intel legend Andy Grove's idea for drug discovery. (The first one wasn't too complimentary; the second was a bit more neutral). You always wonder, when you have a blog, if the people you're writing about have a chance to see what you've said - well, in this case, that question's been answered. Here's a recent article by Lisa Krieger in the San Jose Mercury News, detailing Grove's thoughts on medical innovation. Near the end, there's this:

Some biotech insiders are angered by Grove's dismissal of their dedication to the cause.

"It would be daft to suggest that if biopharma simply followed the lead of the semiconductor industry, all would be well," wrote Kevin Davies in the online journal Bio-IT World.com. "The semiconductor industry doesn't have the complex physiology of the human body -- or the FDA, for that matter, to contend with."

In his blog "In The Pipeline," biochemist Derek Lowe called Grove "rich, famous, smart and wrong." Grove's recent editorial, Lowe said, "is not a crazy idea, but I think it still needs some work. ... The details of it, which slide by very quickly in Grove's article, are the real problems. Aren't they always?"

Grove sighed.

"Sticks and stones. ... There were brutal comments but I don't care. The typical comment is 'Chips are not people, go (expletive) yourself.' But to not look over to the other side to see what other people in other professions have done -- that is a lazy intellectual activity."

My purpose in these posts, of course, has not been to insult Andy Grove. That doesn't get any of us anywhere. What I'd like to do, though, since he's clearly sincere about trying to speed up the pace of drug discovery (and with good reason), is to help get him up to speed on what it's like to actually discover drugs. It's not his field; it is mine. But I should note here that being an "expert" in drug discovery doesn't exactly give you a lot of great tools to insure success, unfortunately. What it does give you is the rough location of a lot of sinkholes that you might want to try to avoid. ("So you can go plunge into new, unexplored sinkholes", says a voice from the back.)

Grove's certainly a man worth taking seriously, and I hope that he, in turn, takes seriously those of us over here in the drug industry. This really is a strange business, and it's worth getting to know it. People like me - and there are still a lot of us, although it seems from all the layoffs that there are fewer every month - are the equivalents of the chip designers and production engineers at Intel. We have one foot in the labs, trying to troubleshoot this or that process, and figure out what the latest results mean. And we have one foot in the offices, where we try to see where the whole effort is going, and where it should go next. I think that perspectives from this level of drug research would be useful for someone like Andy Grove to experience: not so far down in the details that you can't see the sky, but not so far up in the air that all you see are the big, sweeping vistas.

And conversely, I think that we should take him up on his offer to look at what people in the chip industry (and others) have done. It can't hurt; we definitely need all the help we can get over here. I can't, off the top of my head, see many things that we could pick up on, for the reasons given in those earlier posts, but then again, I haven't worked over there, in the same way that Andy Grove hasn't worked over here. It's worth a try - and if anyone out there in the readership (journalist, engineer, what have you) would like to forward that on to Grove himself, please do. I'm always surprised at just how many people around the industry read this site, and to start a big discussion among people who actually do drug discovery, you could do worse.

Comments (46) + TrackBacks (0) | Category: Drug Development

October 17, 2011

Harvard to the Rescue

Email This Entry

Posted by Derek

Harvard is announcing a big initiative in systems biology, which is an interdisciplinary opportunity if there ever was one.

The Initiative in Systems Pharmacology is a signature component of the HMS Program in Translational Science and Therapeutics. There are two broad goals: first, to increase significantly our knowledge of human disease mechanisms, the nature of heterogeneity of disease expression in different individuals, and how therapeutics act in the human system; and second — based on this knowledge — to provide more effective translation of ideas to our patients, by improving the quality of drug candidates as they enter the clinical testing and regulatory approval process, thereby aiming to increase the number of efficacious diagnostics and therapies reaching patients.

All worthy stuff, of course. But there are a few questions that come up. These drug candidates that Harvard is going to be improving the quality of. . .whose are those, exactly? Harvard doesn't develop drugs, you know, although you might not realize that if you just read the press releases. And the e-mail announcement sent out to the Harvard Medical School list is rather less modest about the whole effort:

With this Initiative in Systems Pharmacology, Harvard Medical School is reframing classical pharmacology and marshaling its unparalleled intellectual resources to take a novel approach to an urgent problem: The alarming slowdown in development of new and lifesaving drugs.

A better understanding of the whole system of biological molecules that controls medically important biological behavior, and the effects of drugs on that system, will help to identify the best drug targets and biomarkers. This will help to select earlier the most promising drug candidates, ultimately making drug discovery and development faster, cheaper and more effective. A deeper understanding will also help clinicians personalize drug therapies, making better use of medicine we already have.

Again with all those drug candidates - and again, whose candidates are they going to be selecting? Don't get me wrong; I actually wish everyone well in this effort. There really are a lot of excellent scientists at Harvard, even if they tell you so, and this is the sort of problem that can take (and has taken) everything that people can throw at it. But it's also worth remembering Harvard's approach to licensing and industrial collaboration. It's. . .well, let's just say that they didn't get that endowment up to its present size by letting much slip through their fingers. Many are those who've negotiated with the university and come away wanting to add ". . .et Pecunia" to that Latin motto.

So we'll see what comes out of this. But Harvard Medical School is indeed on the case.

Comments (41) + TrackBacks (0) | Category: Drug Development

October 11, 2011

Too Many Cancer Drugs? Too Few? About Right?

Email This Entry

Posted by Derek

According to Bruce Booth (@LifeSciVC on Twitter), Ernst & Young have estimated the proportion of drugs in the clinic in the US that are targeting cancer. Anyone want to pause for a moment to make a mental estimate of their own?

Well, I can tell you that I was a bit low. The E&Y number is 44%. The first thought I have is that I'd like to see that in some historical perspective, because I'd guess that it's been climbing for at least ten years now. My second thought is to wonder if that number is too high - no, not whether the estimate is too high. Assuming that the estimate is correct, is that too high a proportion of drug research being spent in oncology, or not?

Several factors led to the rise in the first place - lots of potential targets, ability to charge a lot for anything effective, an overall shorter and more definitive clinical pathway, no need for huge expensive ad campaigns to reach the specialists. Have these caused us to overshoot?

Comments (22) + TrackBacks (0) | Category: Cancer | Clinical Trials | Drug Development | Drug Industry History

October 7, 2011

Different Drug Companies Make Rather Different Compounds

Email This Entry

Posted by Derek

Now here's a paper, packed to the edges with data, on what kinds of drug candidate compounds different companies produce. The authors assembled their list via the best method available to outsiders: they looked at what compounds are exemplified in patent filings.

What they find is that over the 2000-2010 period, not much change has taken place, on average, in the properties of the molecules that are showing up. Note that we're assuming, for purposes of discussion, that these properties - things like molecular weight, logP, polar surface area, amount of aromaticity - are relevant. I'd have to say that they are. They're not the end of the discussion, because there are plenty of drugs that violate one or more of these criteria. But there are even more that don't, and given the finite amount of time and money we have to work with, you're probably better off approaching a new target with five hundred thousand compounds that are well within the drug-like property boxes rather than five hundred thousand that aren't. And at the other end of things, you're probably better off with ten clinical candidates that mostly fit versus ten that mostly don't.

But even if overall properties don't seem to be changing much, that doesn't mean that there aren't differences between companies. That's actually the main thrust of the paper: the authors compare Abbott, Amgen, AstraZeneca, Bayer-Schering, Boehringer, Bristol-Myers Squibb, GlaxoSmithKline, J&J, Lilly, Merck, Novartis, Pfizer, Roche, Sanofi, Schering-Plough, Takeda, Wyeth, and Vertex. Of course, these organizations filed different numbers of patents, on different targets, with different numbers of compounds. For the record, Merck and GSK filed the most patents during those ten years (over 1500), while Amgen and Takeda filed the fewest (under 300). Merck and BMS had the largest number of unique compounds (over 70,000), and Takeda and Bayer-Schering had the fewest (in the low 20,000s). I should note that AstraZeneca just missed the top two in both patents and compounds.
[Radar plot from the paper: normalized, target-unbiased property profiles for each company.]
If you just look at the raw numbers, ignoring targeting and therapeutic areas, Wyeth, Bayer-Schering, and Novartis come out looking the worst for properties, while Vertex and Pfizer look the best. But what's interesting is that even after you correct for targets and the like, organizations still differ quite a bit in the sorts of compounds that they turn out. Takeda, Lilly, and Wyeth, for example, were at the top of the cLogP rankings (numerically, "top" meaning the greasiest). Meanwhile, Vertex, Pfizer, and AstraZeneca were at the other end of the scale in cLogP. In molecular weight, Novartis, Boehringer, and Schering-Plough were at the high end (up around 475), while Vertex was at the low end (around 425). I'm showing a radar-style plot from the paper where they cover several different target-unbiased properties (which have been normalized for scale), and you can see that different companies do cover very different sorts of space. (The numbers next to the company names are the total number of shared targets found and the total number of shared-target observations used - see the paper if you need more details on how they compiled the numbers).
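
If you're curious how a plot like that gets put together, the key step is just normalizing each property onto a common scale before plotting. A quick sketch with invented numbers (the real, target-corrected values are in the paper):

# Min-max normalize each company's mean property values so that quantities on
# very different scales (molecular weight vs. cLogP) can share one radar plot.
# All numbers here are invented for illustration.
company_means = {
    "Company A": {"MW": 475, "cLogP": 4.1, "ArRings": 3.2},
    "Company B": {"MW": 425, "cLogP": 3.2, "ArRings": 2.6},
    "Company C": {"MW": 450, "cLogP": 4.6, "ArRings": 3.0},
}

normalized = {name: {} for name in company_means}
for prop in ["MW", "cLogP", "ArRings"]:
    values = [means[prop] for means in company_means.values()]
    lo, hi = min(values), max(values)
    for name, means in company_means.items():
        normalized[name][prop] = round((means[prop] - lo) / (hi - lo), 2) if hi > lo else 0.5

for name, scores in normalized.items():
    print(name, scores)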

Now, it's fair to ask how relevant the whole sweep of patented compounds might be, since only a few ever make it deep into the clinic. And some companies just have different IP approaches, patenting more broadly or narrowly. But there's an interesting comparison near the end of the paper, where the authors take a look at the set of patents that cover only single compounds. Now, those are things that someone has truly found interesting and worth extra layers of IP protection, and they average to significantly lower molecular weights, cLogP values, and number of rotatable bonds than the general run of patented compounds. Which just gets back to the points I was making in the first paragraph - other things being equal, that's where you'd want to spend more of your time and money.

What's odd is that the trends over the last ten years haven't been more pronounced. As the paper puts it:

Over the past decade, the mean overall physico-chemical space used by many pharmaceutical companies has not changed substantially, and the overall output remains worryingly at the periphery of historical oral drug chemical space. This is despite the fact that potential candidate drugs, identified in patents protecting single compounds, seem to reflect physiological and developmental pressures, as they have improved drug-like properties relative to the full industry patent portfolio. Given these facts, and the established influence of molecular properties on ADMET risks and pipeline progression, it remains surprising that many organizations are not adjusting their strategies.

The big question that this paper leaves unanswered, because there's no way for them to answer it, is how these inter-organizational differences get going and how they continue. I'll add my speculations in another post - but speculations they will be.

Comments (30) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

September 2, 2011

How Many New Drug Targets Aren't Even Real?

Email This Entry

Posted by Derek

So, are half the interesting new results in the medical/biology/med-chem literature impossible to reproduce? I linked earlier this year to an informal estimate