Corante

About this Author
College chemistry, 1983

Derek Lowe The 2002 Model

After 10 years of blogging. . .

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe

Chemistry and Drug Data: Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigilatta
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Category Archives

Drug Industry History

August 28, 2014

The Early FDA

Posted by Derek

Here's a short video history of the FDA, courtesy of BioCentury TV. The early days, especially Harvey Wiley and the "Poison Squad", are truly wild and alarming by today's standards. But then, the products that were on the market back then were pretty alarming, too. . .

Comments (2) + TrackBacks (0) | Category: Drug Industry History | Regulatory Affairs

Drug Repurposing

Posted by Derek

A reader has sent along the question: "Have any repurposed drugs actually been approved for their new indication?" And initially, I thought, confidently but rather blankly, "Well, certainly, there's. . . and. . .hmm", but then the biggest example hit me: thalidomide. It was, infamously, a sedative and remedy for morning sickness in its original tragic incarnation, but came back into use first for leprosy and then for multiple myeloma. The discovery of its efficacy in leprosy, specifically erythema nodosum leprosum, was a complete and total accident, it should be noted - the story is told in the book Dark Remedy. A physician gave a suffering leprosy patient the only sedative in the hospital's pharmacy that hadn't been tried, and it had a dramatic and unexpected effect on their condition.

That's an example of a total repurposing - a drug that had actually been approved and abandoned (and how) coming back to treat something else. At the other end of the spectrum, you have the normal sort of market expansion that many drugs undergo: kinase inhibitor Insolunib is approved for Cancer X, then later on for Cancer Y, then for Cancer Z. (As a side note, I would almost feel like working for free for a company that would actually propose "insolunib" as a generic name. My mortgage banker might not see things the same way, though). At any rate, that sort of thing doesn't really count as repurposing, in my book - you're using the same effect that the compound was developed for and finding closely related uses for it. When most people think of repurposing, they're thinking about cases where the drug's mechanism is the same, but turns out to be useful for something that no one realized, or those times where the drug has another mechanism that no one appreciated during its first approval.

Eflornithine, an ornithine decarboxylase inhibitor, is a good example - it was originally developed as a possible anticancer agent, but never came close to being submitted for approval. It turned out to be very effective for trypanosomiasis (sleeping sickness). Later, it was approved for slowing the growth of unwanted facial hair. This led, by the way, to an unfortunate and embarrassing period where the compound was available as a cream to improve appearance in several first-world countries, but not as a tablet to save lives in Africa. Aventis, as they were at the time, partnered with the WHO to produce the compound again and donated it to the agency and to Doctors Without Borders. (I should note that, with a molecular weight of 182, eflornithine just barely missed my no-larger-than-aspirin cutoff for the smallest drugs on the market).

Drugs that affect the immune system (cyclosporine, the interferons, anti-TNF antibodies etc.) are in their own category for repurposing, I'd say. They've had particularly broad therapeutic profiles, since that's such a nexus for infectious disease, cancer, inflammation and wound healing, and (naturally) autoimmune diseases of all sorts. Orencia (abatacept) is an example of this. It's approved for rheumatoid arthritis, but has been studied in several other conditions, and there's a report that it's extremely effective against a common kidney condition, focal segmental glomerulosclerosis. Drugs that affect the central or peripheral nervous system also have Swiss-army-knife aspects, since that's another powerful fuse box in a living system. The number of indications that a beta-blocker like propranolol has seen is enough evidence on its own!

C&E News did a drug repurposing story a couple of years ago, and included a table of examples. Some others can be found in this Nature Reviews Drug Discovery paper from 2004. I'm not aware of any new repurposing/repositioning approvals since then, but there's an awful lot of preclinical and clinical activity going on.

Comments (35) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History | Regulatory Affairs

August 27, 2014

The Smallest Drugs

Posted by Derek

Here is the updated version of the "smallest drugs" collection that I did the other day. Here are the criteria I used: the molecular weight cutoff was set, arbitrarily, at aspirin's 180. I excluded the inhaled anaesthetics, only allowing things that are oils or solids in their form of use. As a small-molecule organic chemist, I only allowed organic compounds - lithium and so on are for another category. And the hardest one was "Must be in current use across several countries". That's another arbitrary cutoff, but it excludes pemoline (176), for example, which has basically been removed from the market. It also gets rid of a lot of historical things like aminorex. That's not to say that there aren't some old drugs on the remaining list, but they're still in there pitching (even sulfanilamide, interestingly). I'm sure I've still missed a few.
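For what it's worth, the selection logic above is simple enough to sketch in a few lines of code. This is purely an illustration with a handful of hand-picked example compounds and approximate molecular weights, not the actual dataset behind the graphic:

```python
# Sketch of the "smallest drugs" selection criteria from the post, applied to
# a tiny hypothetical candidate list. A real pass would start from a full
# drug database with proper structure handling.

ASPIRIN_MW = 180.16  # arbitrary cutoff: no heavier than aspirin

candidates = [
    # (name, mol. weight, organic?, oil/solid in form of use?, in current multi-country use?)
    ("metformin",     129.16, True,  True,  True),
    ("sulfanilamide", 172.20, True,  True,  True),
    ("pemoline",      176.17, True,  True,  False),  # essentially withdrawn
    ("sevoflurane",   200.05, True,  False, True),   # inhaled anaesthetic
    ("lithium",         6.94, False, True,  True),   # inorganic
    ("eflornithine",  182.17, True,  True,  True),   # just over the cutoff
]

def passes(name, mw, organic, condensed_phase, in_use):
    """Apply the four criteria described in the post."""
    return mw <= ASPIRIN_MW and organic and condensed_phase and in_use

smallest_drugs = [c[0] for c in candidates if passes(*c)]
print(smallest_drugs)  # only metformin and sulfanilamide survive the filter
```

Each criterion is just a boolean column here; the judgment calls (what counts as "current use", for instance) are exactly the parts that don't reduce to code.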

What can be learned from this exercise? Well, take a look at those structures. There sure are a lot of carboxylic acids and phenols, and a lot more sulfur than we're used to seeing. And pretty much everything is polar, very polar, which makes sense: if you're down in this fragment-sized space, you've got to be making some strong interactions with biological targets. These are fragments that are also drugs, so fragment-based drug discovery people may find this interesting as the bedrock layer of the whole field.

Some of these are pretty specialized and obscure - you're only going to see pralidoxime if you have the misfortune to be exposed to nerve gas, for example. But there are some huge, huge compounds on the list, too, gigantic sellers that have changed their whole therapeutic areas and are still in constant use. Metformin alone is a constant rebuke to a lot of our med-chem prejudices: who among us, had we never heard of it, would not have crossed it off our lists of screening hits? So give these small things a chance, and keep an open mind. They're real, and they can really be drugs.
(Graphic: the smallest drugs, final set)

Comments (30) + TrackBacks (0) | Category: Chemical News | Drug Industry History

August 26, 2014

A New Look at Phenotypic Screening

Posted by Derek

There have been several analyses that have suggested that phenotypic drug discovery was unusually effective in delivering "first in class" drugs. Now comes a reworking of that question, and these authors (Jörg Eder, Richard Sedrani, and Christian Wiesmann of Novartis) find plenty of room to question that conclusion.

What they've done is to deliberately focus on the first-in-class drug approvals from 1999 to 2013, and take a detailed look at their origins. There have been 113 such drugs, and they find that 78 of them (45 small molecules and 33 biologics) come from target-based approaches, and 35 from "systems-based" approaches. They further divide the latter into "chemocentric" discovery, based around known pharmacophores, and so on, versus pure from-the-ground-up phenotypic screening, and the 33 systems compounds then split out 25 to 8.

As you might expect, a lot of these conclusions depend on what you classify as "phenotypic". The earlier paper stopped at the target-based/not target-based distinction, but this one is more strict: phenotypic screening is the evaluation of a large number of compounds (likely a random assortment) against a biological system, where you look for a desired phenotype without knowing what the target might be. And that's why this paper comes up with the term "chemocentric drug discovery", to encompass isolation of natural products, modification of known active structures, and so on.

Such conclusions also depend on knowing what approach was used in the original screening, and as everyone who's written about these things admits, this isn't always public information. The many readers of this site who've seen a drug project go from start to finish will appreciate how hard it is to find an accurate retelling of any given effort. Stuff gets left out, forgotten, is un- (or over-)appreciated, swept under the rug, etc. (And besides, an absolutely faithful retelling, with every single wrong turn left in, would be pretty difficult to sit through, wouldn't it?) At any rate, by the time a drug reaches FDA approval, many of the people who were present at the project's birth have probably scattered to other organizations entirely, have retired or been retired against their will, and so on.

But against all these obstacles, the authors seem to have done as thorough a job as anyone could possibly do. So looking further at their numbers, here are some more detailed breakdowns. Of those 45 first-in-class small molecules, 21 were from screening (18 of those high-throughput screening, 1 fragment-based, 1 in silico, and one low-throughput/directed screening). 18 came from chemocentric approaches, and 6 from modeling off of a known compound.

Of the 33 systems-based drugs, those 8 that were "pure phenotypic" feature one antibody (alemtuzumab) which was raised without knowledge of its target, and seven small molecules: sirolimus, fingolimod, eribulin, daptomycin, artemether–lumefantrine, bedaquiline and trametinib. The first three of those are natural products, or derived from natural products. Outside of fingolimod, all of them are anti-infectives or antiproliferatives, which I'd bet reflects the comparative ease of running pure phenotypic assays with those readouts.

Here are the authors on the discrepancies between their paper and the earlier one:

At first glance, the results of our analysis appear to significantly deviate from the numbers previously published for first-in-class drugs, which reported that of the 75 first-in-class drugs discovered between 1999 and 2008, 28 (37%) were discovered through phenotypic screening, 17 (23%) through target-based approaches, 25 (33%) were biologics and five (7%) came from other approaches. This discrepancy occurs for two reasons. First, we consider biologics to be target-based drugs, as there is little philosophical distinction in the hypothesis-driven approach to drug discovery for small-molecule drugs versus biologics. Second, the past 5 years of our analysis time frame have seen a significant increase in the approval of first-in-class drugs, most of which were discovered in a target-based fashion.

Fair enough, and it may well be that many of us have been too optimistic about the evidence for the straight phenotypic approach. But the figure we don't have (and aren't going to get) is the overall success rate for both techniques. The number of target-based and phenotypic-based screening efforts that have been quietly abandoned - that's what we'd need to have to know which one has the better delivery percentage. If 78/113 drugs, 69% of the first-in-class approvals from the last 15 years, have come from target-based approaches, how does that compare with the total number of first-in-class drug projects? My own suspicion is that target-based drug discovery has accounted for more than 70% of the industry's efforts over that span, which would mean that systems-based approaches have been relatively over-performing. But there's no way to know this for sure, and I may just be coming up with something that I want to hear.
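To make the base-rate argument concrete, here's a back-of-the-envelope calculation. The 78/113 approval split comes from the paper; the effort share is purely an assumed number (which is the whole point - nobody has the real denominator):

```python
# Base-rate sketch: approvals per unit of effort, under an assumed effort split.
target_approvals, systems_approvals = 78, 35
total = target_approvals + systems_approvals  # 113 first-in-class approvals

# Assumption for illustration: 80% of industry projects were target-based.
assumed_target_effort_share = 0.80

target_share = target_approvals / total    # ~0.69 of approvals
systems_share = systems_approvals / total  # ~0.31 of approvals

# "Yield ratio": share of approvals divided by share of effort.
# A ratio above 1 means that approach delivered more than its share.
target_yield = target_share / assumed_target_effort_share
systems_yield = systems_share / (1 - assumed_target_effort_share)

print(f"target-based: {target_yield:.2f}, systems-based: {systems_yield:.2f}")
```

Under that assumption, systems-based work comes out ahead on a per-effort basis; lower the assumed effort share toward 69% and the advantage disappears, which is exactly why the missing denominator matters.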

That might especially be true when you consider that there are many therapeutic areas where phenotypic screening is basically impossible (Alzheimer's, anyone?). But there's a flip side to that argument: it means that there's no special phenotypic sauce that you can spread around, either. The fact that so many of those pure-phenotypic drugs are in areas with such clear cellular readouts is suggestive. Even if phenotypic screening were to have some statistical advantage, you can't just go around telling people to be "more phenotypic" and expect increased success, especially outside anti-infectives or antiproliferatives.

The authors have another interesting point to make. As part of their analysis of these 113 first-in-class drugs, they've tried to see what the timeline is from the first efforts in the area to an approved drug. That's not easy, and there are some arbitrary decisions to be made. One example they give is anti-angiogenesis. The first report of tumors being able to stimulate blood vessel growth was in 1945. The presence of soluble tumor-derived growth factors was confirmed in 1968. VEGF, the outstanding example of these, was purified in 1983, and was cloned in 1989. So when did the starting pistol fire for drug discovery in this area? The authors choose 1983, which seems reasonable, but it's a judgment call.

So with all that in mind, they find that the average lead time (from discovery to drug) for a target-based project is 20 years, and for a systems-based drug it's been 25 years. They suggest that since target-based drug discovery has only been around since the late 1980s or so, that its impact is only recently beginning to show up in the figures, and that it's in much better shape than some would suppose.

The data also suggest that target-based drug discovery might have helped reduce the median time for drug discovery and development. Closer examination of the differences in median times between systems-based approaches and target-based approaches revealed that the 5-year median difference in overall approval time is largely due to statistically significant differences in the period from patent publication to FDA approval, where target-based approaches (taking 8 years) took only half the time as systems-based approaches (taking 16 years). . .

The pharmaceutical industry has often been criticized for not being sufficiently innovative. We think that our analysis indicates otherwise and perhaps even suggests that the best is yet to come as, owing to the length of time between project initiation and launch, new technologies such as high-throughput screening and the sequencing of the human genome may only be starting to have a major impact on drug approvals. . .

Now that's an optimistic point of view, I have to say. The genome certainly still has plenty of time to deliver, but you probably won't find too many other people saying in 2014 that HTS is only now starting to have an impact on drug approvals. My own take on this is that they're covering too wide a band of technologies with such statements, lumping together things that have come in at different times during this period and which would be expected to have differently-timed impacts on the rate of drug discovery. On the other hand, I would like this glass-half-full view to be correct, since it implies that things should be steadily improving in the business, and we could use it.

But the authors take pains to show, in the last part of their paper, that they're not putting down phenotypic drug discovery. In fact, they're calling for it to be strengthened as its own discipline, and not (as they put it) just as a falling back to the older "chemocentric" methods of the 1980s and before:

Perhaps we are in a phase today similar to the one in the mid-1980s, when systems-based chemocentric drug discovery was largely replaced by target-based approaches. This allowed the field to greatly expand beyond the relatively limited number of scaffolds that had been studied for decades and to gain access to many more pharmacologically active compound classes, providing a boost to innovation. Now, with an increased chemical space, the time might be right to further broaden the target space and open up new avenues. This could well be achieved by investing in phenotypic screening using the compound libraries that have been established in the context of target-based approaches. We therefore consider phenotypic screening not as a neoclassical approach that reverts to a supposedly more successful systems-based method of the past, but instead as a logical evolution of the current target-based activities in drug discovery. Moreover, phenotypic screening is not just dependent on the use of many tools that have been established for target-based approaches; it also requires further technological advancements.

That seems to me to be right on target: we probably are in a period just like the mid-to-late 1980s. In that case, though, a promising new technology was taking over because it seemed to offer so much more. Today, it's more driven by disillusionment with the current methods - but that means, even more, that we have to dig in and come up with some new ones and make them work.

Comments (7) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

August 25, 2014

Small Molecules - Really, Really Small

Posted by Derek

Mentioning such a small compound as pirfenidone prompts me to put up the graphic shown below: these are the smallest commonly used drugs that I can think of. (OK, there's cocaine as a nasal anaesthetic - no, really - but that's where I draw the line at "commonly used".) Nominations for ones that I've missed are welcome, and I'll update the list as needed. Note: four more have been added since the initial post, with more to come. This sort of thing really makes a chemist think, though - some of these compounds are very good indeed at what they do, and have been wildly successful. We need to keep an open mind about small molecules, that's for sure, no matter how small they are.

Update: see this follow-up post for the latest version of the graphic.

Comments (50) + TrackBacks (0) | Category: Drug Industry History

August 22, 2014

The Palbociclib Saga: Or Why We Need a Lot of Drug Companies

Posted by Derek

Science has an article by journalist Ken Garber on palbociclib, the Pfizer CDK4 compound that came up here the other day when we were discussing their oncology portfolio. You can read up on the details of how the compound was put in the fridge for several years, only to finally emerge as one of the company's better prospects. The roots of the project go back to about 1995 at Parke-Davis:

Because the many CDK family members are almost identical, “creating a truly selective CDK4 inhibitor was very difficult,” says former Parke-Davis biochemist Dave Fry, who co-chaired the project with chemist Peter Toogood. “A lot of pharmaceutical companies failed at it, and just accepted broad-spectrum CDK inhibitors as their lead compounds.” But after 6 years of work, the pair finally succeeded with the help of some clever screens that could quickly weed out nonspecific “dirty” compounds.

Their synthesis in 2001 of palbociclib, known internally as PD-0332991, was timely. By then, many dirty CDK inhibitors from other companies were already in clinical trials, but they worked poorly, if at all. Because they hit multiple CDK targets, these compounds caused too much collateral damage to normal cells. . .Eventually, most efforts to fight cancer by targeting the cell cycle ground to a halt. “Everything sort of got hung up, and I think people lost enthusiasm,” Slamon says.

PD-0332991 fell off the radar screen. Pfizer, which had acquired Warner-Lambert/Parke-Davis in 2000 mainly for the cholesterol drug Lipitor, did not consider the compound especially promising, Fry says, and moved it forward haltingly at best. “We had one of the most novel compounds ever produced,” Fry says, with a mixture of pride and frustration. “The only compound in its class.”

A major merger helped bury the PD-0332991 program. In 2003, Pfizer acquired Swedish-American drug giant Pharmacia, which flooded Pfizer's pipeline with multiple cancer drugs, all competing for limited clinical development resources. Organizational disarray followed, says cancer biologist Dick Leopold, who led cancer drug discovery at the Ann Arbor labs from 1989 to 2003. “Certainly there were some politics going on,” he says. “Also just some logistics with new management and reprioritization again and again.” In 2003, Pfizer shut down cancer research in Ann Arbor, which left PD-0332991 without scientists and managers who could demand it be given a chance, Toogood says. “All compounds in this business need an advocate.”

So there's no doubt that all the mergers and re-orgs at Pfizer slowed this compound down, and no doubt a long list of others, too. The problems didn't end there. The story goes on to show how the compound went into Phase I in 2004, but only got into Phase II in 2009. The problem is, well before that time it was clear that there were tumor types that should be more sensitive to CDK4 inhibition. See this paper from 2006, for example (and there were some before this as well).

It appears that Pfizer wasn't going to develop the compound at all (thus that long delay after Phase I). They made it available as a research tool to Selina Chen-Kiang at Weill Cornell, who saw promising results with mantle cell lymphoma, then Dennis Slamon and Richard Finn at UCLA profiled the compound in breast cancer lines and took it into a small trial there, with even more impressive results. And at this point, Pfizer woke up.

Before indulging in a round of Pfizer-bashing, though, it's worth remembering that stories broadly similar to this are all too common. If you think that the course of true love never did run smooth, you should see the course of drug development. Warner-Lambert (for example) famously tried to kill Lipitor more than once during its path to the market, and it's a rare blockbuster indeed that hasn't passed through at least one near-death-experience along the way. It stands to reason: since the great majority of all drug projects die, the few that make it through are the ones that nearly died.

There are also uncounted stories of drugs that nearly lived. Everyone who's been around the industry for a while has, or has heard, tales of Project X for Target Y, which was going along fine and looked like a winner until Company Z dropped it for Stupid Reason. . .uh, Aleph. (Ran out of letters there). And if only they'd realized this, that, and the other thing, that compound would have made it to market, but no, they didn't know what they had and walked away from it, etc. Some of these stories are probably correct: you know that there have to have been good projects dropped for the wrong reasons and never picked up again. But they can't all be right. Given the usual developmental success rates, most of these things would have eventually wiped out for some reason. There's an old saying among writers that the definition of a novel is a substantial length of narrative fiction that has something wrong with it. In the same way, every drug that's on the market has something wrong with it (usually several things), and all it takes is a bit more going wrong to keep it from succeeding at all.

So where I fault Pfizer in all this is in the way that this compound got lost in all the re-org shuffle. If it had developed more normally, its activity would have been discovered years earlier. Now, it's not like there are dozens of drugs that haven't made it to market because Pfizer dropped the ball on them - but given the statistics, I'll bet that there are several (two or three? five?) that could have made it through by now, if everyone hadn't been so preoccupied with merging, buying, moving, rearranging, and figuring out if they were getting laid off or not.

The good thing is that other companies stepped into the field on the basis of those earlier publications, and found CDK4/6 inhibitors of their own (notably Novartis and Lilly). This is why I think that huge mergers hurt the intellectual health of the drug industry. Take it to the reductio ad not-all-that-absurdum of One Big Drug Company. If we had that, and only that, then whole projects and areas of research would inevitably get shelved, and there would be no one left to pick them up at all. (I'll also note, in passing, that should all of the CDK inhibitors make it to market, there will be yahoos who decry the whole thing as nothing but a bunch of fast-follower me-too drugs, waste of time and money, profits before people, and so on. Watch for it.)

Comments (13) + TrackBacks (0) | Category: Cancer | Drug Development | Drug Industry History

August 20, 2014

Did Pfizer Cut Back Some of Its Best Compounds?

Posted by Derek

John LaMattina has a look at Pfizer's oncology portfolio, and what their relentless budget-cutting has been doing to it. The company is taking some criticism for having outlicensed two compounds (tremelimumab to AstraZeneca and neratinib to Puma) which seem to be performing very well after Pfizer ditched them. Here's LaMattina (a former Pfizer R&D head, for those who don't know):

Unfortunately, over 15 years of mergers and severe budget cuts, Pfizer has not been able to prosecute all of the compounds in its portfolio. Instead, it has had to make choices on which experimental medicines to keep and which to set aside. However, as I have stated before, these choices are filled with uncertainties as oftentimes the data in hand are far from complete. But in oncology, Pfizer seems to be especially snake-bit in the decisions it has made.

That goes for their internal compounds, too. As LaMattina goes on to say, palbociclib is supposed to be one of their better compounds, but it was shelved for several years due to more budget-cutting and the belief that the effort would be better spent elsewhere. It would be easy for an outside observer to whack away at the company and wonder how incompetent they could be to walk away from all these winners, but that really isn't fair. It's very hard in oncology to tell what's going to work out and what isn't - impossible, in fact, after compounds have progressed to a certain stage. The only way to be sure is to take these things on into the clinic and see, unfortunately (and there you have one of the reasons things are so expensive around here).

Pfizer brought up more interesting compounds than it later was able to develop. It's a good question to wonder what they could have done with these if they hadn't been pursuing their well-known merger strategy over these years, but we'll never know the answer to that one. The company got too big and spent too much money, and then tried to cure that by getting even bigger. Every one of those mergers was a big disruption, and you sometimes wonder how anyone kept their focus on developing anything. Some of its drug-development choices were disastrous and completely their fault (the Exubera inhaled-insulin fiasco, for example), but their decisions in their oncology portfolio, while retrospectively awful, were probably quite defensible at the time. But if they hadn't been occupied with all those upheavals over the last ten to fifteen years, they might have had a better chance of focusing on at least a few more of their own compounds.

Their last big merger was with Wyeth. If you take Pfizer's R&D budget and Wyeth's and add them, you don't get Pfizer's R&D post-merger. Not even close. Pfizer's R&D is smaller now than their budget was alone before the deal. Pyrrhus would have recognized the problem.

Comments (20) + TrackBacks (0) | Category: Business and Markets | Cancer | Drug Development | Drug Industry History

July 30, 2014

Abandoned Pharma

Posted by Derek

It's not the most cheerful topic in the world, but NPR recently had an item on the decommissioned pharma research sites of New Jersey (of which there are many). Some of these are quite large, and correspondingly hard to unload onto anyone else. (This is, of course, a problem that is not unique to New Jersey, with plenty of ex-pharma sites around the US and the UK in particular falling into this category).

I got to see this in an earlier and less severe form when I worked at Schering-Plough: the company's old Bloomfield site proved difficult to deal with in the early 1990s once everyone had moved out of it. No other company (large or small) wanted it, and I was told that an attempt to more or less donate it to Rutgers University had fallen through as well. In the end, the buildings were demolished and the land was sold, with a Home Depot (and its parking lot) taking up a good part of the space.

That's always one option. Another is what happened to my next stop in pharma, the Bayer campus at West Haven. Yale picked that one up for what we heard was a good price, and turned it into the Yale West Research Campus. So that at least keeps the place doing research, which has to beat turning it into a hardware store. Breaking things up into an incubator for smaller companies is a good plan, too, when it can be done.

Comments (51) + TrackBacks (0) | Category: Drug Industry History

July 25, 2014

The Antibiotic Gap: It's All of the Above

Posted by Derek

Here's a business-section column at the New York Times on the problem of antibiotic drug discovery. To those of us following the industry, the problems of antibiotic drug discovery are big pieces of furniture that we've lived with all our lives; we hardly even notice if we bump into them again. You'd think that readers of the Times or other such outlets would have come across the topic a few times before, too, but there must always be a group for which it's new, no matter how many books and newspaper articles and magazine covers and TV segments are done on it. It's certainly important enough - there's no doubt that we really are going to be in big trouble if we don't keep up the arms race against the bacteria.

This piece takes the tack of "If drug discovery is actually doing OK, where are the new antibiotics?" Here's a key section:

Antibiotics face a daunting proposition. They are not only becoming more difficult to develop, but they are also not obviously profitable. Unlike, say, cancer drugs, which can be spectacularly expensive and may need to be taken for life, antibiotics do not command top dollar from hospitals. What’s more, they tend to be prescribed for only short periods of time.

Importantly, any new breakthrough antibiotic is likely to be jealously guarded by doctors and health officials for as long as possible, and used only as a drug of last resort to prevent bacteria from developing resistance. By the time it became a mass-market drug, companies fear, it could be already off patent and subject to competition from generics that would drive its price down.

Antibiotics are not the only drugs getting the cold shoulder, however. Research on treatments to combat H.I.V./AIDS is also drying up, according to the research at Yale, mostly because the cost and time required for development are increasing. Research into new cardiovascular therapies has mostly stuck to less risky “me too” drugs.

This mixes several different issues, unfortunately, and if a reader doesn't follow the drug industry (or medical research in general), then they may well not realize this. (And that's the most likely sort of reader for this article - people who do follow such things have heard all of this before). The reason that cardiovascular drug research seems to have waned is that we already have a pretty good arsenal of drugs for the most common cardiovascular conditions. There are a huge number of options for managing high blood pressure, for example, and they're mostly generic drugs by now. The same goes for lowering LDL: it's going to be hard to beat the statins, especially generic Lipitor. But there is a new class coming along targeting PCSK9 that is going to try to do just that. This is a very hot area of drug development (as the author of the Times column could have found without much effort), although the only reason it's so big is that PCSK9 is the only pathway known that could actually be more effective at lowering LDL than the statins. (How well it does that in the long term, and what the accompanying safety profile might be, are the subject of ongoing billion-dollar efforts). The point is, the barriers to entry in cardiovascular are, by now, rather high: a lot of good drugs are known that address a lot of the common problems. If you want to go after a new drug in the space, you need a new mechanism, like PCSK9 (and those are thin on the ground), or you need to find something that works against some of the unmet needs that people have already tried to fix and failed (such as stroke, a notorious swamp of drug development which has swallowed many large expeditions without a trace).

To be honest, HIV is a smaller-scale version of the same thing. The existing suite of therapies is large and diverse, and keeps the disease in check in huge numbers of patients. All sorts of other mechanisms have been tried as well, and found wanting in the development stage. If you want to find a new drug for HIV, you have a very high entry barrier again, because pretty much all of the reasonable ways to attack the problem have already been tried. The focus now is on trying to "flush out" latent HIV from cells, which might actually lead to a cure. But no one knows yet if that's feasible, how well it will work when it's tried, or what the best way to do it might be. There were headlines on this just the other day.

The barriers to entry in the antibiotic field are similarly high, and that's what this article seems to have missed completely. All the known reasonable routes of antibiotic action have been thoroughly worked over by now. As mentioned here the other day, if you just start screening your million-compound libraries against bacteria to see what kills them, you will find a vast pile of stuff that will kill your own cells, too, which is not what you want, and once you've cleared those out, you will find a still-pretty-vast pile of compounds that work through mechanisms that we already have antibiotics targeting. Needles in haystacks have nothing on this.

In fact, a lot of not-so-reasonable routes have been worked over, too. I keep sending people to this article, which is now seven years old and talks about research efforts even older than that. It's the story of GlaxoSmithKline's exhaustive antibiotics research efforts, and it also tells you how many drugs they got out of it all in the end: zip. Not a thing. From what I can see, the folks who worked on this over the last fifteen or twenty years at AstraZeneca could easily write the same sort of article - they've published all kinds of things against a wide variety of bacterial targets, and I don't think any of it has led to an actual drug.

This brings up another thing mentioned in the Times column. Here's the quote:

This is particularly striking at a time when the pharmaceutical industry is unusually optimistic about the future of medical innovation. Dr. Mikael Dolsten, who oversees worldwide research and development at Pfizer, points out that if progress in the 15 years until 2010 or so looked sluggish, it was just because it takes time to figure out how to turn breakthroughs like the map of the human genome into new drugs.

Ah, but bacterial genomes were sequenced before the human one was (and they're simpler, at that). Keep in mind also that proof-of-concept for new targets can be easier to obtain in bacteria (if you manage to find any chemical matter, that is). I well recall talking with a bunch of people in 1997 who were poring over the sequence data for a human pathogen, fresh off the presses, and their optimism about all the targets that they were going to find in there, and the great new approaches they were going to be able to take. They tried it. None of it worked. Over and over, none of it worked. People had a head start in this area, genomically speaking, with an easier development path than many other therapeutic areas, and still nothing worked.

So while many large drug companies have exited antibiotic research over the years, not all of them did. But the ones that stayed have poured effort and money, over and over, down a large drain. Nothing has come out of the work. There are a number of smaller companies in the space as well, for whom even a small success would mean a lot, but they haven't been having an easy time of it, either.

Now, one thing the Times article gets right is that the financial incentives for new antibiotics are a different thing entirely than the rest of the drug discovery world. Getting one of these new approaches in LDL or HIV to work would at least be highly profitable - the PCSK9 competitors certainly are working on that basis. Alzheimer's is another good example of an area that has yielded no useful drugs whatsoever despite ferocious amounts of effort, but people keep at it because the first company to find a real Alzheimer's drug will be very well rewarded indeed. (The Times article says that this hasn't been researched enough, either, which makes me wonder what areas have been). But any great new antibiotic would be shelved for emergencies, and rightly so.

But that by itself is not enough to explain the shortage of those great new antibiotics. It's everything at once: the traditional approaches are played out and the genomic-revolution stuff has been tried, so the unpromising economics makes the search for yet another approach that much harder.

Note: be sure to see the comments for perspectives from others who've also done antibiotic research, including some who disagree. I don't think we'll find anyone who says it's easy, but you never know.

Comments (56) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Infectious Diseases

July 15, 2014

K. C. Nicolaou on Drug Discovery

Email This Entry

Posted by Derek

K. C. Nicolaou has an article in the latest Angewandte Chemie on the future of drug discovery, which may seem a bit surprising, considering that he's usually thought of as Mister Total Synthesis, rather than Mister Drug Development Project. But I can report that it's relentlessly sensible. Maybe too sensible. It's such a dose of the common wisdom that I don't think it's going to be of much use or interest to people who are actually doing drug discovery - you've already had all these thoughts yourself, and more than once.

But for someone catching up from outside the field, it's not a bad survey at all. It gets across how much we don't know, and how much work there is to be done. And one thing that writing this blog has taught me is that most people outside of drug discovery don't have an appreciation of either of those things. Nicolaou's article isn't aimed at a lay audience, of course, which makes it a little more problematic, since many of the people who can appreciate everything he's saying will already know what he's going to say. But it does round pretty much everything up into one place.

Comments (58) + TrackBacks (0) | Category: Drug Development | Drug Industry History

July 14, 2014

How to Run a Drug Project: Are There Any Rules at All?

Email This Entry

Posted by Derek

Here's an article from David Shayvitz at Forbes whose title says it all: "Should a Drug Discovery Team Ever Throw in the Towel?" The easy answer to that is "Sure". The hard part, naturally, is figuring out when.

You don’t have to be an expensive management consultant to realize that it would be helpful for the industry to kill doomed projects sooner (though all have said it).

There’s just the prickly little problem of figuring out how to do this. While it’s easy to point to expensive failures and criticize organizations for not pulling the plug sooner, it’s also true that just about every successful drug faced some legitimate existential crisis along the way — at some point during its development, there was a plausible reason to kill the program, and someone had to fight like hell to keep it going.

The question at the heart of the industry’s productivity struggles is the extent to which it’s even possible to pick the winners (or the losers), and figuring out better ways of managing this risk.

He goes on to contrast two approaches to this: one where you have a small company, focused on one thing, with the idea being that the experienced people involved will (A) be very motivated to find ways to get things to work, and (B) motivated to do something else if the writing ever does show up on the wall. The people doing the work should make the call. The other approach is to divide that up: you set things up with a project team whose mandate is to keep going, one way or another, dealing with all obstacles as best they can. Above them is a management team whose job it is to stay a bit distant from the trenches, and be ready to make the call of whether the project is still viable or not.

As Shayvitz goes on to say, quite correctly, both of these approaches can work, and both of them can run off the rails. In my view, the context of each drug discovery effort is so variable that it's probably impossible to say if one of these is truly better than the other. The people involved are a big part of that variability, too, and that makes generalizing very risky.

The big risk (in my experience) with having execution and decision-making in the same hands is that projects will run on for too long. You can always come up with more analogs to try, more experiments to run, more last-ditch efforts to take a crack at it. Coming up with those things is, I think, better than not coming up with them, because (as Shayvitz mentions) it's hard to think of a successful drug that hasn't come close to dying at least once during its development. Give up too easily, and nothing will ever work at all.

But it's a painful fact that not every project can work, no matter how gritty and determined the team. We're heading out into the unknown with these drug candidates, and we find out things that we didn't know were there to be found out. Sometimes there really is no way to get the selectivity you need with the compound series you've chosen - heck, sometimes there's no way to get it with any compound series you could possibly choose, although that takes a long time to become obvious. Sometimes the whole idea behind the project is flawed from the start: blocking Kinase X will not, in fact, alter the course of Disease Y. It just won't. The hypothesis was wrong. An execute-at-all-costs team will shrug off these fatal problems, or attempt to shrug them off, for as long as you give them money.

But there's another danger waiting when you split off the executive decision-makers. If those folks get too removed from the project (or projects) then their ability to make good decisions is impaired. Just as you can have a warped perspective when you're right on top of the problems, you can have one when you're far away from them, too. It's tempting to think that Distance = Clarity, but that's not a linear function, by any means. A little distance can certainly give you a lot of perspective, but if you keep moving out, things can start fuzzing back up again without anyone realizing what's going on.

That's true even if the managers are getting reasonably accurate reports, and we all know that that's not always the case in the real world. In many large organizations, there's a Big Monthly Meeting of some sort (or at some other regular time point) where projects are supposed to be reviewed by just those decision makers. These meetings are subject to terrible infections of Dog-And-Pony-itis. People get up to the front of the room and they tell everyone how great things are going. They minimize the flaws and paper over the mistakes. It's human nature. Anyone inclined to give a more accurate picture has a chance to see how that's going to look, when all the other projects are going Just Fine and everyone's Meeting Their Goals like it says on the form. Over time (and it may not take much time at all), the meeting floats away into its own bubble of altered reality. Managers who realize this can try to counteract it by going directly to the person running the project team in the labs, closing the office door, and asking for a verbal update on how things are really going, but sometimes people are so out of it that they mistake how things are going at the Big Monthly Meeting for what's really happening.

So yes indeed, you can (as is so often the case) screw things up in both directions. That's what makes it so hard to lay down the law about how to run a drug discovery project: there are several ways to succeed, and the ways to mess them up are beyond counting. My own bias? I prefer the small-company back-to-the-wall approach, of being ready to swerve hard and try anything to make a project work. But I'd only recommend applying that to projects with a big potential payoff - it seems silly to do that sort of thing for anything less. And I'd recommend having a few people watching the process, but from as close as they can get without being quite part of the project team themselves. Just enough to have some objectivity. Simple, eh? Getting this all balanced out is the hard part. Well, actually, the science is the hard part, but this is the hard part that we can actually do something about.

Comments (14) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Life in the Drug Labs

May 23, 2014

Two Looks At Drug Industry Productivity

Email This Entry

Posted by Derek

Matthew Herper has a really interesting story in Forbes on a new report that attempts to rank biopharma companies by their R&D abilities. Richard Evans of Sector and Sovereign Health (ex-Roche) has ranked companies not on their number of drugs, but on their early-stage innovation. He counts patents, for example, but not the later ones defending existing franchises, and he also looks to see how often these patents are cited by others. As for a company's portfolio, being early into a new therapeutic area counts for a lot more than following someone else, but at the same time, he's also trying to give points for companies that avoid "Not Invented Here" behavior (a tricky balance, I'd think). The full report can be purchased, but the parts that Herper has shared are intriguing.

Ranking the companies, he has (1) Bristol-Myers Squibb, (2) Celgene, (3) Vertex, (4) Gilead, and (5) Allergan. (Note that Allergan is currently being pursued by Valeant, who will, if they buy them, pursue their sworn vow to immediately gut the company's R&D). At the bottom of his table are (18) Novartis, (19) Regeneron, (20) Bayer, (21) Lilly, and (22) Alexion. (Note that Evans himself says that his analysis may be off for companies that have only launched one product in the ten years he's covering). I'm not sure what to make of this, to be honest, and I think what would give a better picture would be if the whole analysis were done again but only with the figures from about fifteen years ago to see if what's being measured really had an effect on the futures of the various companies. That would not be easy, but (as some of Herper's sources also say), without some kind of back-testing, it's hard to say if there's something valuable here.

You can tell that Evans himself almost certainly appreciates this issue from what he has to say about the current state of the industry and the methods used to evaluate it, so the lack of a retrospective analysis is interesting. Here's the sort of thing I mean:

Too often, Evans says, pharmaceutical executives instead use the industry’s low success rates as an argument that success is right around the corner. “A gambler that has lost everything he owned, just because he now has a strong hand doesn’t make him a good gambler,” Evans says. . .

True enough. Time and chance do indeed happeneth to them all, and many are the research organizations who've convinced themselves that they're good when they might just have been lucky. (Bad luck, on the other hand, while not everyone's favorite explanation, is still trotted out a lot more often. I suspect that AstraZeneca, during that bad period they've publicly analyzed, was sure that they were just having a bad run of the dice. After all, I'm sure that some of the higher-ups there thought that they were doing everything right, so what else could it be?)
[Chart from the Evans report: ten-year annualized net income returns plotted against R&D spending]

But there's a particular chart from this report that I want to highlight. This one (in case that caption is too small) plots ten-year annualized net income returns against R&D spending, minus the cost of R&D capital. Everything has been adjusted for taxes and inflation. And that doesn't look too good, does it? These numbers would seem to line up with Bernard Munos' figures showing that industry productivity has been relatively constant, but only by constantly increased spending per successful drug. They also fit with this 2010 analysis from Morgan Stanley, where they warned that the returns on invested capital in pharma were way too high, considering the risks of failure.

So in case you thought, for some reason - food poisoning, concussion - that things had turned around, no such luck, apparently. That brings up this recent paper in Nature Reviews Drug Discovery, though, where several authors from Boston Consulting Group try to make the case that productivity is indeed improving. They used peak sales as their measure of success, and they also believe that 2008 was the year when R&D spending started to get under control.
[Chart from the Nature Reviews Drug Discovery paper: productivity ratio of aggregate peak sales relative to R&D spending in the preceding four years]

Before 2008, the combined effects of declining value outputs and ever-increasing R&D spending drove a rapid decline in R&D productivity, with many analysts questioning whether the industry as a whole would be able to return its cost of capital on R&D spending. . .we have analysed the productivity ratio of aggregate peak sales relative to R&D spending in the preceding 4 years. From a low of 0.12 in 2008, this has more than doubled to 0.29 in 2013. Through multiple engagements with major companies, we have observed that at a relatively steady state of R&D spending across the value chain, a productivity ratio of between 0.25 and 0.35 is required for a drug developer to meet its cost of capital of ~9%. Put simply, a company spending $1 billion annually on R&D needs to generate — on average — new drug approvals with $250–350 million in peak sales every year. . . So, although not approaching the productivity ratios of the late 1990s and early 2000s, the industry moved back towards an acceptable productivity ratio overall in 2013.

I would like to hope that this is correct, but I'm really not sure. This recent improvement doesn't look like much, graphically, compared to the way that things used to be. There's also a real disagreement between these two analyses, which is apparent even though the BCG chart only goes back to 1994. Its take on the mid-1990s looks a lot better than the Evans one, and this is surely due (at least partly) to the peak-sales method of evaluation. Is that a better metric, or not? You got me. One problem with it (as the authors of this paper also admit) is that you have to use peak-sale estimates to arrive at the recent figures. So with that level of fuzz in the numbers, I don't know if their chart shows recent improvement at all (as they claim), or how much.
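To make the arithmetic in the quoted passage concrete, here's a minimal sketch of the BCG-style productivity ratio: aggregate peak sales of a year's approvals relative to R&D spending over the preceding four years. One assumption is baked in: to reconcile the paper's 0.25-0.35 band with its own worked example (a company spending $1 billion annually needing $250-350 million in peak sales per year), the spend term is read here as the average annual spend over that four-year window, since the excerpt doesn't spell out the exact normalization. All figures are illustrative, not data from the paper.

```python
# Sketch of the productivity ratio described above: one year's aggregate
# peak sales of new approvals, divided by R&D spend over the preceding
# four years -- taken here (an assumption) as the *average annual* spend
# over that window, which matches the authors' $1 bn / $250-350 M example.

def productivity_ratio(peak_sales_bn, rd_spend_by_year, approval_year):
    """Return the ratio of one year's aggregate peak sales to average
    annual R&D spend over the four preceding years ($ billions)."""
    window = [rd_spend_by_year[y] for y in range(approval_year - 4, approval_year)]
    avg_annual_spend = sum(window) / len(window)
    return peak_sales_bn / avg_annual_spend

# Hypothetical company: $1 bn/year in R&D, approvals worth $300 M peak sales.
spend = {year: 1.0 for year in range(2009, 2013)}
ratio = productivity_ratio(0.30, spend, 2013)
print(round(ratio, 2))  # 0.3 -- inside the 0.25-0.35 band the authors cite
```

On this reading, the 2008 low of 0.12 means a company spending $1 billion a year was generating only about $120 million in annual peak sales from its approvals - well short of covering a ~9% cost of capital.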

But even the BCG method would say that the industry has not been meeting its cost-of-capital needs for the last ten years or so, which is clearly not how you want to run things. If they're right, and the crawl out of the swamp has begun, then good. But I don't know why we should have managed to do that since 2008; I don't think all that much has changed. My fear is that their numbers show an improvement because of R&D cuts, in which case, we're likely going to pay for those in a few years with a smaller number of approved drugs - because, again, I don't think anyone's found any new formula to spend the money more wisely. We shall see.

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

May 20, 2014

Where the Talent Comes From

Email This Entry

Posted by Derek

I occasionally talk about the ecosystem of the drug industry being harmed by all the disruptions of recent years, and this post by Bruce Booth is exactly the sort of thing that fits that category. He's talking about how much time it takes to get experience in this field, and what's been happening to the flow of people:

Two recent events sparked my interest in this topic of where young talent develops and emerges in our industry. A good friend and “greybeard” med chemist forwarded me a note from a chemistry professor who was trying to find a spot for his “best student”, a new PhD chemist. I said we tended to not hire new graduates into our portfolio, but was saddened to hear of this star pupil’s job challenge. Shortly after that, I had dinner with a senior chemist from Big Pharma. He said the shortest-tenured chemist on his 30+ person team was a 15-year veteran. His group had shrunk in the past and had never rehired. Since hiring a “trainee” post-doc chemist “counted” as an FTE on their books, they haven’t even implemented the traditional fellowship programs that exist elsewhere. Stories like these abound.

There is indeed a steady stream of big-company veterans who depart for smaller biopharma, bringing with them their experience (and usually a desire not to spend all their time holding pre-meeting meetings and the like, fortunately). But Booth is worried about a general talent shortage that could well be coming:

The short version of the dilemma is this: biotech startups have no margin for error around very tight timelines so can’t really “train” folks in drug discovery, and because of that they rely on bigger companies as the principle source for talent; but, at the same time, bigger firms are cutting back on research hiring and training, in part while offshoring certain science roles to other geographies, and yet are looking “outside” their walls for innovation from biotechs.

While I’d argue this talent flux is fine and maybe a positive right now, it’s a classic “chicken and egg” problem for the future. Without training in bigger pharma, there’s less talent for biotech; without that talent, biotech won’t make good drugs; without good biotech drugs, there’s no innovation for pharma, and then the end is nigh.

So if Big Pharma is looking for people from the small companies while the smaller companies are looking for people from Big Pharma, it does make you wonder where the supply will eventually come from. I share some of these worries, but at the same time, I think that it's possible to learn on the job at a smaller company, in the lower-level positions, anyway. And not everyone who's working at a larger company is learning what they should be. I remember once at a previous job when we were bringing in a med-chem candidate from a big company, a guy with 8 or 9 years experience. We asked him how he got along with the people who did the assays for his projects, and he replied that well, he didn't see them much, because they were over in another building, and they weren't supposed to be hanging around there, anyway. OK, then, what about the tox or formulations people? Well, he didn't go to those meetings much, because that was something that his boss was supposed to be in charge of. And so on, and so on. What was happening was that the structure of his company was gradually crippling this guy's career. He should have known more than he did; he should have been more experienced than he really was, and the problem looked to be getting worse every year. There's plenty of blame to go around, though - not only was the structure of his research organization messing this guy up, but he himself didn't even seem to be noticing it, which was also not a good sign. This is what Booth is talking about here:

. . .the “unit of work” in drug R&D is the team, not the individual, and success is less about single expertise and more about how it gets integrated with others. In some ways, your value to the organization begins to correlate with more generalist, integrative skills rather than specialist, academic ones; with a strong R&D grounding, this “utility player” profile across drug discovery becomes increasingly valuable.

And it's very hard to learn these hard and soft things, i.e., grow these noses, inside of a startup environment with always-urgent milestones to hit in order to get the next dollop of funding, and little margin of error in the plan to get there. This is true in both bricks-and-mortar startups and virtual ones.

With the former, these lab-based biotechs can spin their wheels inefficiently if they hire too heavily from academia – the “book smart” rather than “research-street smart” folks. It’s easy to keep churning out experiments to “explore” the science – but breaking the prevailing mindset of “writing the Nature paper” versus “making a drug” takes time, and this changes what experiments you do. . .

Bruce took a poll of the R&D folks associated with his own firm's roster of startups, and found that almost all of them were trained at larger companies, which certainly says something. I wonder, though, if this current form of the ecosystem is a bit of an artifact. Times have been so tough the last ten to fifteen years that there may well be a larger proportion of big-company veterans who have made the move to smaller firms, either by choice or out of necessity. (In a similar but even more dramatic example, the vast herds of buffalo and flocks of passenger pigeons described in the 19th century were partly (or maybe largely) due to the disruption of the hunting patterns of the American Indians, who had been displaced and quite literally decimated by disease - see the book 1491 for more on this).

The other side of all this, as mentioned above, is the lack of entry-level drug discovery positions in the bigger companies. Many readers here have mentioned this over the last few years, that the passing on of knowledge and experience from the older researchers to the younger ones has been getting thoroughly disrupted (as the older ones get laid off and the younger ones don't get hired). We don't want to find ourselves in the position of Casey Stengel, looking at his expansion-team Mets and asking "Don't anybody here know how to play this game?"

Booth's post has a few rays of hope near the end - read the whole thing to find them. I continue to think that drug discovery is a valuable enough activity that the incentives will keep it alive in one form or another, but I also realize that that's no guarantee, either. We (and everyone else with a stake in the matter) have to realize that we could indeed screw it up, and that we might be well along the way to doing it.

Comments (15) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development | Drug Industry History | How To Get a Pharma Job

May 19, 2014

AstraZeneca Looks At Its Own History, And Cringes

Email This Entry

Posted by Derek

While we're talking about AstraZeneca, here's a look at their recent drug development history from the inside. The company had undertaken a complete review of its portfolio and success rates (as well they might, given how things have been going overall).

In this article, we discuss the results of a comprehensive longitudinal review of AstraZeneca's small-molecule drug projects from 2005 to 2010. The analysis allowed us to establish a framework based on the five most important technical determinants of project success and pipeline quality, which we describe as the five 'R's: the right target, the right patient, the right tissue, the right safety and the right commercial potential. A sixth factor — the right culture — is also crucial in encouraging effective decision-making based on these technical determinants. AstraZeneca is currently applying this framework to guide its R&D teams, and although it is too early to demonstrate whether this has improved the company's R&D productivity, we present our data and analysis here in the hope that it may assist the industry overall in addressing this key challenge.

That already gets things off to a bad start, in my opinion, because I really hate those alliterative "Five Whatevers" and "Three Thingies" that companies like to proclaim. And that's not just because Chairman Mao liked that stuff, although that is reason enough to wonder a bit. I think that I suffer from Catchy Slogan Intolerance, a general disinclination to believe that reality can be usefully broken down into discrete actions and principles that just all happen to start with the same letter. I think these catchphrases quantify the unquantifiable and simplify what shouldn't be simplified. The shorter, snappier, and more poster-friendly the list of recommendations, the less chance I think they have of being any actual use. Other than setting people's teeth on edge, which probably isn't the goal.

That said, this article itself does a perfectly good job of laying out many of the things that have been going wrong in the big pharma organizations. See if any of this rings a bell for you:

. . .However, with the development of high-throughput and ultra-high-throughput screening and combinatorial chemistry approaches during the 1980s and 1990s, as well as the perception that a wealth of new targets would emerge from genomics, part of this productivity issue can also be attributed to a shift of R&D organizations towards the 'industrialization' of R&D. The aim was to drive efficiency while retaining quality, but in some organizations this led to the use of quantity-based metrics to drive productivity. The hypothesis was simple: if one drug was launched for every ten candidates entering clinical development, then doubling or tripling the number of candidates entering development should double or triple the number of drugs approved. However, this did not happen; consequently, R&D costs increased while output — as measured by launched drugs — remained static.

This volume-based approach damaged not only the quality and sustainability of R&D pipelines but, more importantly, also the health of the R&D organizations and their underlying scientific curiosity. This is because the focus of scientists and clinicians moved away from the more demanding goal of thoroughly understanding disease pathophysiology and the therapeutic opportunities, and instead moved towards meeting volume-based goals and identifying an unprecedented level of back-up and 'me too' drug candidates. In such an environment, 'truth-seeking' behaviours to understand disease biology may have been over-ridden by 'progression-driven' behaviours that rewarded scientists for meeting numerical volume-based goals.

Thought so. Pause to shiver a bit (that's what I did - it seemed to help). The AZ team looked at everything that had been active during the 2005-2010 period, from early preclinical up to the end of Phase II. What they found, compared to the best figures on industry averages, was that the company looked pretty normal in the preclinical area (as measured by number of projects and their rates of progression, anyway), and that they actually had a higher-than-usual pass rate through Phase I. Phase II, though, was nasty - they had a noticeably higher failure rate, suggesting that too many projects were being allowed to get that far. And although they weren't explicitly looking beyond Phase II, the authors do note that AZ's success rate at getting drugs all the way to market was significantly lower than the rest of the industry's as well.

The biggest problem seemed to be safety and tox. This led to many outright failures, and to other cases where the human doses ended up limited to non-efficacious levels.

During preclinical testing, 75% of safety closures were compound-related (that is, they were due to 'off-target' or other properties of the compound other than its action at the primary pharmacological target) as opposed to being due to the primary pharmacology of the target. By contrast, the proportion of target-related safety closures rose substantially in the clinical phase and was responsible for almost half of the safety-related project closures. Such failures were often due to a collapse in the predicted margins between efficacious doses and safety outcomes, meaning it was not possible to achieve target engagement or patient benefit without incurring an unacceptable safety risk.

On top of this problem, an unacceptable number of compounds that made it through safety were failing in Phase II through lack of efficacy. There's a good analysis of how this seems to have happened, but a big underlying factor seems to have been the desire to keep progressing compounds to meet various targets. People kept pushing things ahead, because things had to be pushed ahead, and the projects kept scooting along the ground until they rolled off into one ravine or another.

And I think that everyone with some experience in this business will know exactly what that feels like - this is not some mysterious ailment that infected AstraZeneca, although they seem to have had a more thorough case of it than usual. Taking the time to work out what a safety flag might be telling you, understand tricky details of target engagement, or figure out the right patient population or the right clinical endpoint - these things are not always popular. And to be fair, there are a near-infinite number of reasons to slow a project down (or stop it altogether), and you can't stop every project. But AZ's experience shows, most painfully, that you can indeed stop too few of them. Here's a particularly alarming example of that:

In our analysis, another example of the impact of volume-based goals could be seen in the strategy used to select back-up drug candidates. Back-up molecules are often developed for important projects where biological confidence is high. They should be structurally diverse to mitigate the risk for the programme against compound-related issues in preclinical or early development, and/or they should confer some substantial advantage over the lead molecule. When used well, this strategy can save time and maintain the momentum of a project. However, with scientists being rewarded for the numbers of candidates coming out of the research organization, we observed multiple projects for which back-up molecules were not structurally diverse or a substantial improvement over the lead molecule. Although all back-up candidates met the chemical criteria for progression into clinical testing, and research teams were considered to have met their volume-based goals, these molecules did not contribute to the de-risking of a programme or increase project success rates. As a consequence, all back-up candidates from a 'compound family' could end up failing for the same reason as the lead compound and indeed had no higher probability of a successful outcome than the original lead molecule (Fig. 6). In one extreme case, we identified a project with seven back-up molecules in the family, all of which were regarded as a successful candidate delivery yet they all failed owing to the same preclinical toxicology finding. This overuse of back-up compounds resulted in a highly disproportionate number of back-up candidates in the portfolio. At the time of writing, approximately 50% of the AstraZeneca portfolio was composed of back-up molecules.

I'm glad this paper exists, since it can serve as a glowing, pulsing bad example to other organizations (which I'm sure was the intention of its authors, actually). This is clearly not the way to do things, but it's also easy for a big R&D effort to slip into this sort of behavior, while all the time thinking that it's doing the right things for the right reasons. Stay alert! The lessons are the ones you'd expect:

An underlying theme that ran through the interviews with our project teams was how the need to maintain portfolio volume led to individual and team rewards being tied to project progression rather than 'truth-seeking' behaviour. The scientists and clinicians within the project teams need to believe that their personal success and careers are not intrinsically linked to project progression but to scientific quality, smart risk-taking and good decision-making.

But this is not the low energy state of a big organization. This sort of behavior has to be specifically encouraged and rewarded, or it will disappear, to be replaced by. . .well, you all know what it's replaced by. The sort of stuff detailed in the paper, and possibly even worse. What's frustrating is that none of these are new problems that AZ had to discover. I can bring up my own evidence from twelve years ago, and believe me, I was late to the party complaining about this sort of thing. Don't ever think that it can't happen some more.

Comments (46) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History

May 15, 2014

The Daily Show on Finding New Antibiotics

Email This Entry

Posted by Derek

A reader sent along news of this interview on "The Daily Show" with Martin Blaser of NYU. He has a book out, Missing Microbes, on the overuse of antibiotics and the effects on various microbiomes. And I think he's got a lot of good points - we should only be exerting selection pressure where we have to, not (for example) slapping triclosan on every surface because it somehow makes consumers feel "germ-free". And there are (and always have been) too many antibiotics dispensed for what turn out to be viral infections, for which they will, naturally, do no good at all and probably some harm.

But Dr. Blaser, though an expert on bacteria, does not seem to be an expert on discovering drugs to kill bacteria. I've generated a transcript of part of the interview, starting around the five-minute mark, which went like this:

Stewart: Isn't there some way, that, the antibiotics can be used to kill the strep, but there can be some way of rejuvenating the microbiome that was doing all those other jobs?

Blaser: Well, that's what we need to do. We need to make narrow-spectrum antibiotics. We have broad-spectrum, that attack everything, but we have the science that we could develop narrow-spectrum antibiotics that will just target the one organism - maybe it's strep, maybe it's a different organism - but then we need the diagnostics, so that somebody going to the doctor, they say "You have a virus" "You have a bacteria", if you have a bacteria, which one is it?

Stewart: Now isn't this where the genome-type projects are going? Because finding the genetic makeup of these bacteria, won't that allow us to target these things more specifically?

Blaser: Yeah. We have so much genomic information - we can harness that to make better medicine. . .

Stewart: Who would do the thing you're talking about, come up with the targeted - is it drug companies, could it, like, only be done through the CDC, who would do that. . .

Blaser: That's what we need taxes for. That's our tax dollars. Just like when we need taxes to build the road that everybody uses, we need to develop the drugs that our kids and our grandkids are going to use so that these epidemics could be stopped.

Stewart: Let's say, could there be a Manhattan Project, since that's the catch-all for these types of "We're going to put us on the moon" - let's say ten years, is that a realistic goal?

Blaser: I think it is. I think it is. We need both diagnostics, we need narrow-spectrum agents, and we have to change the economic base of how we assess illness in kids and how we treat kids and how we pay doctors. . .

First off, from a drug discovery perspective, a narrow-spectrum antibiotic, one that kills only (say) a particular genus of bacterium, has several big problems: it's even harder to discover than a broader-spectrum agent, its market is much smaller, it's much harder to prescribe usefully, and its lifetime as a drug is shorter. (Other than that, it's fine). The reasons for these are as follows:

Most antibiotic targets are enzyme systems peculiar to bacteria (as compared to eukaryotes like us), but such targets are shared across a lot of bacteria. They tend to be aimed at things like membrane synthesis and integrity (bacterial membranes are rather different than those of animals and plants), or target features of DNA handling that are found in different forms due to bacteria having no nuclei, and so on. Killing bacteria with mechanisms that are also found in human cells is possible, but it's a rough way to go: a drug of that kind would be similar to a classic chemotherapy agent, killing the fast-dividing bacteria (in theory) just before killing the patient.

So finding a Streptococcus-only drug is a very tall order. You'd have to find some target-based difference between those bacteria and all their close relatives, and I can tell you that we don't know enough about bacterial biochemistry to sort things out quite that well. Stewart brings up genomic efforts, and points to him for it, because that's a completely reasonable suggestion. Unfortunately, it's a reasonable suggestion from about 1996. The first complete bacterial genomes became available in the late 1990s, and have singularly failed to produce any new targeted antibiotics whatsoever. The best reference I can send people to is the GSK "Drugs For Bad Bugs" paper, which shows just what happened (and not just at GSK) to the new frontier of new bacterial targets. Update: see also this excellent overview. A lot of companies tried this, and got nowhere. It did indeed seem possible that sequencing bacteria would give us all sorts of new ways to target them, but that's not how it's worked out in practice. Blaser's interview gives the impression that none of this has happened yet, but believe me, it has.

The market for a narrow-spectrum agent would necessarily be smaller, by design, but the cost of finding it would (as mentioned above) be greater, so the final drug would have to cost a great deal per dose - more than health insurance would want to pay, given the availability of broad-spectrum agents at far lower prices. It could not be prescribed without positively identifying the infectious agent - which adds to the cost of treatment, too. Without faster and more accurate ways to do this (which Blaser rightly notes as something we don't have), the barriers to developing such a drug are even higher.

And the development of resistance would surely take such a drug out of usefulness even faster, since the resistance plasmids would only have to spread between very closely related bacteria, who are swapping genes at great speed. I understand why Blaser (and others) would like to have more targeted agents, so as not to plow up the beneficial microbiome every time a patient is treated, but we'd need a lot of them, and we'd need new ones all the time. This in a world where we can't even seem to discover the standard type of antibiotic.

And not for lack of trying, either. There's a persistent explanation for the state of antibiotic therapy that blames drug companies for supposedly walking away from the field. This has the cause and effect turned around. It's true that some of them have given up working in the area (along with quite a few other areas), but they left because nothing was working. The companies that stayed the course have explored, in great detail and at great expense, the problem that nothing much is working. If there ever was a field of drug discovery where the low-hanging fruit has been picked clean, it is antibiotic research. You have to use binoculars to convince yourself that there's any more fruit up there at all. I wish that weren't so, very much. But it is. Bacteria are hard to kill.

So the talk later on in the interview of spending some tax dollars and getting a bunch of great new antibiotics in ten years is, unfortunately, a happy fantasy. For one thing, getting a single new drug onto the market in only ten years from the starting pistol is very close to impossible, in any therapeutic area. The drug industry would be in much better shape if that weren't so, but here we are. In that section, Jon Stewart actually brings to life one of the reasons I have this blog: he doesn't know where drugs come from, and that's no disgrace, because hardly anyone else knows, either.

Comments (58) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Infectious Diseases

April 16, 2014

One and Done

Email This Entry

Posted by Derek

Matthew Herper has a good piece in Forbes on Robert Duggan and Pharmacyclics. In the course of it, we learn this interesting (and perhaps disturbing) bit of information:

Second acts in the biotech business are hard: 56% of the drug firms that received an FDA approval between 1950 and 2011 did so only once.

And I hate to say it, but the article does not inspire confidence in Duggan's ability to break that trend. It's surely no coincidence that the profile mentions in its first paragraph that he's a major donor to the Church of Scientology, and maybe it's just my own prejudices, but when I hear that, I'm pretty much done with thinking that a person can make rational decisions.

Comments (24) + TrackBacks (0) | Category: Drug Industry History

April 4, 2014

Ancient Modeling

Email This Entry

Posted by Derek

I really got a kick out of this picture that Wavefunction put up on Twitter last night. It's from a 1981 article in Fortune, and you'll just have to see the quality of the computer graphics to really appreciate it.

That sort of thing has hurt computer-aided drug design a vast amount over the years. It's safe to say that in 1981, Merck scientists did not (as the article asserts) "design drugs and check out their properties without leaving their consoles". It's 2014 and we can't do it like that yet. Whoever wrote that article, though, picked those ideas up from the people at Merck, with their fuzzy black-and-white monitor shots of DNA from three angles. (An old Evans and Sutherland terminal?) And who knows, some of the Merck folks may have even believed that they were close to doing it.

But computational power, for the most part, only helps you out when you already know how to calculate something. Then it does it for you faster. And when people are impressed (as they should be) with all that processing power can do for us now, from smart phones on up, they should still realize that these things are examples of fast, smooth, well-optimized versions of things that we know how to calculate. You could write down everything that's going on inside a smart phone with pencil and paper, and show exactly what it's working out when it displays this pixel here, that pixel there, this call to that subroutine, which calculates the value for that parameter over there as the screen responds to the presence of your finger, and so on. It would be wildly tedious, but you could do it, given time. Someone, after all, had to program all that stuff, and programming steps can be written down.

The programs that drove those old DNA pictures could be written down, too, of course, and in a lot less space. But while the values for which pixels to light up on the CRT display were calculated exactly, the calculations behind those were (and are) a different matter. A very precise-looking picture can be drawn and animated of an animal that does not exist, and there are a lot of ways to draw animals that do not exist. The horse on your screen might look exact in every detail, except with a paisley hide and purple hooves (my daughter would gladly pay to ride one). Or it might have a platypus bill instead of a muzzle. Or look just like a horse from outside, but actually be filled with helium, because your program doesn't know how to handle horse innards. You get the idea.

The same for DNA, or a protein target. In 1981, figuring out exactly what happened as a transcription factor approached a section of DNA was not possible. Not to the degree that a drug designer would need. The changing conformation of the protein as it approaches the electrostatic field of the charged phosphate residues, what to do with the water molecules between the two as they come closer, the first binding event (what is it?) between the transcription factor and the double helix, leading to a cascade of tradeoffs between entropy and enthalpy as the two biomolecules adjust to each other in an intricate tandem dance down to a lower energy state. . .that stuff is hard. It's still hard. We don't know how to model some of those things well enough, and the (as yet unavoidable) errors and uncertainties in each step accumulate the further you go along. We're much better at it than we used to be, and getting better all the time, but there's a good way to go yet.

But while all that's true, I'm almost certainly reading too much into that old picture. The folks at Merck probably just put one of their more impressive-looking things up on the screen for the Fortune reporter, and hey, everyone's heard of DNA. I really don't think that anyone at Merck was targeting protein-DNA interactions 33 years ago (and if they were, they splintered their lance against that one, big-time). But the reporter came away with the impression that the age of computer-designed drugs was at hand, and in the years since, plenty of other people have seen progressively snazzier graphics and thought the same thing. And it's hurt the cause of modeling for them to think that, because the higher the expectations get, the harder it is to come back to reality.

Update: I had this originally as coming from a Forbes article; it was actually in Fortune.

Comments (22) + TrackBacks (0) | Category: Drug Industry History | In Silico

March 31, 2014

Where The Hot Drugs Come From: Somewhere Else

Email This Entry

Posted by Derek

Over at LifeSciVC, there's a useful look at how many drugs are coming into the larger companies via outside deals. As you might have guessed, the answer is "a lot". Looking at a Goldman Sachs list of "ten drugs that could transform the industry", Bruce Booth says:

By my quick review, it appears as though ~75% of these drugs originated at firms different from the company that owns them today (or owns most of the asset today) – either via in-licensing deal or via corporate acquisitions. Savvy business and corporate development strategies drove the bulk of the list. . .I suspect that in a review of the entire late stage industry pipeline, the imbalanced ratio of external:internal sourcing would largely be intact.

He has details on the ten drugs that Goldman is listing, and on the portfolios of several of the big outfits in the industry, and I think he's right. It would be very instructive to know what the failure rate, industry-wide, of inlicensed compounds like this might be. My guess is that it's still high, but not quite as high as the average for all programs. The inlicensed compounds have had, in theory, more than one set of eyes go over them, and someone had to reach into their wallet after seeing the data, so you'd think that they have to be in a little bit better shape. But a majority still surely fail, given that the industry's rate overall is close to 90% clinical failure (the math doesn't add up if you assume that the inlicensed failure rate is much lower than that).
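That parenthetical is easy to check on the back of an envelope. The 90% overall clinical failure rate is from the post; the one-third in-licensed share and the trial values below are made-up numbers purely for illustration. The point: since the overall rate is a weighted average of the in-licensed and internal rates, assuming in-licensed compounds fail much less often than 90% quickly pushes the implied internal failure rate past 100%, which is impossible.

```python
def internal_failure_rate(overall, inlicensed, inlicensed_share):
    """Solve: overall = share * inlicensed + (1 - share) * internal,
    for the internal failure rate. All inputs are fractions (0-1)."""
    return (overall - inlicensed_share * inlicensed) / (1 - inlicensed_share)

# Illustrative assumption: a third of clinical candidates are in-licensed.
share = 1 / 3
overall = 0.90  # ~90% overall clinical failure, per the post

for inlicensed in (0.85, 0.75, 0.60):
    internal = internal_failure_rate(overall, inlicensed, share)
    print(f"in-licensed fail {inlicensed:.0%} -> implied internal fail {internal:.1%}")
# The 60% case implies an internal failure rate above 100% - impossible,
# so the in-licensed rate can only be modestly below the overall average.
```

Under these (hypothetical) numbers, even an 85% in-licensed failure rate already pushes the internal rate to 92.5%; anything much more optimistic breaks the arithmetic.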

Also of great interest is the "transformational" aspect. We can assume, I think, that most of the inlicensed compounds came from smaller companies - that's certainly how it looks on Bruce's list. This analysis suggested that smaller companies (and university-derived work) produced more innovative drugs than internal big-company programs, and these numbers might well be telling us the same thing.

This topic came up the last time I discussed a post from Bruce, and Bernard Munos suggested in 2009 that this might be the case as well. It's too simplistic to just say Small Companies Good, Big Companies Bad, because there are some real counterexamples to both of those assertions. But overall, averaged over the industry, there might be something to it.

Comments (26) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

March 20, 2014

Small Molecule Chemistry's "Limited Utility"?

Email This Entry

Posted by Derek

Over at LifeSciVC, guest blogger Jonathan Montagu talks about small molecules in drug discovery, and how we might move beyond them. Many of the themes he hits have come up around here, understandably - figuring why (and how) some huge molecules manage to have good PK properties, exploiting "natural-product-like" chemical space (again, if we can figure out a good way to do that), working with unusual mechanisms (allosteric sites, covalent inhibitors and probes), and so on. Well worth a read, even if he's more sanguine about structure-based drug discovery than I am. Most people are, come to think of it.

His take is very similar to what I've been telling people in my "state of drug discovery" presentations (at Illinois, most recently) - that we medicinal chemists need to stretch our definitions and move into biomolecule/small molecule hybrids and the like. These things need the techniques of organic chemistry, and we should be the people supplying them. Montagu goes even further than I do, saying that ". . .I believe that small molecule chemistry, as traditionally defined and practiced, has limited utility in today’s world." That may or may not be correct at the moment, but I'm willing to bet that it's going to become more and more correct in the future. We should plan accordingly.

Comments (31) + TrackBacks (0) | Category: Chemical Biology | Chemical News | Drug Development | Drug Industry History

March 10, 2014

Startups vs. Big Pharma

Email This Entry

Posted by Derek

Bruce Booth has opened up the number of authors who will be posting at LifeSciVC, and there's an interesting post up on startups now from Atlas Venture's Mike Gilman. Edit: nope, my mistake. This is Bruce Booth's! Here are some of his conclusions:

here’s a list of a few of the perceived advantages of Pharma R&D today:

Almost unlimited access to all the latest technologies across drug discovery, ADME, toxicology, and clinical development, including all the latest capital equipment, compound libraries, antibody approaches, etc
International reach to support global clinical and regulatory processes to fully enable drug development programs
Deep and insightful commercial input into the markets, the pulse of the practicing physician, and the payors on what’s the right product profile
Gigantic cash flow streams that provide 15-20% of the topline to support a largely “block grant” model of R&D (fixing R&D spend to the percentage of sales)
Decades of institutional memory providing the scar tissue around what works and what doesn’t (e.g., insight into project attrition at massive scale)
This is a solid list of advantages, and they all have real merit.

But like the biblical Goliath, whose size and strength appeared to the Israelites as great advantages, they are also the roots of Pharma’s disadvantages. All of these derive their value as inward and relatively insular forces. Institutional memory in particular can serve to either unlock better paths to innovation or to stifle those that want to explore new ways of doing things. Lipinski’s Rules, hERG liabilities, and other candidate guidelines derived from legacy “survivor bias”-style analyses are case examples of this tension – unfortunately the stifling aspects rather than the unlocking ones often triumph in big firms.

Further, these impressive corporate R&D “advantages” are of course the product of Big Pharma’s path-dependency: single blockbuster successes discovered in the ‘60s-70s led to early mergers in the ‘80-90s, and bigger mega-mergers in the late 90s-00s, to form the organizations of today. Bigger and bigger R&D budgets buying up more and more “things” in the quest for improved productivity. In a sense, the growth drivers underlying these mergers acted like the excessive hGH coming from Goliath’s pituitary – the scale and constant growth pressure was a product of a disease, not a design.

He makes the point earlier on that constraints on spending, while they may not feel like a good thing, may actually be one. More money and resources often lead to box-checking behavior and a feeling of "Since we can do this, we should". There's some institutional political stuff going on there, of course - if you've checked off all the boxes that everyone agrees are needed for success, and you still don't succeed, then it can't be your fault. Or anyone's. That's not to say that all failures have to be someone's fault, but this sort of thing obscures those times when there's actual blame to go around.

The post also goes into another related problem: if you have all these resources, that you've paid for (and are continuing to pay for to keep running), then if they're not being used, things look like they're being wasted. They probably are being wasted. So stuff gets shoveled on, to keep everything running at all times. It's certainly in the interest of the people in those areas to keep working (and to be seen to be keeping working). It's in the interest of the people who manage those areas, and of the ones who advocated for bringing in whatever process or technology. But these can be perverse incentives.

The main problem I have with the post is the opening analogy to the recent Mars mission launched by India. I have to salute the people behind the Mangalyaan mission - it's a real accomplishment, and if it works, India will be only the fourth nation (or group of nations) to reach Mars. But going on about how cheaply it was done compared to the simultaneous MAVEN mission from the US isn't a good comparison. Yes, the Indian mission is eight times cheaper. But it has one quarter the payload, and is targeted to last about half as long, and that's leaving out any consideration of the actual instrumental capabilities. It's also worth noting that the primary goal of the Mangalyaan mission is to demonstrate that India can pull it off; any data from Mars are (officially) secondary. I'd find the arguments about small and large Pharma more convincing without this comparison, to be honest.

But the larger point stands: if you had to start discovering drugs from scratch, knowing what's happened to other, larger organizations, are there things you would do differently? Emphasize more? Avoid altogether? A startup allows you to put these ideas into practice. Retrofitting them onto a larger, older company is nearly impossible.

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

February 24, 2014

Big Drug Mergers: So They're OK, Right?

Email This Entry

Posted by Derek

Several people sent along this article from McKinsey Consulting, on "Why pharma megamergers work". They're looking (as you would expect) at shareholder value, shareholder value, and shareholder value as the main measurements of whether a deal "worked" or not. But John LaMattina, who lived through the Pfizer megamerger era and had a ringside seat, would like to differ with their analysis:

The disruption that the integration process causes is immeasurable. Winners and losers are created as a result of the leader selection process. Scars form as different research teams battle over which projects are superior to others. The angst even extends to one’s home life as people worry if their site will be closed and that they’ll be unemployed or, at best, be asked to uproot their families halfway across the country to a new research location. In such a situation, rumors are rife and speculation rampant. Focus that should be on science inevitably get diverted to one’s personal situation. This isn’t something that lasts just a few weeks. Often the integration process can take as much as a year.

The impact of these changes are not immediate. Rather, they take some years to become apparent. The Pfizer pipeline of experimental medicines, as published on its website, is about 60% of its peak about a decade ago, despite these acquisitions. Clearly, a company’s success isn’t assured by numbers, but one’s chances are enhanced by more R&D opportunities. I would argue these mergers have taken a toll on the R&D organization that wasn’t anticipated a decade ago.

Well, there have been naysayers along the way. "I think the Pfizer-Wyeth merger is a bad idea which will do bad things". "I'm deeply skeptical" is a comment from 2002. And here's 2008: "Pfizer is going to be having a rough time of it for years to come".

But here's where McKinsey's worldview comes in. Look at that last statement of mine, from 2008. If you just look at the stock since that date, well, I've been full of crap, haven't I? PFE has definitely outperformed the S&P 500 since the summer of 2008, and especially since mid-2011. There's your shareholder value right there, and what else is there in this life? But what might they have done, and what might the companies that they bought and pillaged have done, over the years? We'll never know. Things that don't happen, drugs that don't get discovered - they make no sound at all.

Comments (37) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

February 11, 2014

Drug Discovery in India

Email This Entry

Posted by Derek

Molecular biologist Swapnika Ramu, a reader from India, sends along a worthwhile (and tough) question. She says that after her PhD (done in the US), her return to India has made her "less than optimistic" about the current state of drug discovery there. (Links in the quote below have been added by me, not her.)

Firstly, there isn't much by way of new drug development in India. Secondly, as you have discussed many times on your blog. . .drug pricing in India remains highly contentious, especially with the recent patent disputes. Much of the public discourse descends into anti-big pharma rhetoric, and there is little to no reasoned debate about how such issues should be resolved. . .

I would like to hear your opinion on what model of drug discovery you think a developing nation like India should adopt, given the constraints of finance and a limited talent pool. Target-based drug discovery was the approach that my previous company adopted, and not surprisingly this turned out to be a very expensive strategy that ultimately offered very limited success. Clearly, India cannot keep depending upon Western pharma companies to do all the heavy lifting when it comes to developing new drugs, simply to produce generic versions for the Indian public. The fact that several patents are being challenged in Indian courts would make pharma skittish about the Indian market, which is even more of a concern if we do not have a strong drug discovery ecosystem of our own. Since there isn't a robust VC-based funding mechanism, what do you think would be a good approach to spurring innovative drug discovery in the Indian context?

Well, that is a hard one. My own opinion is that India has a limited talent pool only as compared to Western Europe or the US - the country still has a lot more trained chemists and biologists than most other places. It's true, though, that the numbers don't tell the story very well. The best people from India are very, very good, but there are (from what I can see) a lot of poorly trained ones with degrees that seem (at least to me) worth very little. Even so, you've still got a really substantial number of real scientists, and I've no doubt that India could have several discovery-driven drug companies if the financing were easier to come by (and the IP situation a bit less murky - those two factors are surely related). Whether it would have those, or even should, is another question.

As has been clear for a while, the Big Pharma model has its problems. Several players are in danger of falling out of the ranks (Lilly, AstraZeneca), and I don't really see anyone rising up to replace them. The companies that have grown to that size in the last thirty years mostly seem to be biotech-driven (Amgen, Biogen, Genentech as was, etc.).

So is that the answer? Should Indian companies try to work more in that direction than in small molecule drugs? Problem is, the barriers to entry in biotech-derived drugs are higher, and that strategy perhaps plays less to the country's traditional strengths in chemistry. But in the same way that even less-developed countries are trying to skip over the landline era of telephones and go straight to wireless, maybe India should try skipping over small molecules. I do hate to write that, but it's not a completely crazy suggestion.

But biomolecule or small organic, to get a lot of small companies going in India (and you would need a lot, given the odds) you would need a VC culture, which isn't there yet. The alternative (and it's doubtless a real temptation for some officials) would be for the government to get involved to try to start something, but I would have very low hopes for that, especially given the well-known inefficiencies of the Indian bureaucracy.

Overall, I'm not sure if there's a way for most countries not to rely on foreign companies for most (or all) of the new drugs that come along. Honestly, the US is the only country in the world that might be able to get along with only its own home-discovered pharmacopeia, and it would still be a terrible strain to lose the European (and Japanese) discoveries. Even the likes of Japan, Switzerland, and Germany use, for the most part, drugs that were discovered outside their own countries.

And in the bigger picture, we might be looking at a good old David Ricardo-style case of comparative advantage. It sure isn't cheap to discover a new drug in Boston, San Francisco, Basel, etc., but compared to the expense of getting pharma research in Hyderabad up to speed, maybe it's not quite as bad as it looks. In the longer term, I think that India, China, and a few other countries will end up with more totally R&D-driven biomedical research companies of their own, because the opportunities are still coming along, discoveries are still being made, and there are entrepreneurial types who may well feel like taking their chances on them. But it could take a lot longer than some people would like, particularly researchers (like Swapnika Ramu) who are there right now. The best hope I can offer is that Indian entrepreneurs should keep their eyes out for technologies and markets that are new enough (and unexplored enough) so that they're competing on a more level playing field. Trying to build your own Pfizer is a bad idea - heck, the people who built Pfizer seem to be experiencing buyer's remorse themselves.

Comments (30) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

January 10, 2014

A New Look At Clinical Attrition

Email This Entry

Posted by Derek

Thanks to this new article in Nature Biotechnology, we have recent data on the failure rates in drug discovery. Unfortunately, this means that we have recent data on the failure rates in drug discovery, and the news is not good.

The study is the largest and most recent of its kind, examining success rates of 835 drug developers, including biotech companies as well as specialty and large pharmaceutical firms from 2003 to 2011. Success rates for over 7,300 independent drug development paths are analyzed by clinical phase, molecule type, disease area and lead versus nonlead indication status. . .Unlike many previous studies that reported clinical development success rates for large pharmaceutical companies, this study provides a benchmark for the broader drug development industry by including small public and private biotech companies and specialty pharmaceutical firms. The aim is to incorporate data from a wider range of clinical development organizations, as well as drug modalities and targets. . .

To illustrate the importance of using all indications to determine success rates, consider this scenario. An antibody is developed in four cancer indications, and all four indications transition successfully from phase 1 to phase 3, but three fail in phase 3 and only one succeeds in gaining FDA approval. Many prior studies reported this as 100% success, whereas our study differentiates the results as 25% success for all indications, and 100% success for the lead indication. Considering the cost and time spent on the three failed phase 3 indications, we believe including all 'development paths' more accurately reflects success and R&D productivity in drug development.

So what do they find? 10% of all indications in Phase I eventually make it through to FDA approval, which is in line with what most people think. Failure rates are in the 30% range in Phase I, around 60% in Phase II, 30 to 40% in Phase III, and in the teens at the NDA-to-approval stage. Broken out by drug class (antibody, peptide, small molecule, vaccine, etc.), the class with the most brutal attrition is (you guessed it) small molecules: slightly over 92% of those entering Phase I did not make it to approval.
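Those per-phase numbers compound multiplicatively into the headline figure. A quick sketch of the arithmetic (the per-phase success rates below are illustrative round numbers consistent with the ranges above, not the paper's exact values):

```python
from math import prod

def cumulative_success(phase_rates):
    """Overall probability of approval for a drug entering Phase I:
    the product of the per-phase success (transition) rates."""
    return prod(phase_rates)

# Illustrative success rates for Phase I, Phase II, Phase III, and NDA review,
# i.e. failure rates of roughly 35%, 68%, 40%, and 17%.
rates = [0.65, 0.32, 0.60, 0.83]
print(f"{cumulative_success(rates):.1%}")  # roughly 10%, matching the headline number
```

The striking consequence is that even a big improvement in one phase moves the overall number only modestly, since the other three attrition steps still apply.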

If you look at things by therapeutic area, oncology has the roughest row to hoe with over 93% failure. Its failure rate is still over 50% in Phase III, which is particularly hair-raising. Infectious disease, at the other end of the scale, is merely a bit over 83%. Phase II is where the different diseases really separate out by chance of success, which makes sense.

Overall, this is a somewhat gloomier picture than we had before, and the authors have reasonable explanations for it:

Factors contributing to lower success rates found in this study include the large number of small biotech companies represented in the data, more recent time frame (2003–2011) and higher regulatory hurdles for new drugs. Small biotech companies tend to develop riskier, less validated drug classes and targets, and are more likely to have less experienced development teams and fewer resources than large pharmaceutical corporations. The past nine-year period has been a time of increased clinical trial cost and complexity for all drug development sponsors, and this likely contributes to the lower success rates than previous periods. In addition, an increasing number of diseases have higher scientific and regulatory hurdles as the standard of care has improved over the past decade.

So there we have it - if anyone wants numbers, these are the numbers. The questions are still out there for all of us, though: how sustainable is a business with these kinds of failure rates? How feasible are the pricing strategies that can accommodate them? And what will break out of this system, anyway?

Comments (12) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History

December 4, 2013

The Old AstraZeneca Charnwood Site

Email This Entry

Posted by Derek

Here's an effort to find tenants for the former AstraZeneca campus at Charnwood. A few buildings are being demolished to make room, and they're hoping for biomedical researchers to move in. I hope that works; it seems like a good research site. I'm not sure that trying to sell it as ". . .perfectly located between Leicester, Nottingham and Derby" is as good a pitch as can be made, but there are worse ones.

Comments (19) + TrackBacks (0) | Category: Drug Industry History

December 3, 2013

Merck's Drug Development in The New Yorker

Email This Entry

Posted by Derek

The New Yorker has an article about Merck's discovery and development of suvorexant, their orexin antagonist for insomnia. It also goes into the (not completely reassuring) history of zolpidem (known under the brand name of Ambien), which is the main (and generic) competitor for any new sleep drug.

The piece is pretty accurate about drug research, I have to say:

John Renger, the Merck neuroscientist, has a homemade, mocked-up advertisement for suvorexant pinned to the wall outside his ground-floor office, on a Merck campus in West Point, Pennsylvania. A woman in a darkened room looks unhappily at an alarm clock. It’s 4 a.m. The ad reads, “Restoring Balance.”

The shelves of Renger’s office are filled with small glass trophies. At Merck, these are handed out when chemicals in drug development hit various points on the path to market: they’re celebrations in the face of likely failure. Renger showed me one. Engraved “MK-4305 PCC 2006,” it commemorated the day, seven years ago, when a promising compound was honored with an MK code; it had been cleared for testing on humans. Two years later, MK-4305 became suvorexant. If suvorexant reaches pharmacies, it will have been renamed again—perhaps with three soothing syllables (Valium, Halcion, Ambien).

“We fail so often, even the milestones count for us,” Renger said, laughing. “Think of the number of people who work in the industry. How many get to develop a drug that goes all the way? Probably fewer than ten per cent.”

I well recall when my last company closed up shop - people in one wing were taking those things and lining them up out on a window shelf in the hallway, trying to see how far they could make them reach. Admittedly, they bulked out the lineup with Employee Recognition Awards and Extra Teamwork awards, but there were plenty of oddly shaped clear resin thingies out there, too.

The article also has a good short history of orexin drug development, and it happens just the way I remember it - first, a potential obesity therapy, then sleep disorders (after it was discovered that a strain of narcoleptic dogs lacked functional orexin receptors).

Mignot recently recalled a videoconference that he had with Merck scientists in 1999, a day or two before he published a paper on narcoleptic dogs. (He has never worked for Merck, but at that point he was contemplating a commercial partnership.) When he shared his results, it created an instant commotion, as if he’d “put a foot into an ants’ nest.” Not long afterward, Mignot and his team reported that narcoleptic humans lacked not orexin receptors, like dogs, but orexin itself. In narcoleptic humans, the cells that produce orexin have been destroyed, probably because of an autoimmune response.

Orexin seemed to be essential for fending off sleep, and this changed how one might think of sleep. We know why we eat, drink, and breathe—to keep the internal state of the body adjusted. But sleep is a scientific puzzle. It may enable next-day activity, but that doesn’t explain why rats deprived of sleep don’t just tire; they die, within a couple of weeks. Orexin seemed to turn notions of sleep and arousal upside down. If orexin turns on a light in the brain, then perhaps one could think of dark as the brain’s natural state. “What is sleep?” might be a less profitable question than “What is awake?”

There's also a lot of good coverage of the drug's passage through the FDA, particularly the hearing where the agency and Merck argued about the dose. (The FDA was inclined towards a lower 10-mg tablet, but Merck feared that this wouldn't be enough to be effective in enough patients, and had no desire to launch a drug that would get the reputation of not doing very much).

A few weeks later, the F.D.A. wrote to Merck. The letter encouraged the company to revise its application, making ten milligrams the drug’s starting dose. Merck could also include doses of fifteen and twenty milligrams, for people who tried the starting dose and found it unhelpful. This summer, Rick Derrickson designed a ten-milligram tablet: small, round, and green. Several hundred of these tablets now sit on shelves, in rooms set at various temperatures and humidity levels; the tablets are regularly inspected for signs of disintegration.

The F.D.A.’s decision left Merck facing an unusual challenge. In the Phase II trial, this dose of suvorexant had helped to turn off the orexin system in the brains of insomniacs, and it had extended sleep, but its impact didn’t register with users. It worked, but who would notice? Still, suvorexant had a good story—the brain was being targeted in a genuinely innovative way—and pharmaceutical companies are very skilled at selling stories.

Merck has told investors that it intends to seek approval for the new doses next year. I recently asked John Renger how everyday insomniacs would respond to ten milligrams of suvorexant. He responded, “This is a great question.”

There are, naturally, a few shots at the drug industry throughout the article. But it's not like our industry doesn't deserve a few now and then. Overall, it's a good writeup, I'd say, and gets across the later stages of drug development pretty well. The earlier stages are glossed over a bit, by comparison. If the New Yorker would like for me to tell them about those parts sometime, I'm game.

Comments (28) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History | The Central Nervous System

November 25, 2013

Lipinski's Anchor

Email This Entry

Posted by Derek

Michael Shultz of Novartis is back with more thoughts on how we assign numbers to drug candidates. Previously, he's written about the mathematical wrongness of many of the favorite metrics (such as ligand efficiency), in a paper that stirred up plenty of comment.

His new piece in ACS Medicinal Chemistry Letters is well worth a look, although I confess that (for me) it seemed to end just when it was getting started. But that's the limitation of a Viewpoint article for a subject with this much detail in it.

Shultz makes some very good points by referring to Daniel Kahneman's Thinking, Fast and Slow, a book that's come up several times around here as well (in both posts and comments). The key concept here is called "attribute substitution", which is the mental process by which we take a complex situation, which we find mentally unworkable, and substitute some other scheme that we can deal with. We then convince ourselves, often quickly, silently, and without realizing that we're doing it, that we now have a handle on the situation, just because we now have something in our heads that is more understandable. That "Ah, now I get it" feeling is often a sign that you're making headway on some tough subject, but you can also get it from understanding something that doesn't actually help you with the problem at all.

And I'd say that this is the take-home for this whole Viewpoint article, that we medicinal chemists are fooling ourselves when we use ligand efficiency and similar metrics to try to understand what's going on with our drug candidates. Shultz goes on to discuss what he calls "Lipinski's Anchor". Anchoring is another concept out of Thinking, Fast and Slow, and here's the application:

The authors of the ‘rules of 5’ were keenly aware of their target audience (medicinal chemists) and “deliberately excluded equations and regression coefficients...at the expense of a loss of detail.” One of the greatest misinterpretations of this paper was that these alerts were for drug-likeness. The authors examined the World Drug Index (WDI) and applied several filters to identify 2245 drugs that had at least entered phase II clinical development. Applying a roughly 90% cutoff for property distribution, the authors identified four parameters (MW, logP, hydrogen bond donors, and hydrogen bond acceptors) that were hypothesized to influence solubility and permeability based on their difference from the remainder of the WDI. When judging probability, people rely on representativeness heuristics (a description that sounds highly plausible), while base-rate frequency is often ignored. When proposing oral drug-like properties, the Gaussian distribution of properties was believed, de facto, to represent the ability to achieve oral bioavailability. An anchoring effect is when a number is considered before estimating an unknown value and the original number significantly influences future estimates. When a simple, specific, and plausible MW of 500 was given as cutoff for oral drugs, this became the mother of all medicinal chemistry anchors.

But how valid are molecular weight cutoffs, anyway? That's a topic that's come up around here a few times, too, as well it should. Comparisons of the properties of orally available drugs across their various stages of development seem to suggest that such measurements converge on what we feel are the "right" values, but as Shultz points out, there could be other reasons for the data to look that way. And he makes this recommendation: "Since the average MW of approved oral drugs has been increasing while the failure rate due to PK/bioavailability has been decreasing, the hypothesis linking size and bioavailability should be reconsidered."

I particularly like another line, which could probably serve as the take-home message for the whole piece: "A clear understanding of probabilities in drug discovery is impossible due to the large number of known and unknown variables." I agree. And I think that's the root of the problem, because a lot of people are very, very uncomfortable with that kind of talk. The more business-school training they have, the less they like the sound of it. The feeling is that if we'd just use modern management techniques, it wouldn't have to be this way. Closer to the science end of things, the feeling is that if we'd just apply the right metrics to our work, it wouldn't have to be that way, either. Are both of these mindsets just examples of attribute substitution at work?

In the past, I've said many times that if I had to work from a million compounds that were within rule-of-five cutoffs versus a million that weren't, I'd go for the former every time. And I'm still not ready to ditch that bias, but I'm certainly ready to start running up the Jolly Roger about things like molecular weight. I still think that the clinical failure rate is higher for significantly greasier compounds (both because of PK issues and because of unexpected tox). But molecular weight might not be much of a proxy for the things we care about.
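For readers who haven't had the cutoffs drummed into them, the rule-of-five criteria under debate here are easy to state in code. A minimal sketch (the property values are assumed to be precomputed by a cheminformatics toolkit, and the example numbers are hypothetical):

```python
def rule_of_five_violations(mw, clogp, hbd, hba):
    """Count violations of Lipinski's four cutoffs:
    MW > 500, cLogP > 5, H-bond donors > 5, H-bond acceptors > 10."""
    return sum([mw > 500, clogp > 5, hbd > 5, hba > 10])

def flagged(mw, clogp, hbd, hba):
    """Lipinski's original formulation flags compounds with two or more
    violations as more likely to show poor absorption or permeation."""
    return rule_of_five_violations(mw, clogp, hbd, hba) >= 2

# Hypothetical property sets: a drug-like compound and a large, greasy one.
print(flagged(mw=350, clogp=2.5, hbd=2, hba=5))   # False
print(flagged(mw=650, clogp=6.2, hbd=4, hba=12))  # True
```

Note how crude this is as a probability estimate: each cutoff is a hard threshold, which is exactly the sort of simple, specific, plausible number that makes for a powerful anchor.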

This post is long enough already, so I'll address Shultz's latest thoughts on ligand efficiency in another entry. For those who want more 50,000-foot viewpoints on these issues, though, these older posts will have plenty.

Comments (44) + TrackBacks (0) | Category: Drug Development | Drug Industry History

November 19, 2013

Phenotypic Screening Everywhere

Email This Entry

Posted by Derek

The Journal of Biomolecular Screening has a new issue devoted to phenotypic and functional screening approaches, and there looks to be some interesting material in there. The next issue will be Part II (they got so many manuscripts that the intended single issue ran over), and it all seems to have been triggered by the 2011 article in Nature Reviews Drug Discovery that I blogged about here. The Society for Laboratory Automation and Screening set up a special interest group for phenotypic drug discovery after that paper came out, and according to the lead editorial in this new issue, it quickly grew to become the largest SIG and one of the most active.
The reason for this might well be contained in the graphic shown, which is based on data from Bernard Munos. I'm hoping that those historical research spending numbers have been adjusted for inflation, but I believe that they have (since they were in Munos's original paper).

There's an update to the original Swinney and Anthony NRDD paper in this issue, too, and I'll highlight that in another post.

Comments (29) + TrackBacks (0) | Category: Drug Assays | Drug Industry History

November 12, 2013

Leaving Antibiotics: An Interview

Email This Entry

Posted by Derek

Here's the (edited) transcript of an interview that Pfizer's VP of clinical research, Charles Knirsch, gave to PBS's Frontline program. The subject was the rise of resistant bacteria - which is a therapeutic area that Pfizer is no longer active in.

And that's the subject of the interview, or one of its main subjects. I get the impression that the interviewer would very much like to tell a story about how big companies walked away to let people die because they couldn't make enough money off of them:

. . .If you look at the course of a therapeutic to treat pneumonia, OK, … we make something, a macrolide, that does that. It’s now generic, and probably the whole course of therapy could cost $30 or $35. Even when it was a branded antibiotic, it may have been a little bit more than that.

So to cure pneumonia, which in some patient populations, particularly the elderly, has a high mortality, that’s what people are willing to pay for a therapeutic. I think that there are differences across different therapeutic areas, but for some reason, with antibacterials in particular, I think that society doesn’t realize the true value.

And did it become incumbent upon you at some point to make choices about which things would be in your portfolio based on this?

Based on our scientific capabilities and the prudent allocation of capital, we do make these choices across the whole portfolio, not just with antibacterials.

But talk to me about the decision that went into antibacterials. Pfizer made a decision in 2011 and announced the decision. Obviously you were making choices among priorities. You had to answer to your shareholders, as you’ve explained, and you shifted. What went into that decision?

I think that clearly our vaccine platforms are state of the art. Our leadership of the vaccine group are some of the best people in the industry or even across the industry or anywhere really. We believe that we have a higher degree of success in those candidates and programs that we are currently prosecuting.

So it’s a portfolio management decision, and if our vaccine for Clostridium difficile —

A bacteria.

Yeah, a bacteria which is a major cause of both morbidity and mortality of patients in hospitals, the type of thing that I would have been consulted on as an infectious disease physician, that in fact we will prevent that, and we’ll have a huge impact on human health in the hospitals.

But did that mean that you had to close down the antibiotic thing to focus on vaccines? Why couldn’t you do both?

Oh, good question. And it’s not a matter of closing down antibiotics. We were having limited success. We had had antibiotics that we would get pretty far along, and a toxicity would emerge either before we even went into human testing or actually in human testing that would lead to discontinuation of those programs. . .

It's that last part that I think is insufficiently appreciated. Several large companies have left the antibiotic field over the years, but several stayed (GlaxoSmithKline and AstraZeneca come to mind). But the ones who stayed were not exactly rewarded for their efforts. Antibacterial drug discovery, even if you pour a lot of money and effort into it, is very painful. And if you're hoping to introduce a new mechanism of action into the field, good luck. It's not impossible, but if it were easy to do, more small companies would have rushed in to do it.

Knirsch doesn't have an enviable task here, because the interviewer pushes him pretty hard. Falling back on the phrase "portfolio management decisions" doesn't help much, though:

In our discussion today, I get the sense that you have to make some very ruthless decisions about where to put the company’s capital, about where to invest, about where to put your emphasis. And there are whole areas where you don’t invest, and I guess the question we’re asking is, do you learn lessons about that? When you pulled out of Gram-negative research like that and shifted to vaccines, do you look back on that and say, “We learned something about this”?

These are not ruthless decisions. These are portfolio decisions about how we can serve medical need in the best way. …We want to stay in the business of providing new therapeutics for the future. Our investors require that of us, I think society wants a Pfizer to be doing what we do in 20 years. We make portfolio management decisions.

But you didn’t stay in this field, right? In Gram negatives you didn’t really stay in that field. You told me you shifted to a new approach.

We were not having scientific success, there was no clear regulatory pathway forward, and the return on any innovation did not appear to be something that would support that program going forward.

Introducing the word "ruthless" was a foul, and I'm glad the whistle was blown. I might have been tempted to ask the interviewer what it meant, ruthless, and see where that discussion went. But someone who gives in to temptations like that probably won't make VP at Pfizer.

Comments (51) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Infectious Diseases

November 11, 2013

The Past Twenty Years of Drug Development, Via the Literature

Email This Entry

Posted by Derek

Here's a new paper in PLOS ONE on drug development over the past 20 years. The authors are using a large database of patents and open literature publications, and trying to draw connections between those two, and between individual drug targets and the number of compounds that have been disclosed against them. Their explanation of patents and publications is a good one:

. . .We have been unable to find any formal description of the information flow between these two document types but it can be briefly described as follows. Drug discovery project teams typically apply for patents to claim and protect the chemical space around their lead series from which clinical development candidates may be chosen. This sets the minimum time between the generation of data and its disclosure to 18 months. In practice, this is usually extended, not only by the time necessary for collating the data and drafting the application but also where strategic choices may be made to file later in the development cycle to maximise the patent term. It is also common to file separate applications for each distinct chemical series the team is progressing.

While some drug discovery operations may eschew non-patent disclosure entirely, it is nevertheless common practice (and has business advantages) for project teams to submit papers to journals that include some of the same structures and data from their patents. While the criteria for inventorship are different than for authorship, there are typically team members in-common between the two types of attribution. Journal publications may or may not identify the lead compound by linking the structure to a code name, depending on how far this may have progressed as a clinical candidate.

The time lag can vary between submitting manuscripts immediately after filing, waiting until the application has published, deferring publication until a project has been discontinued, or the code name may never be publically resolvable to a structure. A recent comparison showed that 6% of compound structures exemplified in patents were also published in journal articles. While the patterns described above will be typical for pharmaceutical and biotechnology companies, the situation in the academic sector differs in a number of respects. Universities and research institutions are publishing increasing numbers of patents for bioactive compounds but their embargo times for publication and/or upload of screening results to open repositories, such as PubChem BioAssay, are generally shorter.

There are also a couple of important factors to keep in mind during the rest of the analysis. The authors point out that their database includes a substantial number of "compounds" which are not small, drug-like molecules (these are antibodies, proteins, large natural products, and so on). (In total, from 1991 to 2010 they have about one million compounds from journal articles and nearly three million from patents). And on the "target" side of the database, there are a significant number of counterscreens included which are not drug targets as such, so it might be better to call the whole thing a compound-to-protein mapping exercise. That said, what did they find?
Here's the chart of compounds/target, by year. The peak and decline around 2005 is quite noticeable, and is corroborated by a search through the PCT patent database, which shows a plateau in pharmaceutical patents around this time (which has continued until now, by the way).

Looking at the target side of things, with those warnings above kept in mind, shows a different picture. The journal-publication side of things really has shown an increase over the last ten years, with an apparent inflection point in the early 2000s. What happened? I'd be very surprised if the answer didn't turn out to be genomics. If you want to see the most proximal effect of the human genomics frenzy from around that time, there you have it in the way that curve bends around 2001. Year-on-year, though (see the full paper for that chart), the targets mentioned in journal publications seem to have peaked in 2008 or so, and have either plateaued or actually started to come back down since then. (Update: fixed the second chart, which had been a duplicate of the first.)
The authors go on to track a number of individual targets by their mentions in patents and journals, and you can certainly see a lot of rise-and-fall stories over the last 20 years. Those actual years should not be over-interpreted, though, because of the delays (mentioned above) in patenting, and the even longer delays, in some cases, for journal publication from inside pharma organizations.

So what's going on with the apparent decline in output? The authors have some ideas, as do (I'm sure) readers of this site. Some of those ideas probably overlap pretty well:

While consideration of all possible causative factors is outside the scope of this work it could be speculated that the dominant causal effect on global output is mergers and acquisition activity (M&A) among pharmaceutical companies. The consequences of this include target portfolio consolidations and the combining of screening collections. This also reduces the number of large units competing in the production of medicinal chemistry IP. A second related factor is fewer scientists engaged in generating output. Support for the former is provided by the deduction that NME output is directly related to the number of companies and for the latter, a report that US pharmaceutical companies are estimated to have lost 300,000 jobs since 2000. There are other plausible contributory factors where finding corroborative data is difficult but nonetheless deserve comment. Firstly, patent filing and maintenance costs will have risen at approximately the same rate as compound numbers. Therefore part of the decrease could simply be due to companies, quasi-synchronously, reducing their applications to control costs. While this happened for novel sequence filings over the period of 1995–2000, we are neither aware of any data source against which this hypothesis could be explicitly tested for chemical patenting nor of any reports that might support it. Similarly, it is difficult to test the hypothesis of resource switching from “R” to “D” as a response to declining NCE approvals. Our data certainly infer the shrinking of “R” but there are no obvious metrics delineating a concomitant expansion of “D”. A third possible factor, a shift in the small-molecule:biologicals ratio in favour of the latter is supported by declared development portfolio changes in recent years but, here again, proving a causative coupling is difficult.

Causality is a real problem in big retrospectives like this. The authors, as you see, are appropriately cautious. (They also mention, as a good example, that a decline in compounds aimed at a particular target can be a signal of both success and of failure). But I'm glad that they've made the effort here. It looks like they're now analyzing the characteristics of the reported compounds with time and by target, and I look forward to seeing the results of that work.

Update: here's a lead author of the paper with more in a blog post.

Comments (22) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Patents and IP | The Scientific Literature

October 31, 2013

Merck's Aftermath

Email This Entry

Posted by Derek

So the picture that's emerging of Merck's drug discovery business after this round of cuts is confused, but some general trends seem to be present. West Point appears to have been very severely affected, with a large number of chemists shown the door, and reports tend to agree that bench chemists were disproportionately hit. The remaining department would seem to be top-heavy with managers.

Top-heavy, that is, unless the idea is that they're all going to be telling cheaper folks overseas what to make. So is Merck going over to the Pfizer-style model? I regard this as unproven on this scale. In fact, I have an even lower opinion of it than that, but I'm sure that my distaste for the idea is affecting my perceptions, so I have to adjust accordingly. (Not everything you dislike is incorrect, just as not every person who's annoying is wrong).

But it's worth realizing that this is a very old idea. It's Taylorism, after Frederick Taylor, whose thinking was very influential in business circles about 100 years ago. (That Wikipedia article is written in a rather opinionated style, which the site has flagged, but it's a very interesting read and I recommend it). One of Taylor's themes was division of labor between the people thinking about the job and the people doing it, and a clearer statement of what Pfizer (and now Merck) are trying to do is hard to come by.

The problem is, we are not engaged in the kind of work that Taylorism and its descendants have been most successfully applied to. That, of course, is assembly line work, or any work flow that consists of defined, optimizable processes. R&D has proven. . .resistant to such thinking, to put it mildly. It's easy to convince yourself that drug discovery consists of and should be broken up into discrete assembly-line units, but somehow the cranks don't turn very smoothly when such systems are built. Bits and pieces of the process can be smoothed out and improved, but the whole thing still seems tangled, somehow.

In fact, if I can use an analogy from the post I put up earlier this morning, it reminds me of the onset of turbulence from a regime of laminar flow. If you model the kinds of work being done in some sort of hand-waving complexity space, up to a point, things run smoothly and go where they're supposed to. But as you start to add in key steps where the driving forces, the real engines of progress, are things that have to be invented afresh each time and are not well understood to start with, then you enter turbulence. The workflow becomes messy and unpredictable. If your Reynolds numbers are too high, no amount of polish and smoothing will stop you from seeing turbulent flow. If your industrial output depends too much on serendipity, on empiricism, and on mechanisms that are poorly understood, then no amount of managerial smoothing will make things predictable.
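
For anyone who wants the analogy made concrete, here's a minimal sketch of the underlying fluid mechanics. The fluid values and the roughly-2300 pipe-flow threshold are standard textbook numbers, not anything from this post; the mapping onto R&D workflows is, of course, strictly metaphorical.

```python
# The Reynolds number Re = rho * v * L / mu predicts when smooth
# (laminar) flow breaks down into turbulence. Past a critical value,
# no amount of polishing the pipe keeps the flow orderly.

def reynolds(density, velocity, length, viscosity):
    """Dimensionless Reynolds number for a flow (SI units)."""
    return density * velocity * length / viscosity

# Water in a 5 cm pipe: kg/m^3, m/s, m, Pa*s
slow = reynolds(1000.0, 0.01, 0.05, 1.0e-3)   # gentle flow
fast = reynolds(1000.0, 1.0, 0.05, 1.0e-3)    # brisk flow

for re in (slow, fast):
    regime = "laminar" if re < 2300 else "turbulent"
    print(f"Re = {re:,.0f} -> {regime}")
```

The point of the analogy: the transition isn't gradual, and it isn't fixed by better management of the smooth regions.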

This, I think, is my biggest problem with the "Outsource the grunt work and leave the planning to the higher-ups" idea. It assumes that things work more smoothly than they really do in this business. I'm also reminded a bit of the Chilean "Project Cybersyn", which was to be a sort of control room where wise planners could direct the entire country's economy. One of the smaller reasons to regret the 1973 coup against Allende is that the chance was missed to watch this system bang up against reality. And I wonder what will happen as this latest drug discovery scheme runs into it, too.

Update: a Merck employee says in the comments that there hasn't been talk of more outsourcing. If that proves to be the case, then just apply the above comments to Pfizer.

Comments (98) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Life in the Drug Labs

October 22, 2013

Size Doesn't Matter. Does Anything?

Email This Entry

Posted by Derek

There's a new paper in Nature Reviews Drug Discovery that tries to find out what factors about a company influence its research productivity. This is a worthy goal, but one that's absolutely mined with problems in gathering and interpreting the data. The biggest one is the high failure rate that afflicts everyone in the clinic: you could have a company that generates a lot of solid ideas, turns out good molecules, gets them into humans with alacrity, and still ends up looking like a failure because of mechanistic problems or unexpected toxicity. You can shorten those odds, for sure (or lengthen them!), but you can never really get away from that problem, or not yet.

The authors have a good data set to work from, though:

It is commonly thought that small companies have higher research and development (R&D) productivity compared with larger companies because they are less bureaucratic and more entrepreneurial. Indeed, some analysts have even proposed that large companies exit research altogether. The problem with this argument is that it has little empirical foundation. Several high-quality analyses comparing the track record of smaller biotechnology companies with established pharmaceutical companies have concluded that company size is not an indicator of success in terms of R&D productivity1, 2.

In the analysis presented here, we at The Boston Consulting Group examined 842 molecules over the past decade from 419 companies, and again found no correlation between company size and the likelihood of R&D success. But if size does not matter, what does?

Those 842 molecules cover the period 2002-2011, and of them, 205 made it to regulatory approval. (Side note: does this mean that the historical 90% failure rate no longer applies? Update: turns out that's the number of compounds that made it through Phase I, which sounds more like it). There were plenty of factors that seemed to have no discernible influence on success - company size, as mentioned, public versus private financing, most therapeutic area choices, market size for the proposed drug or indication, location in the US, Europe, or Asia, and so on. In all these cases, the size of the error bars leaves one unable to reject the null hypothesis (variation due to chance alone).
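
To see why those error bars swallow everything, here's a quick back-of-the-envelope sketch. Only the 842-molecule and 205-approval totals come from the paper; the subgroup numbers below are invented for illustration.

```python
# With an overall approval rate near 205/842 ~ 24%, even a subgroup
# that *looks* better (say, 12 approvals out of 40 molecules, or 30%)
# usually has a confidence interval wide enough to cover the baseline -
# so you can't reject the null hypothesis of chance variation.
import math

def approx_95ci(successes, n):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / n
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

baseline = 205 / 842                 # overall approval rate, ~0.243
lo, hi = approx_95ci(12, 40)         # hypothetical subgroup at 30%

print(f"baseline {baseline:.3f}, subgroup 95% CI [{lo:.3f}, {hi:.3f}]")
print("distinguishable from baseline?", not (lo <= baseline <= hi))
```

A 40-molecule subgroup would need to be dramatically better (or worse) than average before its interval cleared the overall rate, which is exactly the problem the authors face with most of their candidate factors.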

What factors do look like more than chance? The far ends of the therapeutic area choice, for one (CNS versus infectious disease, and these two only). But all the other indicators are a bit fuzzier. Publications (and patents) per R&D dollar spent are a positive sign, as is the experience (time-in-office) of the R&D heads. A higher termination rate in preclinical and Phase I correlated with eventual success, although I wonder if that's also a partial proxy for desperation, companies with no other option but to push on and hope for the best (see below for more on this point). A bit weirdly, frequent mention of ROI and the phrase "decision making" actually correlated positively, too.

The authors interpret most or all of these as proxy measurements of "scientific acumen and good judgement", which is a bit problematic. It's very easy to fall into circular reasoning that way - you can tell that the companies that succeeded had good judgement, because their drugs succeeded, because of their good judgement. But I can see the point, which is what most of us already knew: that experience and intelligence are necessary in this business, but not quite sufficient. And they have some good points to make about something that would probably help:

A major obstacle that we see to achieving greater R&D productivity is the likelihood that many low-viability compounds are knowingly being progressed to advanced phases of development. We estimate that 90% of industry R&D expenditures now go into molecules that never reach the market. In this context, making the right decision on what to progress to late-stage clinical trials is paramount in driving productivity. Indeed, researchers from Pfizer recently published a powerful analysis showing that two-thirds of the company's Phase I assets that were progressed could have been predicted to be likely failures on the basis of available data3. We have seen similar data privately as part of our work with many other companies.

Why are so many such molecules being advanced across the industry? Here, a behavioural perspective could provide insight. There is a strong bias in most R&D organizations to engage in what we call 'progression-seeking' behaviour. Although it is common knowledge that most R&D projects will fail, when we talk to R&D teams in industry, most state that their asset is going to be one of the successes. Positive data tends to go unquestioned, whereas negative data is parsed, re-analysed, and, in many cases, explained away. Anecdotes of successful molecules saved from oblivion often feed this dynamic. Moreover, because it is uncertain which assets will fail, the temptation is to continue working on them. This reaction is not surprising when one considers that personal success for team members is often tied closely to project progression: it can affect job security, influence within the organization and the ability to pursue one's passion. In this organizational context, progression-seeking behaviour is entirely rational.

Indeed it is. The sunk-cost fallacy should also be added in there, the "We've come so far, we can't quit now" thinking that has (in retrospect) led so many people into the tar pit. But they're right, many places end up being built to check the boxes and make the targets, not necessarily to get drugs out the door. If your organization's incentives are misaligned, the result is similar to trying to drive a nail by hitting it from an angle instead of straight on: all that force, being used to mess things up.
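
A tiny expected-value sketch makes the sunk-cost point concrete. Every number here is invented for illustration (costs in millions of dollars); nothing comes from the paper's data.

```python
# The rational progression decision looks only forward: spending already
# sunk into Phase I should appear nowhere in the calculation. For a
# marginal asset, the forward-looking expected value is often negative -
# which is exactly the kill decision that "progression-seeking"
# behaviour avoids making.

def ev_progress(p_phase3, phase2_cost, phase3_cost, p_success, payoff):
    """Expected value of pushing on from here, ignoring sunk costs."""
    return -phase2_cost + p_phase3 * (-phase3_cost + p_success * payoff)

# Marginal asset: 40% chance of reaching Phase III, then a 10% chance
# of approval, against a hypothetical $1500M payoff.
marginal = ev_progress(p_phase3=0.4, phase2_cost=50, phase3_cost=200,
                       p_success=0.10, payoff=1500)
print(f"EV of progressing: ${marginal:.0f}M")
```

However much was spent getting the asset this far, the answer doesn't change, which is why "we've come so far, we can't quit now" is a fallacy and not an argument.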

Comments (30) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

August 19, 2013

Is The FDA the Problem?

Email This Entry

Posted by Derek

A reader sends along this account of some speakers at last year's investment symposium from Agora Financial. One of the speakers was Juan Enriquez, and I thought that readers here might be interested in his perspective.

First, the facts. According to Enriquez:

Today, it costs 100,000 times less than it once did to create a three-dimensional map of a disease-causing protein

There are about 300 times more of these disease proteins in databases now than in times past

The number of drug-like chemicals per researcher has increased 800 times

The cost to test a drug versus a protein has decreased ten-fold

The technology to conduct these tests has gotten much quicker

Now here’s Enriquez’s simple question:

"Given all these advances, why haven’t we cured cancer yet? Why haven’t we cured Alzheimer’s? Why haven’t we cured Parkinson’s?"

The answer likely lies in the bloated process and downright hostile-to-innovation climate for FDA drug approvals in this day and age...

According to Enriquez, this climate has gotten so bad that major pharmaceuticals companies have begun shifting their primary focus from R&D of new drugs to increased marketing of existing drugs — and mergers and acquisitions.

I have a problem with this point of view, assuming that it's been reported correctly. I'll interpret this as makes-a-good-speech exaggeration, but Enriquez himself has most certainly been around enough to realize that the advances that he speaks of are not, by themselves, enough to lead to a shower of new therapies. That's a theme that has come up on this site several times, as well it might. I continue to think that if you could climb in a time machine and go back to, say, 1980 with these kinds of numbers (genomes sequenced, genes annotated, proteins with solved structures, biochemical pathways identified, etc.), everyone would assume that we'd be further along, medically, than we really are by now. Surely that sort of detailed knowledge would have solved some of the major problems? More specifically, I become more sure every year that drug discovery groups of that era might be especially taken aback at how the new era of target-based molecular-biology-driven drug research has ended up working out: as a much harder proposition than many might have thought.

So it's a little disturbing to see the line taken above. In effect, it's saying that yes, all these advances have been enough to release a flood of new therapies, which means that there must be something holding them back (in this case, apparently, the FDA). The thing is, the FDA probably has slowed things down - in fact, I'd say it almost certainly has. That's part of their job, insofar as the slowdowns are in the cause of safety.

And now we enter the arguing zone. On the one side, you have the reductio ad absurdum argument that yes, we'd have a lot more things figured out if we could just go directly into humans with our drug candidates instead of into mice, so why don't we just? (That's certainly true, as far as it goes. We would surely kill off a fair number of people doing things that way, as the price of progress, but (more) progress there would almost certainly be.) But no one - no one outside of North Korea, anyway - is seriously proposing this style of drug discovery. Someone who agrees with Enriquez's position would regard it as a ridiculous misperception of what they're calling for, designed to make them look stupid and heartless.

But I think that Enriquez's speech, as reported, is the ad absurdum in the other direction. The idea that the FDA is the whole problem is also an oversimplification. In most of these areas, the explosion of knowledge laid out above has not yet led to an explosion of understanding. You'd get the idea that there was this big region of unexplored stuff, and now we've pretty much explored it, so we should really be ready to get things done. But the reality, as I see it, is that there was this big region of unexplored stuff, and we set out to explore it, and found out that it was far bigger than we'd even dreamed. It's easy to get your scale of measurement wrong. It's quite similar to the way that humanity didn't realize just how large the Earth was, then how small it was compared to the solar system (and how off-center), and how non-special our sun was in the immensity of the galaxy, not to mention how many other galaxies there are and how far away they lie. Biology and biochemistry aren't quite on that scale of immensity, but they're plenty big enough.

Now, when I mentioned that we'd surely have killed off more people by doing drug research by the more direct routes, the reply is that we've been killing people off by moving too slowly as well. That's a valid argument. But under the current system, we choose to have people die passively, through mechanisms of disease that are already operating, while under the full-speed-ahead approaches, we might lower that number by instead killing off some others in a more active manner. It's typically human of us to choose the former strategy. The big questions are how many people would die in each category as we moved up and down the range between the two extremes, and what level of each casualty count we'd find "acceptable".

So while it's not crazy to say that we should be less risk-averse, I think it is silly to say that the FDA is the only (or even the main) thing holding us back. That view tends both to stir up unnecessary anger directed at the agency and to raise unfulfillable hopes about what the industry can do in the near term. Neither of those seems useful to me.

Full disclosure - I've met Enriquez, three years ago at SciFoo. I'd be glad to give him a spot to amplify and extend his remarks if he'd like one.

Comments (40) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Regulatory Affairs

August 15, 2013

Big Pharma And Its Research Publications

Email This Entry

Posted by Derek

A longtime reader sent along this article from the journal Technological Forecasting and Social Change, which I'll freely admit never having spent much time with before. It's from a team of European researchers, and it's titled "Big Pharma, little science? A bibliometric perspective on Big Pharma's R&D decline".

What they've done is examine the publication record for fifteen of the largest drug companies from 1995 to 2009. They start off by going into the reasons why this approach has to be done carefully, since publications from industrial labs are produced (and not produced) for a variety of different reasons. But in the end:

Given all these limitations, we conclude that the analysis of publications does not in itself reflect the dynamics of Big Pharma's R&D. However, at the high level of aggregation we conduct this study (based on about 10,000 publications per year in total, with around 150 to 1500 publications per firm annually) it does raise interesting questions on R&D trends and firm strategies which then can be discussed in light of complementary quantitative evidence such as the trends revealed in studies using a variety of other metrics such as patents, as well as statements made by firms in statutory filings and reports to investors.

So what did they find? In the 350 most-represented journals, publications from the big companies made up about 4% of the total content over those years (which comes out to over 10,000 papers). But this number has been dropping slightly but steadily over the period. There are now about 9% fewer publications from Big Pharma than there were at the beginning of the period. But this effect might largely be explained by mergers and acquisitions over the same period - in every case, the new firm seems to publish fewer papers than the old ones did as a whole.

And here are the subject categories where those papers get published. The green nodes are topics such as pharmacology and molecular biology, and the blue ones are organic chemistry, medicinal chemistry, etc. These account for the bulk of the papers, along with clinical medicine.
Pharma%20global%20map%20of%20science.png
The number of authors per publication has been steadily increasing (in fact, even faster than the baseline for the journals as a whole), and the number of organizations per paper has been creeping up as well, also slightly faster than the baseline. The authors interpret this as an increase in collaboration in general, and note that it's even more pronounced in areas where Big Pharma's publication rate has grown from a small starting point, which (plausibly) they assign to bringing in outside expertise.

One striking result the paper picks up on is that the European labs have been in decline from a publication standpoint, but this seems to be mostly due to the UK, Switzerland, and France. Germany has held up better. Anyone who's been watching the industry since 1995 can assign names to the companies who have moved and closed certain research sites, which surely accounts for much of this effect. The influence of the US-based labs is clear:

Although in most of this analysis we adopt a Europe versus USA comparative perspective, a more careful analysis of the data reveals that European pharmaceutical companies are still remarkably national (or bi-national as a result of mergers in the case of AstraZeneca and Sanofi-Aventis). Outside their home countries, European firms have more publications from US-based labs than all their non-domestic European labs (i.e. Europe excluding the ‘home country’ of the firm). Such is the extent of the national base for collaborations that when co-authorships are mapped into organisational networks there are striking similarities to the natural geographic distribution of countries. . .with Big Pharma playing a notable role spanning the bibliometric equivalent of the ‘Atlantic’.

Here's one of the main conclusions from the trends the authors have picked up:

The move away from Open Science (sharing of knowledge through scientific conferences and publications) is compatible and consistent with the increasing importance of Open Innovation (increased sharing of knowledge — but not necessarily in the public domain). More specifically, Big Pharma is not merely retreating from publication activities but in doing so it is likely to substitute more general dissemination of research findings in publications for more exclusive direct sharing of knowledge with collaboration partners. Hence, the reduction in publication activities – next to R&D cuts and lab closures – is indicative of a shift in Big Pharma's knowledge sharing and dissemination strategies.

Putting this view in a broader historical perspective, one can interpret the retreat of Big Pharma from Open Science, as the recognition that science (unlike specific technological capabilities) was never a core competence of pharmaceutical firms and that publication activity required a lot of effort, often without generating the sort of value expected by shareholders. When there are alternative ways to share knowledge with partners, e.g. via Open Innovation agreements, these may be attractive. Indeed an associated benefit of this process may be that Big Pharma can shield itself from scrutiny in the public domain by shifting and distributing risk exposure to public research organisations and small biotech firms.

Whether the retreat from R&D and the focus on system integration are a desirable development depends on the belief in the capacities of Big Pharma to coordinate and integrate these activities for the public good. At this stage, one can only speculate. . .

Comments (14) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Industry History | The Scientific Literature

August 14, 2013

A Regeneron Profile

Email This Entry

Posted by Derek

In the spirit of this article about Regeneron, here's a profile in Forbes of the company's George Yancopoulos and Leonard Schleifer. There are several interesting things in there, such as these lessons from Roy Vagelos (when he became Regeneron's chairman after retiring from Merck):

Lesson one: Stop betting on drugs when you won’t have any clues they work until you finish clinical trials. (That ruled out expanding into neuroscience–and is one of the main reasons other companies are abandoning ailments like Alzheimer’s.) Lesson two: Stop focusing only on the early stages of drug discovery and ignoring the later stages of human testing. It’s not enough to get it perfect in a petri dish. Regeneron became focused on mitigating the two reasons that drugs fail: Either the biology of the targeted disease is not understood or the drug does something that isn’t expected and causes side effects.

They're not the only ones thinking this way, of course, but if you're not, you're likely to run into big (and expensive) trouble.

Comments (14) + TrackBacks (0) | Category: Drug Development | Drug Industry History

August 13, 2013

Druggability: A Philosophical Investigation

Email This Entry

Posted by Derek

I had a very interesting email the other day, and my reply to it started getting so long that I thought I'd just turn it into a blog post. Here's the question:

How long can we expect to keep finding new drugs?

By way of analogy, consider software development. In general, it's pretty hard to think of a computer-based task that you couldn't write a program to do, at least in principle. It may be expensive, or may be unreasonably slow, but physical possibility implies that a program exists to accomplish it.

Engineering is similar. If it's physically possible to do something, I can, in principle, build a machine to do it.

But it doesn't seem obvious that the same holds true for drug development. Something being physically possible (removing plaque from arteries, killing all cancerous cells, etc.) doesn't seem like it would guarantee that a drug will exist to accomplish it. No matter how much we'd like a drug for Alzheimer's, it's possible that there simply isn't one.

Is this accurate? Or is the language of chemistry expressive enough that if you can imagine a chemical solution to something, it (in principle) exists? (I don't really have a hard and fast definition of 'drug' here. Obviously all bets are off if your 'drug' is complicated enough to act like a living thing.)

And if it is accurate, what does that say about the long-term prospects for the drug industry? Is there any risk of "running out" of new drugs? Is drug discovery destined to be a stepping-stone until more advanced medical techniques are available?

That's an interesting philosophical point, and one that had never occurred to me in quite that way. I think that's because programming is much more of a branch of mathematics. If you've got a Universal Turing Machine and enough tape to run through it, then you can, in theory, run any program that ever could be run. And any process that can be broken down into handling ones and zeros can be the subject of a program, so the Church-Turing thesis would say that yes, you can calculate it.

But biochemistry is most definitely a different thing, and this is where a lot of people who come into it from the math/CS/engineering side run into trouble. There's a famous (infamous) essay called "Can A Biologist Fix A Radio" that illustrates the point well. The author actually has some good arguments, and some legitimate complaints about the way biochemistry/molecular biology has been approached. But I think that his thesis breaks down eventually, and I've been thinking on and off for years about just where that happens and how to explain what makes things go haywire. My best guess is algorithmic complexity. It's very hard to reduce the behavior of biochemical systems to mathematical formalism. The whole point of formal notation is to express things in the most compact and information-rich way possible, but trying to compress biochemistry in this manner doesn't give you much of an advantage, at least not in the ways we've tried to do it so far.

To get back to the question at hand, let's get philosophical. I'd say that at the most macro level, there are solutions to all the medical problems. After all, we have the example of people who don't have multiple sclerosis, who don't have malaria, who don't have diabetes or pancreatic cancer or what have you. We know that there are biochemical states where these things do not exist; the problem is then to get an individual patient's state back to that situation. Note that this argument does not apply to things like life extension, limb regeneration, and so on: we don't know if humans are capable of these things or not yet, even if there may be some good arguments to be made in their favor. But we know that there are human brains without Alzheimer's.

To move down a level from this, though, the next question is whether there are ways to put a patient's cells and organs back into a disease-free state. In some cases, I think that the answer has to be, for all practical purposes, "No". I tend to think that the later stages of Alzheimer's (for example) are in fact incurable. Neurons are dead and damaged, what was contained in them and in their arrangement is gone, and any repair system can only go so far. Too much information has been lost and too much entropy has been let in. I would like to be wrong about this, but I don't think I am.

But for less severe states and diseases, you can imagine various interventions - chemical, surgical, genetic - that could restore things. So the question here becomes whether there are drug-like solutions. The answer is tricky. If you look at a biochemical mechanism and can see that there's a particular pathway involving small molecules, then certainly, you can say that there could be a molecule to be found as a treatment, even if we haven't found it yet. But the first part of that last sentence has to be unpacked.

Take diabetes. Type I diabetes is proximately caused by lack of insulin, so the solution is to take insulin. And that works, although it's certainly not a cure, since you have to take insulin for the rest of your life, and it's impossible to take it in a way that perfectly mimics the way your body would administer it, etc. A cure would be to have working beta-cells again that respond just the way they're supposed to, and that's less likely to be achieved through a drug therapy. (Although you could imagine some small molecule that affects a certain class of stem cell, causing it to start the program to differentiate into a fully-formed beta cell, and so on). You'd also want to know why the original population of cells died in the first place, and how to keep that from happening again, which might also take you to some immunological and cell-cycle pathways that could be modulated by drug molecules. But all of these avenues might just as easily take you into genetically modified cloned cell lines and surgical implantation, too, rather than anything involving small-molecule chemistry.

Here's another level of complexity, then: insulin is certainly a drug, but it's not a small molecule of the kind I'd be making. Is there a small molecule that can replace it? You'd do very well with that indeed, but the answer (I think) is "probably not". If you look at the receptor proteins that insulin binds to, the recognition surfaces that are used are probably larger than small molecules can mimic. No one's ever found a small molecule insulin mimetic, and I don't think anyone is likely to. (On the other hand, if you're trying to disrupt a protein-protein interaction, you have more hope, although that's still an extremely difficult target. We can disrupt things a lot more easily than we can make them work). Even if you found a small-molecule-insulin, you'd be faced with the problem of dosing it appropriately, which is no small challenge for a tightly and continuously regulated system like that one. (It's no small challenge for administering insulin itself, either).

And even for mechanisms that do involve small-molecule signaling, like the G-protein coupled receptors, there are still things to worry about. Take schizophrenia. You can definitely see problems with neural systems in the brain when you study that disease, and these neurons respond to, among other things, small-molecule neurotransmitters that the body makes and uses itself - dopamine, serotonin, acetylcholine and others. There are a certain number of receptors for each of those, and although we don't have all the combinations yet, I could imagine, on a philosophical level, that we could eventually have selective drugs that are agonists, antagonists, partial agonists, inverse agonists, what have you at all the subtypes. We have quite a few of them now, for some of the families. And I can even imagine that we could eventually have most or all of the combinations: a molecule that's a dopamine D2 agonist and a muscarinic M4 antagonist, all in one, and so on and so on. That's a lot more of a stretch, to be honest, but I'll stipulate that it's possible.

So you have them all. Now, which ones do you give to help a schizophrenic? We don't know. We have guesses and theories, but most of them are surely wrong. Every biochemical theory about schizophrenia is either wrong or incomplete. We don't know what goes wrong, or why, or how, or what might be done to bend things back in the right direction. It might be that we're in the same area as Alzheimer's: perhaps once a person's brain has developed in such a way that it slips into schizophrenia, there is no way at all to rewire things, in the same way that we can't ungrow a tree in order to change the shape of its canopy. I've no idea, and we're going to know a lot more about the brain by the time we can answer that one.

So one problem with answering this question is that it's bounded not so much by chemistry as by biology. Lots and lots of biology, most of it unknown. But thinking in terms of sheer chemistry is interesting, too. Consider "The Library of Babel", the famous story by Jorge Luis Borges. It takes place in some sort of universe that is no more (and no less) than a vast library containing every possible book that can be produced with a 25-character set of letters and punctuation marks. This is, as a bit of reflection will show, a very, very large number, one large enough to contain everything that can possibly be written down. And all the slight variations. And all the misprints. And all the scrambled coded versions of everything, and so on and so on. (W. v. O. Quine extended this idea to binary coding, which brings you back to computability).

Now think about the universe of drug-like molecules. It is also very large, although it is absolutely insignificant compared to the terrifying Library of Babel. (It's worth noting that the Library contains all of the molecules that can ever exist, coded in SMILES strings - that thought just occurred to me at this very moment, and gives me the shivers). The universe of proteins works that way, too - an alphabet of twenty-odd letters for amino acids gives you the exact same situation as the Library, and if you imagine some hideous notation for coding in all the folding variants and post-translational modifications, all the proteins are written down as well.
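Out of curiosity, the size of the Library can be worked out directly. The book dimensions below come from Borges' story itself (410 pages of 40 lines of 80 characters each), not from this post, so take this as an illustrative sketch:

```python
import math

# Dimensions from Borges' story (an assumption not stated in this post):
# each book has 410 pages x 40 lines x 80 characters, drawn from a
# 25-symbol alphabet of letters and punctuation marks.
CHARS_PER_BOOK = 410 * 40 * 80   # 1,312,000 characters per book
ALPHABET = 25

# The number of distinct books is 25 ** 1,312,000 -- far too large to
# print, so report its number of decimal digits instead.
digits = math.floor(CHARS_PER_BOOK * math.log10(ALPHABET)) + 1
print(f"The Library holds 25^{CHARS_PER_BOOK:,} books, "
      f"a number with {digits:,} decimal digits")
```

For comparison, the number of atoms in the observable universe is usually put at around 10^80 - a number with a mere 81 digits.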

These, then, encompass every chemical compound up to some arbitrary size, and the original question is, is this enough? Are there questions for which none of these words are the answer? That takes you into even colder and deeper philosophical waters. Wittgenstein (among many others) wondered the same thing about our own human languages, and seems to have decided that there are indeed things that cannot be expressed, and that this marks the boundary of philosophy itself. Famously, his Tractatus ends with the line "Wovon man nicht sprechen kann, darüber muss man schweigen": whereof we cannot speak, we must pass over in silence.

We're not at that point in the language of chemistry and pharmacology yet, and it's going to be a long, long time before we ever might be. Just the fact, though, that computability seems like such a more reasonable proposition in computer science than druggability does in biochemistry tells you a great deal about how different the two fields are.

Update: On the subject of computability, I'm not sure how I missed the chance to bring Gödel's Incompleteness Theorem into this, just to make it a complete stewpot of math and philosophy. But the comments to this post point out that even if you can write a program, there is in general no way to decide in advance whether it will ever finish the calculation. This Halting Problem is one of the first things ever to be proved formally undecidable, and the issues it raises are very close to those explored by Gödel. But as I understand it, halting is decidable for a machine with a finite amount of memory, running a deterministic program. The problem is, though, that it still might take longer than the expected lifetime of the universe to "halt", which leaves you, for, uh, practical purposes, in pretty much the same place as before. This is getting pretty far afield from questions of druggability, though. I think.
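The finite-memory point can be made concrete: a deterministic machine with finitely many possible states either halts or eventually revisits a state, and revisiting a state means looping forever. So running it for at most (number of states) steps settles the question. A toy sketch, with hypothetical "machines" modeled as successor functions over a finite state set:

```python
def halts(step, start, num_states):
    """Decide halting for a deterministic machine over a finite state set.

    `step` maps a state to the next state, or to None to halt. The states
    are assumed to stay within a set of size `num_states`, so any run
    longer than that must have revisited a state -- and a deterministic
    machine that revisits a state repeats forever.
    """
    state = start
    for _ in range(num_states + 1):
        if state is None:
            return True          # reached the halt condition
        state = step(state)
    return False                 # some state repeated: infinite loop

# A machine that counts 0, 1, ..., 9 and then halts:
counter = lambda s: s + 1 if s < 9 else None
# A machine that cycles 0 -> 1 -> 0 -> 1 ... forever:
looper = lambda s: 1 - s

print(halts(counter, 0, 10))   # True
print(halts(looper, 0, 10))    # False
```

The catch, as the update says, is that "at most (number of states) steps" for a real machine with gigabytes of memory is a number far beyond the lifetime of the universe.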

Comments (40) + TrackBacks (0) | Category: Drug Development | Drug Industry History | In Silico

August 12, 2013

How Much to Develop a Drug? An Update.

Email This Entry

Posted by Derek

I've referenced this Matthew Herper piece on the cost of drug development several times over the last few years. It's the one where he totaled up pharma company R&D expenditures (from their own financial statements) and then just divided that by the number of drugs produced. Crude, but effective - and what it said was that some companies were spending ridiculous, unsustainable amounts of money for what they were getting back.

Now he's updated his analysis, looking at a much longer list of companies (98 of them!) over the past ten years. Here's the list, in a separate post. Abbott is at the top, but that's misleading, since they spent R&D money on medical devices and the like, whose approvals don't show up in the denominator.

But that's not the case for #2, Sanofi: 6 drugs approved during that time span, at a cost, on their books, of ten billion dollars per drug. Then you have (as some of you will have guessed) AstraZeneca - four drugs at 9.5 billion per. Roche, Pfizer, Wyeth, Lilly, Bayer, Novartis and Takeda round out the top ten, and even by that point we're still looking at six billion a whack. One large company that stands out, though, is Bristol-Myers Squibb, coming in at #22, 3.3 billion per drug. The bottom part of the list is mostly smaller companies, often with one approval in the past ten years, and that one done reasonably cheaply. But three others that stand out as having spent significant amounts of money, while getting something back for it, are Genzyme, Shire, and Regeneron. Genzyme, of course, has now been subsumed in that blazing bonfire of R&D cash known as Sanofi, so that takes care of that.
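Herper's arithmetic is deliberately crude, and easy to reproduce. The decade totals below are back-calculated from the per-drug figures quoted above (approvals times cost per drug), so they're illustrative rather than his exact book numbers:

```python
# (total R&D spend over the decade in $B, drugs approved) -- the totals
# are back-calculated from the per-drug costs quoted above, so approximate:
companies = {
    "Sanofi": (60.0, 6),        # ~ $10B per approved drug
    "AstraZeneca": (38.0, 4),   # ~ $9.5B per approved drug
}

# Herper's method: total R&D expenditure divided by number of approvals.
for name, (rnd_billions, approvals) in companies.items():
    per_drug = rnd_billions / approvals
    print(f"{name}: ${per_drug:.1f}B per approved drug")
```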

Sixty-six of the 98 companies studied launched only one drug this decade. The costs borne by these companies can be taken as a rough estimate of what it takes to develop a single drug. The median cost per drug for these singletons was $350 million. But for companies that approve more drugs, the cost per drug goes up – way up – until it hits $5.5 billion for companies that have brought to market between eight and 13 medicines over a decade.

And he's right on target with the reason why: the one-approval companies on the list were, for the most part, lucky the first time out. They don't have failures on their books yet. But the larger organizations have had plenty of those to go along with the occasional successes. You can look at this situation more than one way - if the single-drug companies are an indicator of what it costs to get one drug discovered and approved, then the median figure is about $350 million. But keep in mind that these smaller companies tend to go after a different subset of potential drugs. They're a bit more likely to pick things with a shorter, more defined clinical path, even if there isn't as big a market at the end, in order to have a better story for their investors.

Looking at what a single successful drug costs, though, isn't a very good way to prepare for running a drug company. Remember, the only small companies on this list are the ones that have succeeded, and many, many more of them spent all their money on their one shot and didn't make it. That's what's reflected in the dollars-per-drug figures for the larger organizations, that and the various penalties for being a huge organization. As Herper says:

Size has a cost. The data support the idea that large companies may spend more per drug than small ones. Companies that spent more than $20 billion in R&D over the decade spent $6.3 billion per new drug, compared to $2.8 billion for those that had budgets of between $5 billion and $10 billion. Some CEOs, notably Christopher Viehbacher at Sanofi, have faced low R&D productivity in part by cutting the budget. This may make sense in light of this data. But it is worth noting that the bigger firms brought twice as many drugs to market. It still could be that the difference between these two groups is due to smaller companies not bearing the full financial weight of the risk of failure.

There are other factors that kick these numbers around a bit. As Herper points out, there's a tax advantage for R&D expenditures, so there's no incentive to under-report them (but there's also an IRS to keep you from going wild over-reporting them, too). And some of the small companies on the list picked up their successes by taking on failed programs from larger outfits, letting them spend a chunk of R&D cash on the drugs beforehand. But overall, the picture is just about as grim as you'd have figured, if not a good deal more so. Our best hope is that this is a snapshot of the past, and not a look into the future. Because we can't go on like this.

Comments (33) + TrackBacks (0) | Category: Drug Development | Drug Industry History

August 7, 2013

Reworking Big Pharma

Email This Entry

Posted by Derek

Bruce Booth (of Atlas Venture Capital) has a provocative post up at Forbes on what he would do if he were the R&D head of a big drug company. He runs up his flag pretty quickly:

I don’t believe that we will cure the Pharma industry of its productivity ills through smarter “operational excellence” approaches. Tweaking the stage gates, subtly changing attrition curves, prioritizing projects more effectively, reinvigorating phenotypic screens, doing more of X and less of Y – these are all fine and good, and important levers, but they don’t hit the key issue – which is the ossified, risk-avoiding, “analysis-paralysis” culture of the modern Pharma R&D organization.

He notes that the big companies have all been experimenting with ways to get more new thinking and innovation into their R&D (alliances with academia, moving people to the magic environs of Cambridge (US or UK), and so on). But he's pretty skeptical about any of this working, because all of this tends to take place out on the edges. And what's in the middle? The big corporate campus, which he says "has become necrotic in many companies". What to do with it? He has several suggestions, but here's a big one. Instead of spending five or ten per cent of the R&D budget on out-there collaborations, why not, he says, go for broke:

Taken further, bringing the periphery right into the core is worth considering. This is just a thought experiment, and certainly difficult to do in practice, but imagine turning a 5000-person R&D campus into a vibrant biotech park. Disaggregate the research portfolio to create a couple dozen therapeutically-focused “biotech” firms, with their own CEOs, responsible for a 3-5 year plan and with a budget that maps to that plan. Each could have its own Board and internal/external advisors, and flexibility to engage free market service providers outside the biotech park. Invite new venture-backed biotechs and CROs to move into the newly rebranded biotech park, incentivized with free lab space, discounted leases, access to subsidized research capabilities, or even unencumbered matching grants. Put some of the new spin-outs from their direct academic initiatives into the mix. But don’t put strings on those new externally-derived companies like the typical Pharma incubator; these will constrain the growth of these new companies. Focus this big initiative on one simple benefit: strategic proximity to a different culture.

His second big recommendation is "Get the rest of the company out of research's way". And by that, he especially means the commercial part of the organization:

One immediate solution would be to kick Commercial input out of decision-making in Research. Or, more practically, at least reduce it dramatically. Let them know that Research will hand them high quality post-PoC Phase 3-ready programs addressing important medical needs. Remove the market research gates and project NPV assessment models from critical decision-making points. Ignore the commercially-defined “in” vs “out” disease states that limit Research teams’ degrees of freedom. Let the science and medicine guide early program identification and progress. . .If you don’t trust the intellect of your Research leaders, then replace them. But second-guessing, micro-managing, and over-analyzing doesn’t aid in the exploration of innovation.

His last suggestion is to shake up the Board of Directors, and whatever Scientific Advisory Board the company has:

Too often Pharma defaults to not engaging the outside because “they know their programs best” or for fear of sharing confidential information that might leak to its competition. Reality is the latter is the least of their worries, and I’ve yet to hear this as being a source of profound competitive intelligence leakage. A far worse outcome is unchallenged “group think” about the merits (or demerits) of a program and its development strategy. Importantly, I’m not talking about specific Key Opinion Leader engagement on projects, as most Pharma companies do this effectively already. I’m referring to a senior, strategic, experienced advisory function from true practitioners in the field to help the R&D leadership team get a fresh perspective.

This is part of the "get some outside thinking" that is the thrust of his whole article. I can certainly see where he's coming from, and I think that this sort of thing might be exactly what some companies need. But what are the odds of (a) their realizing that and (b) anything substantial being done about it? I'm not all that optimistic - and, to be sure, Booth's article also mentions that some of these ideas might well be unworkable in practice.

I think that's because there's another effect that all of Bruce's recommendations have: they decrease the power and influence of upper management. Break up your R&D department, let in outside thinking, get your people to strike out pursuing their own ideas. . .all of those cut into the duties of Senior Executive Vice Presidents of Strategic Portfolio Planning, you know. Those are the sorts of people who will have to sign off on such changes, or who will have a chance to block them or slow their implementation. You'll have to sneak up on them, and there might not be enough time to do that in some of the more critical cases.

Another problem is what the investors would do if you tried some of the more radical ideas. As the last part of the post points out, we have a real problem in this business with our relationship with Wall Street. The sorts of people who want quarter-by-quarter earnings forecasts would absolutely freak if you told them that you were tearing the company up into a pile of biotechs. (And by that, I mean tearing it up for real, not creating centers-of-innovation-excellence or whatever the latest re-org chart might call it). It's hard to think of a good way out of that one, too, for a large public company.

Now, there are people out there who have enough nerve and enough vision to try some things in this line, and once in a while you see it happen. But inertial forces are very strong indeed. With some organizations, it might be less work to just start over, rather than to spend all that effort tearing down the things you want to get rid of. For all I know, this is what (say) AstraZeneca has in mind with its shakeup and moving everyone to Cambridge. But what systems and attitudes are going to be packed up and moved over along with all the boxes of lab equipment?

Comments (39) + TrackBacks (0) | Category: Drug Industry History | Who Discovers and Why

July 19, 2013

Good Advice: Get Lost!

Email This Entry

Posted by Derek

I thought everyone could use something inspirational after the sorts of stories that have been in the news the last few days. Here's a piece at FierceBiotech on Regeneron, a company that's actually doing very well and expanding. And how have they done it?

Regeneron CEO Dr. Leonard "Len" Schleifer, who founded the company in 1988, says he takes pride in the fact that his team is known for doing "zero" acquisitions. All 11 drugs in the company's clinical-stage pipeline stem from in-house discoveries. He prefers a science-first approach to running a biotech company, hiring Yancopoulos to run R&D in 1989, and he endorsed a 2012 pay package for the chief scientist that was more than twice the size of his own compensation last year.

Scientists run Regeneron. Like Yancopoulos, Schleifer is an Ivy League academic scientist turned biotech executive. Regeneron gained early scientific credibility with a 1990 paper in the journal Science on cloning neurotrophin factor, a research area that was part of a partnership with industry giant Amgen. Schleifer has recruited three Nobel Prize-winning scientists to the board of directors, which is led by long-time company Chairman Dr. P. Roy Vagelos, who had a hand in discovering the first statin and delivering a breakthrough treatment for a parasitic cause of blindness to patients in Africa.

"I remember these people from Pfizer used to go around telling us, 'You know, blockbusters aren't discovered, they're made,' as though commercial people made the blockbuster," Schleifer said in an interview. "Well, get lost. Science, science, science--that's what this business is about."

I don't know about you, but that cheers me up. That kind of attitude always does!

Comments (10) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

June 18, 2013

Bernard Munos on The Last Twelve Years of Pharma

Email This Entry

Posted by Derek

Bernard Munos (ex-Lilly, now consulting) is out with a paper reviewing the approved drugs from 2000 to 2012. What's the current state of the industry? Is the upturn in drug approvals over the last two years real, or an artifact? And is it enough to keep things going?

Over that twelve-year span, the average drug approvals ran at 27 per year. Half of all the new drugs were in three therapeutic areas: cancer, infectious disease, and CNS. And as far as mechanisms go, there were about 190 different ones, by Munos' count. The most crowded category was (as might have been guessed) the 17 tyrosine kinase inhibitors, but 85% of the mechanisms were used by only one or two drugs, which is a long tail indeed.

Half those mechanisms were novel - that is, they were not represented by drugs approved before 2000. Coming up behind these first-in-class mechanisms were 29 follow-on drugs during this period, with an average gap of just under three years between the first and second drugs. What that tells you is that the follower programs were started at either about the same time as the first-in-class compounds (and had a slightly longer path through development), or were started at the first opportunity once the other program or mechanism became known. This means that they were started on very nearly the same risk basis as the original program: a three-year gap is not enough to validate much for a new mechanism, other than the fact that another organization thinks that it's worth working on, too. (Don't laugh at that one - there are research departments that seem to live only for this validation, and regard their own first-in-class ideas with fear and suspicion).

Overall, though, Munos says that that fast-follower approach doesn't seem to be very effective, or not any more, given that few targets seem to be yielding more than one or two drugs. And as just mentioned, the narrow gap between first and second drugs also suggests that the risk-lowering effect of this strategy isn't very impressive, either.

Here's another interesting/worrisome point:

The long tail (of the mode-of-action curve). . . suggests that pharmaceutical innovation is a by-product of exploration, and not the result of pursuing a limited set of mechanisms, reflecting, for instance, a company’s marketing priorities. Put differently, there does not seem to be enough mechanisms able to yield multiple drugs, to support an industry. . .The last couple of years have seen an encouraging rise in new drug approvals, including many based on novel modes of action. However that surge has benefited companies unequally, with the top 12 pharmaceutical companies only garnering 25 out of 68 NMEs (37%). This is not enough to secure their future.

Looking at what many (most?) of the big companies are going through right now, it's hard to argue with that point of view. The word "secure" does not appear within any short character length of "future" when you look through the prospects for Lilly, AstraZeneca, and others.

Note also that part about how what a drug R&D operation finds isn't necessarily what it was looking for. That doesn't mesh well with some models of management:

The drug hunter’s freedom to roam, and find innovative translational opportunities wherever they may lie is an essential part of success in drug research. This may help explain the disappointing performance of the programmatic approaches to drug R&D, that have swept much of the industry in the last 15 years. It has important managerial implications because, if innovation cannot be ordained, pharmaceutical companies need an adaptive – not directive – business model.

But if innovation cannot be ordained, why does a company need lots of people in high positions to ordain it, each with his or her own weekly meeting and online presentations database for all the PowerPoint slides? It's a head-scratcher of a problem, isn't it?

Comments (29) + TrackBacks (0) | Category: Drug Development | Drug Industry History

June 14, 2013

One. . .Million. . .Pounds (For a New Antibiotic?)

Email This Entry

Posted by Derek

Via Stuart Cantrill on Twitter, I see that UK Prime Minister David Cameron is prepared to announce a prize for anyone who can "identify and solve the biggest problem of our time". He's leaving that open, and his examples are apparently ". . .the next penicillin, aeroplane or world wide web".

I like the idea of prizes for research and invention. The thing is, the person who invents the next airplane or World Wide Web will probably do pretty well off it through the normal mechanisms. And it's worth thinking about the very, very different pathways these three inventions took, both in their discovery and their development. While thinking about that, keep in mind the difference between those two.

The Wrights' first powered airplane, a huge step in human technology, was good for carrying one person (lying prone) for a few hundred yards in a good wind. Tim Berners-Lee's first Web page, another huge step, was a brief bit of code on one server at CERN, and mostly told people about itself. Penicillin, in its early days, was famously so rare that the urine of the earliest patients was collected and extracted in order not to waste any of the excreted drug. And even that was a long way from Fleming's keen-eyed discovery of the mold's antibacterial activity. A more vivid example than penicillin of the need for huge amounts of development from an early discovery is hard to find.

And how does one assign credit to the winner? Many (most) of these discoveries take a lot of people to realize them - certainly, by the time it's clear that they're great discoveries. Alexander Fleming (very properly) gets a lot of credit for the initial discovery of penicillin, but if the world had depended on him for its supply, it would have been very much out of luck. He had a very hard time getting anything going for nearly ten years after the initial discovery, and not for lack of trying. The phrase "Without Fleming, no Chain; without Chain, no Florey; without Florey, no Heatley; without Heatley, no penicillin" properly assigns credit to a lot of scientists that most people have never heard of.

Those are all points worth thinking about, if you're thinking about Cameron's prize, or if you're David Cameron. But that's not all. Here's the real kicker: he's offering one million pounds for it ($1.56 million as of this morning). This is delusional. The number of great discoveries that can be achieved for that sort of money is, I hate to say, rather small these days. A theoretical result in math or physics might certainly be accomplished in that range, but reducing it to practice is something else entirely. I can speak to the "next penicillin" part of the example, and I can say (without fear of contradiction from anyone who knows the tiniest bit about the subject) that a million pounds could not, under any circumstances, tell you if you had the next penicillin. That's off by a factor of a hundred, if you just want to take something as far as a solid start.

There's another problem with this amount: in general, anything that's worth that much is actually worth a lot more; there's no such thing as a great, world-altering discovery that's worth only a million pounds. I fear that this will be an ornament around the neck of whoever wins it, and little more. If Cameron's committee wants to really offer a prize in line with the worth of such a discovery, they should crank things up to a few hundred million pounds - at least - and see what happens. As it stands, the current idea is like me offering a twenty-dollar bill to anyone who brings me a bar of gold.

Comments (28) + TrackBacks (0) | Category: Current Events | Drug Industry History | Infectious Diseases | Who Discovers and Why

May 28, 2013

Valeant Versus Genentech: Two Different Worlds

Email This Entry

Posted by Derek

Readers may recall the bracing worldview of Valeant CEO Mike Pearson. Here's another dose of it, courtesy of the Globe and Mail. Pearson, when he was brought in from McKinsey, knew just what he wanted to do:

Pearson’s next suggestion was even more daring: Cut research and development spending, the heart of most drug firms, to the bone. “We had a premise that most R&D didn’t give good return to shareholders,” says Pearson. Instead, the company should favour M&A over R&D, buying established treatments that made enough money to matter, but not enough to attract the interest of Big Pharma or generic drug makers. A drug that sold between $10 million and $200 million a year was ideal, and there were a lot of companies working in that range that Valeant could buy, slashing costs with every purchase. As for those promising drugs it had in development, Pearson said, Valeant should strike partnerships with major drug companies that would take them to market, paying Valeant royalties and fees.

It's not a bad strategy for a company that size, and it sure has worked out well for Valeant. But what if everyone tried to do the same thing? Who would actually discover those drugs for inlicensing? That's what David Shaywitz is wondering at Forbes. He contrasts the Valeant approach with what Art Levinson cultivated at Genentech:

While the industry has moved in this direction, it’s generally been slower and less dramatic than some had expected. In part, many companies may harbor unrealistic faith in their internal R&D programs. At the same time, I’ve heard some consultants cynically suggest that to the extent Big Pharma has any good will left, it’s due to its positioning as a science-driven enterprise. If research was slashed as dramatically as at Valeant, the industry’s optics would look even worse. (There’s also the non-trivial concern that if Valeant’s acquisition strategy were widely adopted, who would build the companies everyone intends to acquire?)

The contrasts between Levinson’s research nirvana and Pearson’s consultant nirvana (and scientific dystopia) could hardly be more striking, and frame two very different routes the industry could take. . .

I can't imagine the industry going all one way or all the other. There will always be people who hope that their great new ideas will make them (and their investors) rich. And as I mentioned in that link in the first paragraph, there's been talk for years about bigger companies going "virtual", and just handling the sales and regulatory parts, while licensing in all the rest. I've never been able to quite see that, either, because if one or more big outfits tried it, the cost of such deals would go straight up - wouldn't they? And as they did, the numbers would stop adding up. If everyone knows that you have to make deals or die, well, the price of deals has to increase.

But the case of Valeant is an interesting and disturbing one. Just think over that phrase, ". . .most R&D didn't give good return to the shareholders". You know, it probably hasn't. Some years ago, the Wall Street Journal estimated that the entire biotech industry, taken top to bottom across its history, had yet to show an actual profit. The Genentechs and Amgens were cancelled out, and more, by all the money that had flowed in never to be seen again. I would not be surprised if that were still the case.

So, to steal a line from Oscar Wilde (who was no stranger to that technique), is an R&D-driven startup the triumph of hope over experience? Small startups are the very definition of trying to live off returns of R&D, and most startups fail. The problem is, of course, that any Valeants out there need someone to do the risky research for there to be something for them to buy. An industry full of Mike Pearsons would be a room full of people all staring at each other in mounting perplexity and dismay.

Comments (32) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

May 20, 2013

But Don't Drug Companies Spend More on Marketing?

Email This Entry

Posted by Derek

So drug companies may spend a lot on R&D, but they spend even more on marketing, right? I see the comments are already coming in to that effect on this morning's post on R&D expenditures as a percentage of revenues. Let's take a look at those other numbers, then.

We're talking SG&A, "sales, general, and administrative". That's the accounting category where all advertising, promotion and marketing ends up. Executive salaries go there, too, in case you're wondering. Interestingly, R&D expenses technically go there as well, but companies almost always break that out as a separate subcategory, with the rest as "Other SG&A". What most companies don't do is break out the S part separately: just how much they spend on marketing (and how, and where) is considered more information than they're willing to share with the world, and with their competition.

That means that when you see people talking about how Big Pharma spends X zillion dollars on marketing, you're almost certainly seeing an argument based on the whole SG&A number. Anything past that is a guess - and would turn out to be a lower number than the SG&A, anyway, which has some other stuff rolled into it. Most of the people who talk about Pharma's marketing expenditures are not interested in lower numbers, anyway, from what I can see.

So we'll use SG&A, because that's what we've got. Now, one of the things you find out quickly when you look at such figures is that they vary a lot, from industry to industry, and from company to company inside any given group. This is fertile ground for consultants, who go around telling companies that if they'll just hire them, they can tell them how to get their expenses down to what some of their competition can, which is an appealing prospect.
[Chart: SG&A as a percentage of revenues, by industry]
Here you see an illustration of that, taken from the web site of this consulting firm. Unfortunately, this sample doesn't include the "Pharmaceuticals" category, but "Biotechnology" is there, and you can see that SG&A as a percent of revenues runs from about 20% to about 35%. That's definitely not one of the low SG&A industries (look at the airlines, for example), but there are a lot of other companies, in a lot of other industries, in that same range.

So, what do the SG&A expenditures look like for some big drug companies? By looking at 2012 financials, we find that Merck's are at 27% of revenues, Pfizer is at 33%, AstraZeneca is just over 31%, Bristol-Myers Squibb is at 28%, and Novartis is at 34%, high enough that they're making special efforts to talk about bringing it down. Biogen's SG&A expenditures are 23% of revenues, Vertex's are 29%, Celgene's are 27%, and so on. I think that's a reasonable sample, and it's right in line with that chart's depiction of biotech.

What about other high-tech companies? I spent some time in the earlier post talking about their R&D spending, so here are some SG&A figures. Microsoft spends 25%, Google just under 20%, and IBM spends 21.5%. Amazon's expenditures are about 23%, and have been climbing. But many other tech companies come in lower: Hewlett-Packard's SG&A outlays are 11% of revenues, Intel's are 15%, Broadcom's are 9%, and Apple's are only 6.5%.

Now that's more like it, I can hear some people saying. "Why can't the drug companies get their marketing and administrative costs down? And besides, they spend more on that than they do on research!" If I had a dollar for every time that last phrase pops up, I could take the rest of the year off. So let's get down to what people are really interested in: sales/administrative costs versus R&D. Here comes a list (and note that some of the figures may be slightly off from this morning's post - different financial sites break things down slightly differently):

Merck: SG&A 27%, R&D 17.3%
Pfizer: SG&A 33%, R&D 14.2%
AstraZeneca: SG&A 31.4%, R&D 15.1%
BMS: SG&A 28%, R&D 22%
Biogen: SG&A 23%, R&D 24%
Johnson & Johnson: SG&A 31%, R&D 12.5%

Well, now, isn't that enough? As you go to smaller companies, it looks better (and in fact, the categories flip around), but when you get too small, there aren't any revenues to measure against. But just look at these people - almost all of them are spending more on sales and administration than they are on research, sometimes even a bit more than twice as much! Could any research-based company hold its head up with such figures to show?

Sure they could. Sit back and enjoy these numbers, by comparison:

Hewlett-Packard: SG&A 11%, R&D 2.6%.
IBM: SG&A 21.5%, R&D 5.7%.
Microsoft: SG&A 25%, R&D 13.3%.
3M: SG&A 20.4%, R&D 5.5%
Apple: SG&A 6.5%, R&D 2.2%.
GE: SG&A 25%, R&D 3.2%

Note that these companies, all of whom appear regularly on "Most Innovative" lists, spend anywhere from two to eight times their R&D budgets on sales and administration. I have yet to hear complaints about how this makes all their research into some sort of lie, or about how much more they could be doing if they weren't spending all that money on those non-research activities. You cannot find a drug company with a split between SG&A and research spending like there is for IBM, or GE, or 3M. I've tried. No research-driven drug company could survive if it tried to spend five or six times its R&D on things like sales and administration. It can't be done. So enough, already.
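That two-to-eight-times range is easy to verify from the percentages in the list above. A quick sketch (the numbers are the ones quoted; the code itself is just illustrative arithmetic):

```python
# SG&A and R&D, each as a percent of revenues, from the figures above.
figures = {
    "Hewlett-Packard": (11.0, 2.6),
    "IBM": (21.5, 5.7),
    "Microsoft": (25.0, 13.3),
    "3M": (20.4, 5.5),
    "Apple": (6.5, 2.2),
    "GE": (25.0, 3.2),
}

for company, (sga, rnd) in figures.items():
    # The revenue denominators cancel, so the ratio of the two
    # percentages is the ratio of the actual dollar amounts.
    print(f"{company}: SG&A is {sga / rnd:.1f}x R&D")
```

Microsoft comes in lowest at just under 2x, and GE tops out near 8x, which is where the range in the text comes from.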

Note: the semiconductor companies, which were the only ones I could find with comparable R&D spending percentages to the drug industry, are also outliers in SG&A spending. Even Intel, the big dog of the sector, manages to spend slightly less on that category than it does on R&D, which is quite an accomplishment. The chipmakers really are off on their own planet, financially. But the closest things to them are the biopharma companies, in both departments.

Comments (27) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

How Much Do Drug Companies Spend on R&D, Anyway?

Email This Entry

Posted by Derek

How much does Big Pharma spend on R&D, compared to what it takes in? This topic came up during a discussion here last week, when a recent article at The Atlantic referred to these expenditures as "only" 16 cents on the dollar, and I wanted to return to it.

One good source for such numbers is Booz, the huge consulting outfit, and their annual "Global Innovation 1000" survey. This is meant to be a comparison of companies that are actually trying to discover new products and bring them to market (as opposed to department stores, manufacturers of house-brand cat food, and other businesses whose operations consist of doing pretty much the same thing without much of an R&D budget). Even among these 1000 companies, the average R&D budget, as a per cent of sales, is between 1 and 1.5%, and has stayed in that range for years.

Different industries naturally have different averages. The "chemicals and energy" category in the Booz survey spends between 1 and 3% of its sales on R&D. Aerospace and defense companies tend to spend between 3 and 6 per cent. The big auto makers tend to spend between 3 and 7% of their sales on research, but those sales figures are so large that they still account for a reasonable hunk (16%) of all R&D expenditures. That pie, though, has two very large slices representing electronics/computers/semiconductors and biopharma/medical devices/diagnostics. Those two groups account for half of all the industrial R&D spending in the world.

And there are a lot of variations inside those industries as well. Apple, for example, spends only 2.2% of its sales on R&D, while Samsung and IBM come in around 6%. By comparison with another flagship high-tech sector, the internet-based companies, Amazon spends just over 6% itself, and Google is at a robust 13.6% of its sales. Microsoft is at 13% itself.

The semiconductor companies are where the money really gets plowed back into the labs, though. Here's a roundup of 2011 spending, where you can see a company like Intel, with forty billion dollars of sales, still putting 17% of that back into R&D. And the smaller firms are (as you might expect) doing even more. AMD spends 22% of its sales on R&D, and Broadcom spends 28%. These are people who, like Alice's Red Queen, have to run as fast as they can if they even want to stay in the same place.

Now we come to the drug industry. The first thing to note is that some of its biggest companies already have their spending set at Intel levels or above: Roche is over 19%, Merck is over 17%, and AstraZeneca is over 16%. The others are no slouches, either: Sanofi and GSK are above 14%, and Pfizer (with the biggest R&D spending drop of all the big pharma outfits, I should add) is at 13.5%. They, J&J, and Abbott drag the average down by only spending in the 11-to-14% range - I don't think that there's such a thing as a drug discovery company that spends in the single digits compared to revenue. If any of us tried to get away with Apple's R&D spending levels, we'd be eaten alive.

All this adds up to a lot: if you take the top 20 biggest industrial R&D spenders in the world, eight of them are drug companies. No other industrial sector has that many on the list, and a number of companies just missed making it. Lilly, for one, spent 23% of revenues on R&D, and BMS spent 22%, as did Biogen.

And those are the big companies. As with the chip makers, the smaller outfits have to push harder. Where I work, we spent about 50% of our revenues on R&D last year, and that's projected to go up. I think you'll find similar figures throughout biopharma. So you can see why I find it sort of puzzling that someone can complain about the drug industry as a whole "only" spending 16% of its revenues. Outside of semiconductors, nobody spends more.

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

May 15, 2013

And The Award For Clinical Futility Goes To. . .

Email This Entry

Posted by Derek

I was talking with someone the other day about the most difficult targets and therapeutic areas we knew, and that brought up the question: which of these has had the greatest number of clinical failures? Sepsis was my nomination: I know that there have been several attempts, all of which have been complete washouts. And for mechanisms, defined broadly, I nominate PPAR ligands. The only ones to make it through were the earliest compounds, discovered even before their target had been identified. What other nominations do you have?

Comments (32) + TrackBacks (0) | Category: Clinical Trials | Drug Industry History

April 23, 2013

IBM And The Limits of Transferable Tech Expertise

Email This Entry

Posted by Derek

Here's a fine piece from Matthew Herper over at Forbes on an IBM/Roche collaboration in gene sequencing. IBM had an interesting technology platform in the area, which they modestly called the "DNA transistor". For a while, it was going to be the Next Big Thing in the field (and the material at that last link was apparently written during that period). But sequencing is a very competitive area, with a lot of action in it these days, and, well. . .things haven't worked out.

Today Roche announced that they're pulling out of the collaboration, and Herper has some thoughts about what that tells us. His thoughts on the sequencing business are well worth a look, but I was particularly struck by this one:

Biotech is not tech. You’d think that when a company like IBM moves into a new field in biology, its vast technical expertise and innovativeness would give it an advantage. Sometimes, maybe, it does: with its supercomputer Watson, IBM actually does seem to be developing a technology that could change the way medicine is practiced, someday. But more often than not the opposite is true. Tech companies like IBM, Microsoft, and Google actually have dismal records of moving into medicine. Biology is simply not like semiconductors or software engineering, even when it involves semiconductors or software engineering.

And I'm not sure how much of the Watson business is hype, either, when it comes to biomedicine (a nonzero amount, at any rate). But Herper's point is an important one, and it's one that's been discussed many times on this site as well. This post is a good catch-all for them - it links back to the locus classicus of such thinking, the famous "Can A Biologist Fix a Radio?" article, as well as to more recent forays like Andy Grove (ex-Intel) and his call for drug discovery to be more like chip design. (Here's another post on these points).

One of the big mistakes that people make is in thinking that "technology" is a single category of transferable expertise. That's closely tied to another big (and common) mistake, that of thinking that the progress in computing power and electronics in general is the way that all technological progress works. (That, to me, sums up my problems with Ray Kurzweil). The evolution of microprocessing has indeed been amazing. Every field that can be improved by having more and faster computational power has been touched by it, and will continue to be. But if computation is not your rate-limiting step, then there's a limit to how much work Moore's Law can do for you.

And computational power is not the rate-limiting step in drug discovery or in biomedical research in general. We do not have polynomial-time algorithms for predictive toxicology, or for models of human drug efficacy. We hardly have any algorithms at all. Anyone who feels like remedying this lack (and making a few billion dollars doing so) is welcome to step right up.

Note: it's been pointed out in the comments that cost-per-base of DNA sequencing has been dropping at an even faster than Moore's Law rate. So there is technological innovation going on in the biomedical field, outside of sheer computational power, but I'd still say that understanding is the real rate limiter. . .

Comments (17) + TrackBacks (0) | Category: Analytical Chemistry | Biological News | Drug Industry History

April 3, 2013

AstraZeneca's Move To Hot, Happening Cambridge

Email This Entry

Posted by Derek

If you're looking for a sunny, optimistic take on AstraZeneca's move to Cambridge in the UK, the Telegraph has it for you right here. It's a rousing, bullish take on the whole Cambridge scene, but as John Carroll points out at FierceBiotech, it does leave out a few things about AZ. First, though, the froth:

George Freeman MP. . . the Coalition's adviser on life sciences, and Dr Andy Richards, boss of the Cambridge Angels, who has funded at least 20 of the city's start–ups, are among its champions.

"The big pharmaceutical model is dead, we have to help the big companies reinvent themselves," said Freeman. "Cambridge is leading the way on how do this, on research and innovation."

The pair are convinced that the burgeoning "Silicon Fen" is rapidly becoming the global centre of pharma, biotech, and now IT too. Richards says the worlds of bioscience and IT are "crashing together" and revolutionising companies and consumers. Tapping his mobile phone, he says: "This isn't just a phone, it could hold all sorts of medical information, too, on your agility and reactions. This rapid development is what it's all about."

. . .St John's College set up another park where Autonomy started and more than 50 companies are now based. As we pass, on cue a red Ferrari zooms out. "We didn't see Ferraris when I was a boy," says Freeman. "Just old academics on their bikes."
He adds: "That's the great thing about tech, you can suddenly get it, make it commercial and you've got £200m. You don't have to spend four generations of a German family building Mittelstand."

I don't doubt that Cambridge is doing well. There are a lot of very good people in the area, and some very good ideas and companies. But I do doubt that Cambridge is becoming the global hub of pharma, biotech, and IT all at the same time. And that "crashing together" stuff is the kind of vague rah-rah that politicians and developers can spew out on cue. It sounds very exciting until you start asking for details. And it's not like they haven't heard that sort of thing before in Britain. Doesn't anyone remember the "white heat" of the new technological revolution of the 1960s?

But the future of Cambridge and the future of AstraZeneca may be two different things. Specifically, Pascal Soriot of AZ is quoted in the Telegraph piece as saying that "We've lost some of our scientific confidence," and that the company is hoping to get it back by moving to the area. Let's take a little time to think about that statement, because the closer you look at it, the stranger it is. It assumes that (A) there is such a thing as "scientific confidence", and (B) that it can be said to apply to an entire company, and (C) that a loss of it is what ails AstraZeneca, and (D) that one can retrieve it by moving the whole R&D operation to a hot location.

Now, assumption (A) seems to me to be the most tenable of the bunch. I've written about that very topic here. It seems clear to me that people who make big discoveries have to be willing to take risks, to look like fools if they're wrong, and to plunge ahead through their own doubts and those of others. That takes confidence, sometimes so much that it rubs other people the wrong way.

But do these traits apply to entire organizations? That's assumption (B), and there things get fuzzy. There do seem to be differences in how much risk various drug discovery shops are willing to take on, but changing a company's culture has been the subject of so many, many management books that it's clearly not something that anyone knows how to do well. The situation is complicated by the disconnects between the public statements of higher executives about the spirits and cultures of their companies, versus the evidence on the ground. In fact, the more time the higher-ups spend talking about how incredibly entrepreneurial and focused everyone at the place is, the more you should worry. If everyone's really busy discovering things, you don't have time to wave the pom-poms.

Now to assumption (C), the idea that a lack of such confidence is AstraZeneca's problem. Not being inside the company, I can't speak to that directly, but from outside, it looks like AZ's problem is that they've had too many drugs fail in Phase III and that they've spent way too much money doing it. And it's very hard to say how much of that has been just bad luck, how much of it was self-deception, how much can be put down to compound selection or target selection issues, and so on. Lack of scientific confidence might imply that the company was too cautious in some of these areas, taking too long for things that wouldn't pay off enough. I don't know if that's what Pascal Soriot is trying to imply; I'm not all that sure that he knows, himself.

This brings us to assumption (D), Getting One's Mojo Back through a big move. I have my suspicions about this strategy from the start, since it's the plot of countless chick-lit books and made-for-cable movies. But I'll wave away the fumes of incense and suntan oil, avert my eyes from the jump cuts of the inspirational montage scenes, and move on to asking how this might actually work. You'd think that I might have some idea, since I actually work in Cambridge in the US, where numerous companies are moving in for just these sorts of stated reasons. They're not totally wrong. Areas like the two Cambridges, the San Francisco Bay area, and a few others do have things going for them. My own guess is that a big factor is the mobility and quality of the local workforce, and that the constant switching around between the various companies, academic institutions, and other research sites keeps things moving, intellectually. That's a pretty hand-waving way of putting it, but I don't have a better one.

What could be an even bigger factor is a startup culture, the ability of new ideas to get a hearing and get some funding in the real world. That effect, though, is surely most noticeable in the smaller company space - I'm still not sure how it works out for the branch offices of larger firms that move in to be where things are happening. If I had to guess, I think all these things still help out the larger outfits, but in an attenuated way that is not easy to quantify. And if the culture of the Big Company Mothership is nasty enough to start with, I'm sure it can manage to cancel out whatever beneficial effects might exist.

So I don't know what moving to Cambridge to a big new site is going to do for AstraZeneca. And it's worth remembering that it's going to take several years for any such move to be realized - who knows what will happen between now and then? The whole thing might help, it might hurt, it might make little difference (except in the massive cost and disruption). That disruption might be a feature as much as a bug - if you're trying to shake a place up, you have to shake it up - but I would wonder about anyone who feels confident about how things will actually work out.

Comments (33) + TrackBacks (0) | Category: Drug Industry History | Who Discovers and Why

March 19, 2013

Affymax In Trouble

Email This Entry

Posted by Derek

Affymax has had a long history, and it's rarely been dull. The company was founded in 1988, back in the very earliest flush of the Combichem era, and in its early years it (along with Pharmacopeia) was what people thought of when they thought of that whole approach. Huge compound libraries produced (as much as possible) by robotics, equally huge screening efforts to deal with all those compounds - this stuff is familiar to us now (all too familiar, in many cases), but it was new then. If you weren't around for it, you'll have to take the word of those who were that it could all be rather exciting and scary at first: what if the answer really was to crank out huge piles of amides, sulfonamides, substituted piperazines, aminotriazines, oligopeptides, and all the other "build-that-compound-count-now!" classes? No one could say for sure that it wasn't. Not yet.

Glaxo bought Affymax back in 1995, about the time they were buying Wellcome, which makes it seem like a long time ago, and perhaps it was. At any rate, they kept the combichem/screening technology and spun a new version of Affymax back out in 2001 to a syndicate of investors. For the past twelve years, that Affymax has been in the drug discovery and development business on its own.

And as this page shows, the story through most of those years has been peginesatide (brand name Omontys, although it was known as Hematide for a while as well). This is a synthetic peptide (with some unnatural amino acids in it, and a polyethylene glycol tail) that mimics erythropoietin. What with its cyclic nature (a couple of disulfide bonds), the unnatural residues, and the PEGylation, it's a perfect example of what you often have to do to make an oligopeptide into a drug.

But for quite a while there, no one was sure whether this one was going to be a drug or not. Affymax had partnered with Takeda along the way, and in 2010 the companies announced some disturbing clinical data in kidney patients. While Omontys did seem to help with anemia, it also seemed to have a worse safety profile than Amgen's EPO, the existing competition. The big worry was cardiovascular trouble (which had also been a problem with EPO itself and all the other attempted competition in that field). A period of wrangling ensued, with a lot of work on the clinical data and a lot of back-and-forthing with the FDA. In the end, the drug was actually approved one year ago, albeit with a black-box warning about cardiovascular safety.

But over the last year, about 25,000 patients got the drug, and unfortunately, 19 of them had serious anaphylactic reactions to it within the first half hour of exposure. Three patients died as a result, and some others nearly did. That is also exactly what one worries about with a synthetic peptide derivative: it's close enough to the real protein to do its job, but it's different enough to set off the occasional immune response, and the immune system can be very serious business indeed. Allergic responses had been noted in the clinical trials, but I think that if you'd taken bets last March, people would have picked the cardiovascular effects as the likely nemesis, not anaphylaxis. But that's not how it's worked out.

Takeda and Affymax voluntarily recalled the drug last month. And that looked like it might be all for the company, because this has been their main chance for some years now. Sure enough, the announcement has come that most of the employees are being let go. And it includes this language, which is the financial correlate of Cheyne-Stokes breathing:

The company also announced that it will retain a bank to evaluate strategic alternatives for the organization, including the sale of the company or its assets, or a corporate merger. The company is considering all possible alternatives, including further restructuring activities, wind-down of operations or even bankruptcy proceedings.

I'm sorry to hear it. Drug development is very hard indeed.

Comments (11) + TrackBacks (0) | Category: Business and Markets | Cardiovascular Disease | Drug Development | Drug Industry History | Toxicology

March 14, 2013

Does Baldness Get More Funding Than Malaria?

Email This Entry

Posted by Derek

OK, let's fact-check Bill Gates today, shall we?

Capitalism means that there is much more research into male baldness than there is into diseases such as malaria, which mostly affect poor people, said Bill Gates, speaking at the Royal Academy of Engineering's Global Grand Challenges Summit.

"Our priorities are tilted by marketplace imperatives," he said. "The malaria vaccine in humanist terms is the biggest need. But it gets virtually no funding. But if you are working on male baldness or other things you get an order of magnitude more research funding because of the voice in the marketplace than something like malaria."

Gates' larger point, that tropical diseases are an example of market failure, stands. But I don't think this example does. I have never yet worked on any project in industry that had anything to do with baldness, while I have actually touched on malaria. Looking around the scientific literature, I see many more publications on potential malaria drugs than I see potential baldness drugs (in fact, I'm not sure if I've ever seen anything on the latter, after minoxidil - and its hair-growth effects were discovered by accident during a cardiovascular program). Maybe I'm reading the wrong journals.

But then, Gates also seems to buy into the critical-shortage-of-STEM idea:

With regards to encouraging more students into STEM education, Gates said: "It's kind of surprising that we have such a deficit of people going into those fields. Look at where you can have the most interesting job that pays well and will have impact on society -- all three of those things line up to say science and engineering and yet in most rich countries we see decline. Asia is an exception."

The problem is, there aren't as many of these interesting, well-paying jobs around as there used to be. Any discussion of the STEM education issue that doesn't deal with that angle is (to say the least) incomplete.

Comments (28) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Infectious Diseases

March 4, 2013

Putting the (Hard) Chemistry Back in Med Chem

Email This Entry

Posted by Derek

While I'm on the subject of editorials, Takashi Tsukamoto of Johns Hopkins has one out in ACS Medicinal Chemistry Letters. Part of it is a follow-up to my own trumpet call in the journal last year (check the top of the charts here; the royalties are just flowing in like a river of gold, I can tell you). Tsukamoto is wondering, though, if we aren't exploring chemical space the way that we should:

One of the concerns is that the likelihood of identifying drug-like ligands for a given therapeutic target, the so-called “druggability” of the target, has been defined by these compounds, representing a small section of drug-like chemical space. Are aminergic G protein coupled receptors (GPCRs) actually more druggable than other types of targets? Or are we simply overconcentrating on the area of chemical space which contains compounds likely to hit aminergic GPCRs? Is it impossible to disrupt protein–protein interactions with a small molecule? Or do we keep missing the yet unexplored chemical space for protein–protein interaction modulators because we continue making compounds similar to those already synthesized?

. . .If penicillin-binding proteins are presented as new therapeutic targets (without the knowledge of penicillin) today, we would have a slim chance of discovering β-lactams through our current medicinal chemistry practices. Penicillin-binding proteins would be unanimously considered as undruggable targets. I sometimes wonder how many other potentially significant therapeutic targets have been labeled as undruggable just because the chemical space representing their ligands has never been explored. . .

Good questions. I (and others) have had similar thoughts. And I'm always glad to see people pushing into under-represented chemical space (macrocycles being a good example).

The problem is, chemical space is large, and time (and money) are short. Given the pressures that research has been under, it's no surprise that everyone has been reaching for whatever will generate the most compounds in the shortest time - which trend, Tsukamoto notes, makes the whole med-chem enterprise that much easier to outsource to places with cheaper labor. (After all, if there's not so much skill involved in cranking out amides and palladium couplings, why not?)

My advice in the earlier editorial about giving employers something they can't buy in China and India still holds, but (as Tsukamoto says), maybe one of those things could (or should) be "complicated chemistry that makes unusual structures". Here's a similar perspective from Derek Tan at Sloan-Kettering, also referenced by Tsukamoto. It's an appealing thought, that we can save medicinal chemistry by getting back to medicinal chemistry. It may even be true. Let's hope so.

Comments (25) + TrackBacks (0) | Category: Chemical News | Drug Assays | Drug Industry History

February 21, 2013

The Hard Targets: How Far Along Are We?

Email This Entry

Posted by Derek

I wrote here about whole classes of potential drug targets that we really don't know how to deal with. It's been several years since then, and I don't think that the situation has improved all that much. (In 2011 I reviewed a book that advocated attacking these as a way forward for drug discovery).

Protein-protein interactions are still the biggest of these "undruggable targets", and there has been some progress made there. But I think we still don't have much in the way of general knowledge in this area. Every PPI target is its own beast, and you get your leads where you can, if you can. Transcription factors are the bridge between these and the protein-nucleic acid targets, which have been even harder to get a handle on (accounting for their appearance on lists like this one).

There are several chicken-and-egg questions in these areas. Getting chemical matter seems to be hard (that's something we can all agree on). Is that because we don't have compound collections that are biased the right way? If so, what the heck would the right way look like? Is it because we have trouble coming up with good screening techniques for some of these targets? (And if so, what are we lacking?) How much of the slower progress in these areas has been because of their intrinsic difficulty, and how much has been because people tend to avoid them (because of their, well, intrinsic difficulty?) If we all had our backs to the wall, could we do better, or would we generate just a lot more of the same?

I ask these questions because for years now, a lot of people in the industry have been saying that we need to get more of a handle on these things, because the good ol' small-molecule binding sites are getting scarcer. Am I right to think that we're still at the stage of telling each other this, or are there advances that I haven't kept up with?

Comments (14) + TrackBacks (0) | Category: Drug Assays | Drug Industry History

January 25, 2013

CETP, Alzheimer's, Monty Hall, and Roulette. And Goats.

Email This Entry

Posted by Derek

CETP, now there's a drug target that has incinerated a lot of money over the years. Here's a roundup of compounds I posted on back last summer, with links to their brutal development histories. I wondered here about what's going to happen with this class of compounds: will one ever make it as a drug? If it does, will it just end up telling us that there are yet more complications in human lipid handling that we didn't anticipate?

Well, Merck and Lilly are continuing their hugely expensive, long-running attempts to answer these questions. Here's an interview with Merck's Ken Frazier in which he sounds realistic - that is, nervous:

Merck CEO Ken Frazier, speaking in Davos on the sidelines of the World Economic Forum, said the U.S. drugmaker would continue to press ahead with clinical research on HDL raising, even though the scientific case so far remained inconclusive.

"The Tredaptive failure is another piece of evidence on the side of the scale that says HDL raising hasn't yet been proven," he said.

"I don't think by any means, though, that the question of HDL raising as a positive factor in cardiovascular health has been settled."

Tredaptive, of course, hit the skids just last month. And while its mechanism is not directly relevant to CETP inhibition (I think), it does illustrate how little we know about this area. Merck's anacetrapib is one of the ugliest-looking drug candidates I've ever seen (ten fluorines, three aryl rings, no hydrogen bond donors in sight), and Lilly's compound is only slightly more appealing.

But Merck finds itself having to bet a large part of the company's future in this area. Lilly, for its part, is betting similarly, and most of the rest of their future is being plunked down on Alzheimer's. And these two therapeutic areas have a lot in common: they're both huge markets that require huge clinical trials and rest on tricky fundamental biology. The huge market part makes sense; that's the only way that you could justify the amount of development needed to get a compound through. But the rest of the setup is worth some thought.

Is this what Big Pharma has come to, then? Placing larger and larger bets in hopes of a payoff that will make it all work out? If this were roulette, I'd have no trouble diagnosing someone who was using a Martingale betting system. There are a few differences, although I'm not sure how (or if) they cancel out. For one thing, the Martingale gambler is putting down larger and larger amounts of money in an attempt to win the same small payout (the sum of the initial bet!) Pharma is at least chasing a larger jackpot. But the second difference is that the house advantage at roulette is a fixed 5.26% (at least in the US), which is ruinous, but is at least a known quantity.
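For anyone who hasn't met the Martingale, the mechanics are simple: double the stake after every loss, so that the eventual win recovers everything lost plus the original bet - right up until the bankroll can't cover the next doubling. A minimal simulation, assuming an even-money American roulette bet (18/38 win probability); the function name and parameters are just illustrative:

```python
import random

def martingale_session(bankroll, base_bet=1, p_win=18/38, rounds=200):
    """Double the stake after each loss; a win recovers all prior
    losses plus the base bet and resets the stake. Stop when the
    bankroll can't cover the next (doubled) bet."""
    stake = base_bet
    for _ in range(rounds):
        if stake > bankroll:
            break  # busted: can't cover the doubled bet
        if random.random() < p_win:
            bankroll += stake
            stake = base_bet
        else:
            bankroll -= stake
            stake *= 2
    return bankroll

random.seed(0)
results = [martingale_session(100) for _ in range(10_000)]
print(sum(results) / len(results))  # averages below the starting 100
```

The steady trickle of small wins is more than wiped out by the occasional ruinous losing streak, which is the house edge doing its work.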

But mentioning "known quantities" brings up a third difference. The rules of casino games don't change (unless an Ed Thorp shows up, which was a one-time situation). The odds of drug discovery are subject to continuous change as we acquire more knowledge; it's more like the Monty Hall problem. The question is, have the odds changed enough in CETP (or HDL-raising therapies in general) or Alzheimer's to make this a reasonable wager?
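And for anyone who still doubts the Monty Hall answer itself, it's easy to check by brute simulation; a minimal sketch:

```python
import random

def monty_trial(switch, rng):
    """One round of the Monty Hall game; returns True if the contestant wins."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a losing door that the contestant didn't pick
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(42)
n = 100_000
stay = sum(monty_trial(False, rng) for _ in range(n)) / n
switch = sum(monty_trial(True, rng) for _ in range(n)) / n
# stay comes out near 1/3, switch near 2/3: the opened door
# really did change the odds.
```

The point for drug discovery is the same one the game makes: new information arriving mid-game should change how you bet.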

For the former, well, maybe. There are theories about what went wrong with torcetrapib (a slight raising of blood pressure being foremost, last I heard), and Merck's compound seems to be dodging those. Roche's failure with dalcetrapib is worrisome, though, since the official reason there was sheer lack of efficacy in the clinic. And it's clear that there's a lot about HDL and LDL that we don't understand, both their underlying biology and their effects on human health when they're altered. So (to put things in terms of the Monty Hall problem), a tiny door has been opened a crack, and we may have caught a glimpse of some goat hair. But it could have been a throw rug, or a gorilla; it's hard to say.

What about Alzheimer's? I'm not even sure we've learned as much there as we have with CETP. The immunological therapies have been hard to draw conclusions from, because hey, it's the immune system. Every antibody is different, and can do different things. But the mechanistic implications of what we've seen so far are not that encouraging, unless, of course, you're giving interviews as an executive of Eli Lilly. The small-molecule side of the business is a bit easier to interpret; it's an unrelieved string of failures, one crater after another. We've learned a lot about Alzheimer's therapies, but what we've mostly learned is that nothing we've tried has worked much. In Monty Hall terms, the door has stayed shut (or perhaps has opened every so often to provide a terrifying view of the Void). At any rate, the flow of actionable goat-delivered information has been sparse.

Overall, then, I wonder if we really are at the go-for-the-biggest-markets-and-hope-for-the-best stage of research. The big companies are the ones with enough resources to tackle the big diseases; that's one reason we see them there. But the other reason is that the big diseases are the only things that the big companies think can rescue them.

Comments (4) + TrackBacks (0) | Category: Alzheimer's Disease | Cardiovascular Disease | Clinical Trials | Drug Development | Drug Industry History

January 24, 2013

Daniel Vasella Steps Down at Novartis

Email This Entry

Posted by Derek

So Daniel Vasella, longtime chairman of Novartis, has announced that he's stepping down. (He'll be replaced by Joerg Reinhardt, ex-Bayer, who was at Novartis before that). Vasella's had a long run. People on the discovery side of the business will remember him especially for the decision to base the company's research in Cambridge, which has led to (or at the very least accelerated the process of) many of the other big companies putting up sites there as well. Novartis is one of the most successful large drug companies in the world, avoiding the ferocious patent expiration woes of Lilly and AstraZeneca, and avoiding the gigantic merger disruptions of many others.

That last part, though, is perhaps an accident. Novartis did buy a good-sized stake in Roche at one point, and has apparently made, in vain, several overtures over the years to the holders of Roche's voting shares (many of whom are named "Hoffmann-La Roche" and live in very nice parts of Switzerland). And Vasella did oversee the 1996 merger between Sandoz and Ciba-Geigy that created Novartis itself, and he wasn't averse to big acquisitions per se, as the 2006 deal to buy Chiron shows.

It's those very deals, though, that have some investors cheering his departure. Reading that article, which is written completely from the investment side of the universe, is quite interesting. Try this out:

“He’s associated with what we can safely say are pretty value-destructive acquisitions,” said Eleanor Taylor-Jolidon, who manages about 400 million Swiss francs at Union Bancaire Privee in Geneva, including Novartis shares. “Everybody’s hoping that there’s going to be a restructuring now. I hope there will be a restructuring.” . . .

. . .“The shares certainly reacted to the news,” Markus Manns, who manages a health-care fund that includes Novartis shares at Union Investment in Frankfurt, said in an interview. “People are hoping Novartis will sell the Roche stake or the vaccines unit and use the money for a share buyback.”

Oh yes indeed, that's what we're all hoping for, isn't it? A nice big share buyback? And a huge restructuring, one that will stir the pot from bottom to top and make everyone wonder if they'll have a job or where it might be? Speed the day!

No, don't. All this illustrates the different world views that people bring to this business. The investors are looking to maximize their returns - as they should - but those of us in research see the route to maximum returns as going through the labs. That's what you'd expect from us, of course, but are we wrong? A drug company is supposed to find and develop drugs, and how else are you to do that? The investment community might answer that differently: a public drug company, they'd say, is like any other public company. It is supposed to produce value for its shareholders. If it can do that by producing drugs, then great, everything's going according to plan - but if there are other more reliable ways to produce that value, then the company should (must, in fact) avail itself of them.

And there's the rub. Most methods of making a profit are more reliable than drug discovery. Our returns on invested capital for internal projects are worrisome. Even when things work, it's a very jumpy, jerky business, full of fits and starts, with everything new immediately turning into a ticking bomb of a wasting asset due to patent expiry. Some investors understand this and are willing to put up with it in the hopes of getting in on something big. Other investors just want the returns to be smoother and more predictable, and are impatient for the companies to do something to make that happen. And others just avoid us entirely.

Comments (18) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

December 3, 2012

Marcia Angell's Interview: I Just Can't

Email This Entry

Posted by Derek

I have tried to listen to this podcast with Marcia Angell, on drug companies and their research, but I cannot seem to make it all the way through. I start shouting at the screen, at the speakers, at the air itself. In case you're wondering about whether I'm overreacting, at one point she makes the claim that drug companies don't do much innovation, because most of our R&D budget is spent on clinical trials, and "everyone knows how to do a clinical trial". See what I mean?

Angell has many very strongly held opinions on the drug business. But her take on R&D has always seemed profoundly misguided to me. From what I can see, she thinks that identifying a drug target is the key step, and that everything after that is fairly easy, fairly cheap, and very, very profitable. This is not correct. Really, really, not correct. She (and those who share this worldview, such as her co-author) believe that innovation has fallen off in the industry, but that this has happened mostly by choice. Considering the various disastrously expensive failures the industry has gone through while trying to expand into new diseases, new indications, and new targets, I find this line of argument hard to take.

So, I see, does Alex Tabarrok. I very much enjoyed that post; it does some of the objecting for me, and illustrates why I have such a hard time dealing point-by-point with Angell and her ilk. The misconceptions are large, various, and ever-shifting. Her ideas about drug marketing costs, which Tabarrok especially singles out, are a perfect example (and see some of those other links to my old posts, where I make some similar arguments to his).

So no, I don't think that Angell has changed her opinions much. I sure haven't changed mine.

Comments (59) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices | Why Everyone Loves Us

November 30, 2012

A Broadside Against The Way We Do Things Now

Email This Entry

Posted by Derek

There's a paper out in Drug Discovery Today with the title "Is Poor Research the Cause of Declining Productivity in the Drug Industry?" After reviewing the literature on phenotypic versus target-based drug discovery, the author (Frank Sams-Dodd) asks (and has asked before):

The consensus of these studies is that drug discovery based on the target-based approach is less likely to result in an approved drug compared to projects based on the physiological-based approach. However, from a theoretical and scientific perspective, the target-based approach appears sound, so why is it not more successful?

He makes the points that the target-based approach has the advantages of (1) seeming more rational and scientific to its practitioners, especially in light of the advances in molecular biology over the last 25 years, and (2) seeming more rational and scientific to the investors:

". . .it presents drug discovery as a rational, systematic process, where the researcher is in charge and where it is possible to screen thousands of compounds every week. It gives the image of industrialisation of applied medical research. By contrast, the physiology-based approach is based on the screening of compounds in often rather complex systems with a low throughput and without a specific theory on how the drugs should act. In a commercial enterprise with investors and shareholders demanding a fast return on investment it is natural that the drug discovery efforts will drift towards the target-based approach, because it is so much easier to explain the process to others and because it is possible to make nice diagrams of the large numbers of compounds being screened."

This is the "Brute Force bias". And he goes on to another key observation: that this industrialization (or apparent industrialization) meant that there were a number of processes that could be (in theory) optimized. Anyone who's been close to a business degree knows how dear process optimization is to the heart of many management theorists, consultants, and so on. And there's something to that, if you're talking about a defined process like, say, assembling pickup trucks or packaging cat litter. This is where your six-sigma folks come in, your Pareto analysis, your Continuous Improvement people, and all the others. All these things are predicated on the idea that there is a Process out there.

See if this might sound familiar to anyone:

". . .the drug discovery paradigm used by the pharmaceutical industry changed from a disease-focus to a process-focus, that is, the implementation and organisation of the drug discovery process. This meant that process-arguments became very important, often to the point where they had priority over scientific considerations, and in many companies it became a requirement that projects could conform to this process to be accepted. Therefore, what started as a very sensible approach to drug discovery ended up becoming the requirement that all drug discovery programmes had to conform to this approach – independently of whether or not sufficient information was available to select a good target. This led to dogmatic approaches to drug discovery and a culture developed, where new projects must be presented in a certain manner, that is, the target, mode-of-action, target-validation and screening cascade, and where the clinical manifestation of the disease and the biological basis of the disease at systems-level, that is, the entire organism, were deliberately left out of the process, because of its complexity and variability."

But are we asking too much when we declare that our drugs need to work through single defined targets? Beyond that, are we even asking too much when we declare that we need to understand the details of how they work at all? Many of you will have had such thoughts (and they've been expressed around here as well), but they can tend to sound heretical, especially that second one. But that gets to the real issue, the uncomfortable, foot-shuffling, rather-think-about-something-else question: are we trying to understand things, or are we trying to find drugs?

"False dichotomy!", I can hear people shouting. "We're trying to do both! Understanding how things work is the best way to find drugs!" In the abstract, I agree. But given the amount there is to understand, I think we need to be open to pushing ahead with things that look valuable, even if we're not sure why they do what they do. There were, after all, plenty of drugs discovered in just that fashion. A relentless target-based environment, though, keeps you from finding these things at all.

What it does do, though, is provide vast opportunities for keeping everyone busy. And not just "busy" in the sense of working on trivia, either: working out biological mechanisms is very, very hard, and in no area (despite decades of beavering away) can we say we've reached the end and achieved anything like a complete picture. There are plenty of areas that can and will soak up all the time and effort you can throw at them, and yield precious little in the way of drugs at the end of it. But everyone was working hard, doing good science, and doing what looked like the right thing.

This new paper spends quite a bit of time on the mode-of-action question. It makes the point that understanding the MoA is something that we've imposed on drug discovery, not an intrinsic part of it. I've gotten some funny looks over the years when I've told people that there is no FDA requirement for details of a drug's mechanism. I'm sure it helps, but in the end, it's efficacy and safety that carry the day, and both of those are determined empirically: did the people in the clinical trials get better, or worse?

And as for those times when we do have mode-of-action information, well, here are some fighting words for you:

". . .the ‘evidence’ usually involves schematic drawings and flow-diagrams of receptor complexes involving the target. However, it is almost never understood how changes at the receptor or cellular level affect the physiology of the organism or interfere with the actual disease process. Also, interactions between components at the receptor level are known to be exceedingly complex, but a simple set of diagrams and arrows are often accepted as validation for the target and its role in disease treatment even though the true interactions are never understood. What this in real life boils down to is that we for almost all drug discovery programmes only have minimal insight into the mode-of-action of a drug and the biological basis of a disease, meaning that our choices are essentially pure guess-work."

I might add at this point that the emphasis on defined targets and mode of action has been so much a part of drug discovery in recent times that it's convinced many outside observers that target ID is really all there is to it. Finding and defining the molecular target is seen as the key step in the whole process; everything past that is just some minor engineering (and marketing, naturally). The fact that this point of view is a load of fertilizer has not slowed it down much.

I think that if one were to extract a key section from this whole paper, though, this one would be a good candidate:

". . .it is not the target-based approach itself that is flawed, but that the focus has shifted from disease to process. This has given the target-based approach a dogmatic status such that the steps of the validation process are often conducted in a highly ritualised manner without proper scientific analysis and questioning whether the target-based approach is optimal for the project in question."

That's one of those "Don't take this in the wrong way, but. . ." statements, which are, naturally, always going to be taken in just that wrong way. But how many people can deny that there's something to it? Almost no one denies that there's something not quite right, with plenty of room for improvement.

What Sams-Dodd has in mind for improvement is a shift towards looking at diseases, rather than targets or mechanisms. For many people, that's going to be one of those "Speak English, man!" moments, because for them, finding targets is looking at diseases. But that's not necessarily so. We would have to turn some things on their heads a bit, though:

In recent years there have been considerable advances in the use of automated processes for cell-culture work, automated imaging systems for in vivo models and complex cellular systems, among others, and these developments are making it increasingly possible to combine the process-strengths of the target-based approach with the disease-focus of the physiology-based approach, but again these technologies must be adapted to the research question, not the other way around.

One big question is whether the investors funding our work will put up with such a change, or with such an environment even if we did establish it. And that gets back to the discussion of Andrew Lo's securitization idea, the talk around here about private versus public financing, and many other topics. Those I'll reserve for another post. . .

Comments (30) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History | Who Discovers and Why

November 29, 2012

When Drug Launches Go Bad

Email This Entry

Posted by Derek

For those connoisseurs of things that have gone wrong, here's a list of the worst drug launches of recent years. And there are some rough ones in there, such as Benlysta, Provenge, and (of course) Makena. And from an aesthetic standpoint, it's hard not to think that if you name your drug Krystexxa that you deserve what you get. Read up and try to avoid being part of such a list yourself. . .

Comments (8) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices

Roche Repurposes

Email This Entry

Posted by Derek

Another drug repurposing initiative is underway, this one between Roche and the Broad Institute. The company is providing 300 failed clinical candidates to be run through new assays, in the hopes of finding a use for them.

I hope something falls out of this, because any such compounds will naturally have a substantial edge in further development. They should all have been through toxicity testing, they've had some formulations work done on them, a decent scale-up route has been identified, and so on. And many of these candidates fell out in Phase II, so they've even been through human pharmacokinetic studies.

On the other hand (there's always another hand), you could also say that this is just another set of 300 plausible-looking compounds, and what does a 300-compound screening set get you? The counterargument to this is that these structures have not only been shown to have good absorption and distribution properties (no small thing!), they've also been shown to bind well to at least one target, which means that they may well be capable of binding well to other similar motifs in other active sites. But the counterargument to that is that now you've removed some of those advantages in the paragraph above, because any hits will now come with selectivity worries, since they come with guaranteed activity against something else.

This means that the best case for any repurposed compound is for its original target to be good for something unanticipated. So that Roche collection of compounds might also be thought of as a collection of failed targets, although I doubt if there are a full 300 of those in there. Short of that, every repurposing attempt is going to come with its own issues. It's not that I think these shouldn't be tried - why not, as long as it doesn't cost too much - but things could quickly get more complicated than they might have seemed. And that's a feeling that any drug discovery researcher will recognize like an old, er, friend.

For more on the trickiness of drug repurposing, see John LaMattina here and here. And the points he raises get to the "as long as it doesn't cost too much" line in the last paragraph. There's opportunity cost involved here, too, of course. When the Broad Institute (or Stanford, or the NIH) screens old pharma candidates for new uses, they're doing what a drug company might do itself, and therefore possibly taking away from work that only they could be doing instead. Now, I think that the Broad (for example) already has a large panel of interesting screens set up, so running the Roche compounds through them couldn't hurt, and might not take that much more time or effort. So why not? But trying to push repurposing too far could end up giving us the worst of both worlds. . .

Comments (14) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

November 13, 2012

Nassim Taleb on Scientific Discovery

Email This Entry

Posted by Derek

There's an interesting article posted on Nassim Taleb's web site, titled "Understanding is a Poor Substitute for Convexity (Antifragility)". It was recommended to me by a friend, and I've been reading it over for its thoughts on how we do drug research. (This would appear to be an excerpt from, or summary of, some of the arguments in the new book Antifragile: Things That Gain from Disorder, which is coming out later this month).

Taleb, of course, is the author of The Black Swan and Fooled by Randomness, which (along with his opinions about the recent financial crises) have made him quite famous.

So this latest article is certainly worth reading, although much of it reads like the title, that is, written in fluent and magisterial Talebian. This blog post is being written partly for my own benefit, so that I make sure to go to the trouble of a translation into my own language and style. I've got my idiosyncrasies, for sure, but I can at least understand my own stuff. (And, to be honest, a number of my blog posts are written in that spirit, of explaining things to myself in the process of explaining them to others).

Taleb starts off by comparing two different narratives of scientific discovery: luck versus planning. Any number of works contrast those two. I'd say that the classic examples of each (although Taleb doesn't reference them in this way) are the discovery of penicillin and the Manhattan Project. Not that I agree with either of those categorizations - Alexander Fleming, as it turns out, was an excellent microbiologist, very skilled and observant, and he always checked old culture dishes before throwing them out just to see what might turn up. And, it has to be added, he knew what something interesting might look like when he saw it, a clear example of Pasteur's quote about fortune and the prepared mind. On the other hand, the Manhattan Project was a tremendous feat of applied engineering, rather than scientific discovery per se. The moon landings, often used as a similar example, are exactly the same sort of thing. The underlying principles of nuclear fission had been worked out; the question was how to purify uranium isotopes to the degree needed, and then how to bring a mass of the stuff together quickly and cleanly enough. These processes needed a tremendous amount of work (it wasn't obvious how to do either one, and multiple approaches were tried under pressure of time), but the laws of (say) gaseous diffusion were already known.

But when you look over the history of science, you see many more examples of fortunate discoveries than you see of planned ones. Here's Taleb:

The luck versus knowledge story is as follows. Ironically, we have vastly more evidence for results linked to luck than to those coming from the teleological, outside physics —even after discounting for the sensationalism. In some opaque and nonlinear fields, like medicine or engineering, the teleological exceptions are in the minority, such as a small number of designer drugs. This makes us live in the contradiction that we largely got here to where we are thanks to undirected chance, but we build research programs going forward based on direction and narratives. And, what is worse, we are fully conscious of the inconsistency.

"Opaque and nonlinear" just about sums up a lot of drug discovery and development, let me tell you. But Taleb goes on to say that "trial and error" is a misleading phrase, because it tends to make the two sound equivalent. What's needed is an asymmetry: the errors need to be as painless as possible, compared to the payoffs of the successes. The mathematical equivalent of this property is called convexity; a nonlinear convex function is one with larger gains than losses. (If they're equal, the function is linear). In research, this is what allows us to "harvest randomness", as the article puts it.

An example of such a process is biological evolution: most mutations are harmless and silent. Even the harmful ones will generally just kill off the one organism with the misfortune to bear them. But a successful mutation, one that enhances survival and reproduction, can spread widely. The payoff is much larger than the downside, and the mutations themselves come along for free, since some looseness is built into the replication process. It's a perfect situation for blind tinkering to pay off: the winners take over, and the losers disappear.

Taleb goes on to say that "optionality" is another key part of the process. We're under no obligation to follow up on any particular experiment; we can pick the one that worked best and toss the rest. This has its own complications, since we have our own biases and errors of judgment to contend with, as opposed to the straightforward questions of evolution ("Did you survive? Did you breed?"). But overall, it's an important advantage.

The article then introduces the "convexity bias", which is defined as the difference between a system with equal benefit and harm for trial and error (linear) and one where the upsides are higher (nonlinear). The greater the split between those two, the greater the convexity bias, and the more volatile the environment, the greater the bias as well. This is where Taleb introduces another term, "antifragile", for phenomena that have this convexity bias, because they're equipped to actually gain from disorder and volatility. (His background in financial options is apparent here). What I think of at this point is Maxwell's demon, extracting useful work from randomness by making decisions about which molecules to let through his gate. We scientists are, in this way of thinking, members of the same trade union as Maxwell's busy creature, since we're watching the chaos of experimental trials and natural phenomena and letting pass the results we find useful. (I think Taleb would enjoy that analogy). The demon is, in fact, optionality manifested and running around on two tiny legs.
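A toy numerical sketch of that convexity bias (made-up Gaussian outcomes and an arbitrary loss cap; this shows the shape of the argument, not anything from Taleb's own calculations): cap each trial's downside, leave the upside open, and the average payoff grows as volatility grows, while a plain linear exposure still averages out to roughly zero.

```python
import random
import statistics

def mean_payoff(volatility, cap=1.0, trials=200_000, seed=1):
    """Average payoffs over many trials drawn from a zero-mean Gaussian.
    The convex exposure caps each loss at -cap but leaves gains unbounded."""
    rng = random.Random(seed)
    draws = [rng.gauss(0, volatility) for _ in range(trials)]
    linear = statistics.mean(draws)                        # symmetric: ~0
    convex = statistics.mean(max(d, -cap) for d in draws)  # capped downside
    return linear, convex

_, calm = mean_payoff(0.5)
_, wild = mean_payoff(2.0)
# The linear exposure averages out near zero at any volatility, while the
# capped-downside exposure earns more as the outcomes get wilder: it
# "gains from disorder".
```

This is the whole trick of cheap experiments: if failure costs a fixed, small amount, more chaos in the outcomes is good news rather than bad.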

Meanwhile, a more teleological (that is, aimed and coherent) approach is damaged under these same conditions. Uncertainty and randomness mess up the timelines and complicate the decision trees, and it just gets worse and worse as things go on. It is, by these terms, fragile.

Taleb ends up with seven rules that he suggests can guide decision making under these conditions. I'll add my own comments to these in the context of drug research.

(1) Under some conditions, you'd do better to improve the payoff ratio than to try to increase your knowledge about what you're looking for. One way to do that is to lower the cost-per-experiment, so that a relatively fixed payoff then is larger in comparison. The drug industry has realized this, naturally: our payoffs are (in most cases) somewhat out of our control, although the marketing department tries as hard as possible. But our costs per experiment range from "not cheap" to "potentially catastrophic" as you go from early research to Phase III. Everyone's been trying to bring down the costs of later-stage R&D for just these reasons.

(2) A corollary is that you're better off with as many trials as possible. Research payoffs, as Taleb points out, are very nonlinear indeed, with occasional huge winners accounting for a disproportionate share of the pool. If we can't predict these - and we can't - we need to make our nets as wide as possible. This one, too, is appreciated in the drug business, but it's a constant struggle on some scales. In the wide view, this is why the startup culture here in the US is so important, because it means that a wider variety of ideas are being tried out. And it's also, in my view, why so much M&A activity has been harmful to the intellectual ecosystem of our business - different approaches have been swallowed up, and then they disappear as companies decide, internally, on the winners.

And inside an individual company, portfolio management of this kind is appreciated, but there's a limit to how many projects you can keep going. Spread yourself too thin, and nothing will really have a chance of working. Staying close to that line - enough projects to pick up something, but not so many as to starve them all - is a full-time job.

(3) You need to keep your "optionality" as strong as possible over as long a time as possible - that is, you need to be able to hit a reset button and try something else. Taleb says that plans ". . .need to stay flexible with frequent ways out, and counter to intuition, be very short term, in order to properly capture the long term. Mathematically, five sequential one-year options are vastly more valuable than a single five-year option." I might add, though, that they're usually priced accordingly (and as Taleb himself well knows, looking for those moments when they're not priced quite correctly is another full-time job).
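The five-one-year-options point can be illustrated with a deliberately over-simplified toy model (independent yearly outcomes, no discounting, arbitrary units; this is my sketch, not Taleb's own math): compare cashing in each good year while walking away from each bad one, against a single bet where only the five-year cumulative result counts.

```python
import random
import statistics

rng = random.Random(7)
n = 200_000
sigma = 1.0   # volatility of each year's progress, in arbitrary units

sequential, single = [], []
for _ in range(n):
    steps = [rng.gauss(0, sigma) for _ in range(5)]
    # Five one-year options: keep each good year, abandon each bad one
    sequential.append(sum(max(s, 0) for s in steps))
    # One five-year option: only the cumulative result at the end matters
    single.append(max(sum(steps), 0))

seq_mean = statistics.mean(sequential)
one_mean = statistics.mean(single)
# The yearly-reset strip is worth roughly twice the single long option:
# bad years get dropped instead of being dragged along to the end.
```

That factor-of-two gap (which only widens with more volatility) is the mathematical content of "frequent ways out": the reset button is itself worth money.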

(4) This one is called "Nonnarrative Research", which means the practice of investing with people who have a history of being able to do this sort of thing, regardless of their specific plans. And "this sort of thing" generally means a lot of that third recommendation above, being able to switch plans quickly and opportunistically. The history of many startup companies will show that their eventual success often didn't bear as much relation to their initial business plan as you might think, which means that "sticking to a plan", as a standalone virtue, is overrated.

At any rate, the recommendation here is not to buy into the story just because it's a good story. I might draw the connection here with target-based drug discovery, which is all about good stories.

(5) Theory comes out of practice, rather than practice coming out of theory. Ex post facto histories, Taleb says, often work the story around to something that looks more sensible, but his claim is that in many fields, "tinkering" has led to more breakthroughs than attempts to lay down new theory. His reference is to this book, which I haven't read, but is now on my list.

(6) There's no built-in payoff for complexity (or for making things complex). "In academia," though, he says, "there is". Don't, in other words, be afraid of what look like simple technologies or innovations. They may, in fact, be valuable, but have been ignored because of this bias towards the trickier-looking stuff. What this reminds me of is what Philip Larkin said he learned by reading Thomas Hardy: never be afraid of the obvious.

(7) Don't be afraid of negative results, or paying for them. The whole idea of optionality is finding out what doesn't work, and ideally finding that out in great big swaths, so we can narrow down to where the things that actually work might be hiding. Finding new ways to generate negative results quickly and more cheaply, which can mean new ways to recognize them earlier, is very valuable indeed.

Taleb finishes off by saying that people have criticized such proposals as the equivalent of buying lottery tickets. But lottery tickets, he notes, are terribly overpriced, because people are willing to overpay for a shot at a big payoff on long odds. Lotteries, moreover, have a fixed upper bound, whereas R&D's upper bound is completely unknown. And Taleb gets back to his financial-crisis background by noting that the history of banking and finance demonstrates the folly of betting against long shots ("What are the odds of this strategy suddenly going wrong?"), and that in this sense, research is a form of reverse banking.

Well, those of you out there who've heard the talk I've been giving in various venues (and in slightly different versions) the last few months may recognize that point, because I have a slide that basically says that drug research is the inverse of Wall Street. In finance, you try to lay off risk, hedge against it, amortize it, and go for the steady payoff strategies that (nonetheless) once in a while blow up spectacularly and terribly. Whereas in drug research, risk is the entire point of our business (a fact that makes some of the business-trained people very uncomfortable). We fail most of the time, but once in a while have a spectacular result in a good direction. Wall Street goes short risk; we have to go long.

I've been meaning to get my talk up on YouTube or the like, and this should force me to finally get that done. Perhaps this weekend, or over the Thanksgiving break, I can put it together. I think it fits in well with what Taleb has to say.

Comments (28) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

October 11, 2012

IGFR Therapies Wipe Out. And They're Not Alone.

Email This Entry

Posted by Derek

Here's a look at something that doesn't make many headlines: the apparent failure of an entire class of potential drugs. The insulin-like growth factor 1 receptor (IGF-1R) has been targeted for years now, from a number of different angles. There have been several antibodies tried against it, and companies have also tried small molecule approaches such as inhibiting the associated receptor kinase. (I was on such a project myself a few years back). So far, nothing has worked out.

And as that review shows, this was a very reasonable-sounding idea. Other growth factor receptors have been successful cancer targets (notably EGFR), and there was evidence of IGFR over-expression in several widespread cancer types (and evidence from mouse models that inhibiting it would have the desired effect). The rationale here was as solid as anything we have, but reality has had other ideas:

It is hardly surprising that even some of the field's pioneers are now pessimistic. “In the case of IGF-1R, one can protest that proper studies have not yet been carried out,” writes Renato Baserga, from the department of Cancer Biology, Thomas Jefferson University in Philadelphia. (J. Cell. Physiol., doi:10.1002/jcp.24217). A pioneer in IGF-1 research, Baserga goes on to list some avenues that may still be promising, such as targeting the receptor to prevent metastases in colorectal cancer patients. But in the end, he surmises: “These excuses are poor excuses, [they are] an attempt to reinvigorate a procedure that has failed.” Saltz agrees. “This may be the end of the story,” he says. “At one point, there were more than ten companies developing these drugs; now this may be the last one that gets put on the shelf.”

But, except for articles like these in journals like Nature Biotechnology, or mentions on web sites like this one, no one really hears about this sort of thing. We've talked about this phenomenon before; there's a substantial list of drug targets that looked very promising, got a lot of attention for years, but never delivered any sort of drug at all. Negative results don't make for much of a headline in the popular press, especially when the story develops over a multi-year period.

I think it would be worthwhile for people to hear about this, though. I once talked with someone who was quite anxious about an upcoming plane trip; they were worried on safety grounds. It occurred to me that if there were a small speaker on this person's desk announcing all the flights that had landed safely around the country (or around the world), that a few days of that might actually have an effect. Hundreds, thousands of announcements, over and over: "Flight XXX has landed safely in Omaha. Flight YYY has landed safely in Seoul. Flight ZZZ has landed safely in Amsterdam. . ." Such a speaker system wouldn't shut up for long during any given day, that's for sure, and it would emphasize the sheer volume of successful air travel that takes place each day, over and over.

On the other hand, almost all drug research programs crash, or never even make it off the ground in the first place. In this field, actually getting a plane together, getting it into the air, and guiding it to a landing at the FDA only happens once in a rather long while, which is why there are plenty of people out there in early research who've never worked on anything that's made it to market. A list of all the programs that failed would be instructive, and might get across how difficult finding a drug really is, but no one's going to be able to put one of those together. Companies don't even announce the vast majority of their preclinical failures; they're below everyone else's limit of detection. I can tell you for sure that most of the non-delivering programs I've worked on have never seen daylight of any sort. They just quietly disappeared.

Comments (11) + TrackBacks (0) | Category: Cancer | Drug Development | Drug Industry History

September 28, 2012

Pfizer's New Leaf

Email This Entry

Posted by Derek

Here's a piece by an industry consultant who's interacted with Pfizer a lot over the years. He says that they're really, truly going to change:

But buying companies, partners, and products never added up to a net gain in R&D productivity because the resulting behemoth lacked the key ingredient: integration. Like the industry in general, Pfizer’s acquisitions bought it little else but time. When its enormous R&D engine broke down after failing to produce an adequate pipeline, the company reflexively slashed research spending and staff. But something else happened along the way — a sea change for the company not only in organization but also in philosophy. Like China or the former Soviet Union renouncing past Maoist or Stalinist practices, Pfizer has now declared an end to its legendary imperialism in favor of a new, open and collaborative research model.

Let's just say that, as with many large companies, "open" and "collaborative" have not necessarily been the first words one associates with Pfizer's research strategy. My initial impulse is to discount this stuff as they-have-to-say-that pronouncements from the executive suite. But I'm a cynical person sometimes. If Pfizer really is going to change, the way to convince people (such as their potential collaborators) will be through deeds rather than words. We'll see.

Comments (34) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

August 31, 2012

Eli Lilly's Drumbeat of Bad News

Email This Entry

Posted by Derek

Eli Lilly has been getting shelled with bad news recently. There was the not-that-encouraging-at-all failure of its Alzheimer's antibody solanezumab to meet any of its clinical endpoints. But that's the good news, since (at least according to the company) it showed some signs of something in some patients.

We can't say that about pomaglumetad methionil (LY2140023), their metabotropic glutamate receptor ligand for schizophrenia, which is being halted. The first large trial of the compound failed to meet its endpoint, and an interim analysis showed that the drug was unlikely to have a chance of making its endpoints in the second trial. It will now disappear, as will the money spent on it so far. (The first drug project I ever worked on was a backup for an antipsychotic with a novel mechanism, which also failed to do a damned thing in the clinic, and which experience perhaps gave me some of the ideas I have now about drug research).

This compound is an oral prodrug of LY404039, which has a rather unusual structure. The New York Times did a story about the drug's development a few years ago, which honestly makes rather sad reading in light of the current news. It was once thought to have great promise. Note the cynical statement in that last link about how it really doesn't matter if the compound works or not - but you know what? It did matter in the end. This was the first compound of its type, an attempt at a real innovation through a new mechanism to treat mental illness, just the sort of thing that some people will tell you that the drug industry never gets around to doing.

And just to round things off, Lilly announced the results of a head-to-head trial of its anticoagulant drug Effient versus (now generic) Plavix in acute coronary syndrome. This is the sort of trial that critics of the drug industry keep saying never gets run, by the way. But this one was, because Plavix is the thing to beat in that field - and Effient didn't beat it, although there might have been an edge in long-term followup.

Anticoagulants are a tough field - there are a lot of patients, a lot of money to be made, and a lot of room (in theory) for improvement over the existing agents. But just beating heparin is hard enough, without the additional challenge of beating cheap Plavix. It's a large enough patient population, though, that more than one drug is needed because of different responses.

There have been a lot of critics of Lilly's research strategy over the years, and a lot of shareholders have been (and are) yelling for the CEO's head. But from where I sit, it looks like the company has been taking a lot of good shots. They've had a big push in Alzheimer's, for example. Their gamma-secretase inhibitor, which failed in terrible fashion, was a first of its kind. Someone had to be the first to try this mechanism out; it's been a goal of Alzheimer's research for over twenty years now. Solanezumab was a tougher call, given the difficulties that Elan (and Wyeth/Pfizer, J&J, and so on) have had with that approach over the years. But immunology is a black box, different antibodies do different things in different people, and Lilly's not the only company trying the same thing. And they've been doggedly pursuing beta-secretase as well. These, like them or not, are still some of the best ideas that anyone has for Alzheimer's therapy. And any kind of win in that area would be a huge event - I think that Lilly deserves credit for having the nerve to go after such a tough area, because I can tell you that I've been avoiding it ever since I worked on it in the 1990s.

But what would I have spent the money on instead? It's not like there are any low-risk ideas crowding each other for attention. Lilly's portfolio is not a crazy or stupid one - it's not all wild ideas, but it's not all full of attempts to play it safe, either. It looks like the sort of thing any big (and highly competent) drug research organization could have ended up with. The odds are still very much against any drug making it through the clinic, which means that having three (or four, or five) in a row go bad on you is not an unusual event at all. Just a horribly unprofitable one.

Comments (26) + TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials | Drug Development | Drug Industry History | The Central Nervous System

August 29, 2012

How Did the Big Deals of 2007 Work Out?

Email This Entry

Posted by Derek

Startup biopharma companies: they've gotta raise money, right? And the more money, the better, right? Not so right, according to this post by venture capitalist Bruce Booth. Companies need money, for sure, but above a certain threshold there's no correlation with success, either for the company's research portfolio or its early stage investors. (I might add that the same holds true for larger drug companies as well, for somewhat different reasons. Perhaps Pfizer's strategy over the last twenty years has had one (and maybe only one) net positive effect: it's proven that you cannot humungous your way to success in this business. And yes, since you ask, that's the last time I plan to use "humungous" as a verb for a while).

There's also a fascinating look back at FierceBiotech's 2007 "Top Deals", to see what became of the ten largest financing rounds on the list. Some of them have worked out, and some of them most definitely haven't: four of the ten were near-total losses. One's around break-even, two are "works in progress" but could come through, and three have provided at least 2x returns. (Read his post to attach names to these!) And as Booth shows, that's pretty much what you'd expect from the distribution over the entire biotech industry, including all the wild-eyed stuff and the riskiest small fry. Going with the biggest, most lucratively financed companies bought you, in this case, no extra security at all.

A note about those returns: one of the winners on the list is described as having paid out "modest 2x returns" to the investors. That's the sort of quote that inspires outrage among the clueless, because (of course) a 100% profit is rather above the market returns for the last five years. But the risk/reward ratio has not been repealed. You could have gotten those market returns by doing nothing, just by parking the cash in a couple of index funds and sitting back. Investing in startup companies requires a lot more work, because you're taking on a lot more risk.

It was not clear which of those ten big deals in 2007 would pay out, to put it mildly. In fact, if you take Booth's figures so far, an equal investment in each of the top seven companies on the list in 2007 would leave you looking at a slight net loss to date, and that includes one company that would have paid you back at about 3x to 4x. Number eight was the big winner on the list (5x, if you got out at the perfect peak, and good luck with that), and number 9 is the 2x return (while #10 is ongoing, but a likely loss). As any venture investor knows, you're looking at a significant risk of losing your entire investment whenever you back a startup, so you'd better (a) back more than one and (b) do an awful lot of thinking about which ones those are. This is a job for the deeply pocketed.
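The arithmetic behind that "slight net loss" is worth making explicit. Here's a quick back-of-the-envelope sketch; the per-company multiples below are my own hypothetical stand-ins chosen to match the rough shape described (several wipeouts, a couple of middling outcomes, one 3x-4x winner), not Booth's actual figures:

```python
# Hypothetical return multiples for an equal-weight bet on the top
# seven deals (invented numbers for illustration only).
top_seven = [0.0, 0.0, 0.0, 0.1, 1.0, 1.5, 3.5]

# Equal investment in each: the portfolio multiple is just the average.
portfolio_multiple = sum(top_seven) / len(top_seven)
print(f"{portfolio_multiple:.2f}x")  # comes out a bit under 1x
```

Even with one company paying back 3.5x, the wipeouts drag the whole basket under break-even, which is exactly why venture investors need both a lot of bets and a lot of judgment about which bets to make.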

And when you think about it, a very similar situation obtains inside a given drug company. The big difference is that you don't have the option of not playing the game - something always has to be done. There are always projects going, some of which look more promising than others, some of which will cost more to prosecute than others, and some of which are aimed at different markets than others. You might be in a situation where there are several that look like they could be taken on, but your development organization can't handle so many. What to do? Partner something, park something that can wait (if anything can)? Or you might have the reverse problem, of not enough programs that look like they might work. Do you push the best of a bad lot forward and hope for the best? If not, do you still pay your development people even if they have nothing to develop right now, in the hopes that they soon will?

Which of these clinical programs of yours have the most risk? The biggest potential? Have you balanced those properly? You're sure to lose your entire investment on the majority - the great majority - of them, so choose as wisely as you can. The ones that make it through are going to have to pay for all the others, because if they don't, everyone's out of a job.

This whole process, of accumulating capital and risking it on new ventures, is important enough that we've named an entire economic system for it. It's a high-wire act. Too cautious, and you might not keep up enough to survive. Too risky, and you could lose too much. They do focus one's attention, such prospects, and the thought that other companies are out there trying to get a step on you helps keep you moving, too. It's not a pretty system, but it isn't supposed to be. It's supposed to work.

Comments (1) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

August 27, 2012

Chemistry's Mute Black Swans

Email This Entry

Posted by Derek

What's a Black Swan Event in chemistry? Longtime industrial chemist Bill Nugent has a very interesting article in Angewandte Chemie with that theme, and it's well worth a look. He details several examples of things that all organic chemists thought they knew that turned out not to be so, and traces the counterexamples back to their first appearances in the literature. For example, the idea that gold (and gold complexes) were uninteresting catalysts:

I completed my graduate studies with Prof. Jay Kochi at Indiana University in 1976. Although research for my thesis focused on organomercury chemistry, there was an active program on organogold chemistry, and our perspective was typical for its time. Gold was regarded as a lethargic and overweight version of catalytically interesting copper. Moreover, in the presence of water, gold(I) complexes have a nasty tendency to disproportionate to gold(III) and colloidal gold(0). Gold, it was thought, could provide insight into the workings of copper catalysis but was simply too inert to serve as a useful catalyst itself. Yet, during the decade after I completed my Ph.D. in 1976 there were tantalizing hints in the literature that this was not the case.

[Figure: Nugent's chart of gold-catalysis publications over time]
One of these was a high-temperature rearrangement reported in 1976, and there was a 1983 report on gold-catalyzed oxidation of sulfides to sulfoxides. Neither of these got much attention, as Nugent's own chart of the literature on the subject shows. (I don't pay much attention when someone oxidizes a sulfide, myself). Apparently, though, a few people had reason to know that something was going on:

However, analytical chemists in the gold-mining industry have long harnessed the ability of gold to catalyze the oxidation of certain organic dyes as a means of assaying ore samples. At least one of these reports actually predates the (1983) Natile publication. Significantly, it could be shown that other precious metals do not catalyze the same reactions; the assays are specific for gold. It is safe to say that the synthetic community was not familiar with this report.

I'll bet not. It wasn't until 1998 that a paper appeared that really got people interested, and you can see the effect on that chart. Nugent has a number of other similar examples of chemistry that appeared years before its potential was recognized. Pd-catalyzed C-N bond formation, monodentate asymmetric hydrogenation catalysts, the use of olefin metathesis in organic synthesis, non-aqueous enzyme chemistry, and many others.

So where do the black swans come into all this? Those familiar with Nassim Taleb's book will recognize the reference.

The phrase “Black Swan event” comes from the writings of the statistician and philosopher Nassim Nicholas Taleb. The term derives from a Latin metaphor that for many centuries simply meant something that does not exist. But also implicit in the phrase is the vulnerability of any system of thought to conflicting data. The phrase's underlying logic could be undone by the observation of a single black swan.

In 1697, the Dutch explorer Willem de Vlamingh discovered black swans on the Swan River in Western Australia. Not surprisingly, the phrase underwent a metamorphosis and came to mean a perceived impossibility that might later be disproven. It is in this sense that Taleb employs it. In his view: “What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct an explanation for its occurrence after the fact, making it explainable and predictable.”

Taleb has documented this last point about human nature through historical and psychological evidence. His ideas remain controversial but seem to make a great deal of sense when one attempts to understand the lengthy interludes between the literature antecedents and the disruptive breakthroughs shown. . .At the very least, his ideas represent a heads up as to how we read and mentally process the chemical literature.

I have no doubt that unwarranted assumptions persist in the conventional wisdom of organic synthesis. (Indeed, to believe otherwise would suggest that disruptive breakthroughs will no longer occur in the future.) The goal, it would seem, is to recognize such assumptions for what they are and to minimize the time lag between the appearance of Black Swans and the breakthroughs that follow.

One difference between Nugent's examples and Taleb's is the "extreme impact" part. I think that Taleb has in mind events in the financial industry like the real estate collapse of 2007-2008 (recommended reading here), or the currency events that led to the wipeout of Long-Term Capital Management in 1998. The scientific literature works differently. As this paper shows, big events in organic chemistry don't come on as sudden, unexpected waves that sweep everything before them. Our swans are mute. They slip into the water so quietly that no one notices them for years, and they're often small enough that people mistake them for some other bird entirely. Thus the time lag.

How to shorten that? It'll be hard, because a lot of the dark-colored birds you see in the scientific literature aren't amazing black swans; they're crows and grackles. (And closer inspection shows that some of them are engaged in such unusual swan-like behavior because they're floating inertly on their sides). The sheer size of the literature now is another problem - interesting outliers are carried along in a flood tide of stuff that's not quite so interesting. (This paper mentions that very problem, along with a recommendation to still try to browse the literature - rather than only doing targeted searches - because otherwise you'll never see any oddities at all).

Then there's the way that we deal with such things even when we do encounter them. Nugent's recommendation is to think hard about whether you really know as much as you think you do when you try to rationalize away some odd report. (And rationalizing them away is the usual response). The conventional wisdom may not be as solid as it appears; you can probably put your foot through it in numerous places with a well-aimed kick. As the paper puts it: "Ultimately, the fact that something has never been done is the flimsiest of evidence that it cannot be done."

That's worth thinking about in terms of medicinal chemistry, as well as organic synthesis. Look, for example, at Rule-Of-Five type criteria. We've had a lot of discussions about these around here (those links are just some of the more recent ones), and I'll freely admit that I've been more in the camp that says "Time and money are fleeting, bias your work towards friendly chemical space". But it's for sure that there are compounds that break all kinds of rules and still work. Maybe more time and money should go into figuring out what it is about those drugs, and whether there are any general lessons we can learn about how to break the rules wisely. It's not that work in this area hasn't been done, but we still have a poor understanding of what's going on.

Comments (16) + TrackBacks (0) | Category: Chemical News | Drug Industry History | The Scientific Literature | Who Discovers and Why

August 21, 2012

Four Billion Compounds At a Time

Email This Entry

Posted by Derek

This paper from GlaxoSmithKline uses a technology that I find very interesting, but it's one that I still have many questions about. It's applied in this case to ADAMTS-5, a metalloprotease enzyme, but I'm not going to talk about the target at all, but rather, the techniques used to screen it. The paper's acronym for it is ELT, Encoded Library Technology, but that "E" could just as well stand for "Enormous".

That's because they screened a four billion member library against the enzyme. That is many times the number of discrete chemical species that have been described in the entire scientific literature, in case you're wondering. This is done, as some of you may have already guessed, by DNA encoding. There's really no other way; no one has a multibillion-member library formatted in screening plates and ready to go.

So what's DNA encoding? What you do, roughly, is produce a combinatorial diversity set of compounds while they're attached to a length of DNA. Each synthetic step along the way is marked by adding another DNA sequence to the tag, so (in theory) every compound in the collection ends up with a unique oligonucleotide "bar code" attached to it. You screen this collection, narrow down on which compound (or compounds) are hits, and then use PCR and sequencing to figure out what their structures must have been.
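The bookkeeping behind that "bar code" idea is simple enough to sketch in a few lines of Python. Every tag sequence and building-block name below is invented for illustration (real DEL tags are longer, error-tolerant, and read out by sequencing), and the genuinely hard part - chemistry that survives being run on DNA - isn't shown at all. But it conveys why the combinatorics scale so fast: a few synthetic cycles with a thousand-odd building blocks each multiply out to billions of compounds, while the tag stays a short, readable string.

```python
# Hypothetical codon tables: at each synthetic step, the chosen building
# block appends its own short DNA "codon" to the growing tag.
STEP_CODES = {
    1: {"A-block": "ACGT", "B-block": "TGCA"},
    2: {"amine-1": "GGAA", "amine-2": "CCTT"},
}

def encode(route):
    """route: list of (step, building_block) -> concatenated DNA tag."""
    return "".join(STEP_CODES[step][block] for step, block in route)

def decode(tag):
    """Invert a tag back to its synthesis route (fixed 4-mer codons)."""
    lookup = {code: (step, block)
              for step, blocks in STEP_CODES.items()
              for block, code in blocks.items()}
    codons = [tag[i:i + 4] for i in range(0, len(tag), 4)]
    return [lookup[c] for c in codons]

tag = encode([(1, "A-block"), (2, "amine-2")])
print(tag, decode(tag))  # the tag round-trips back to the route
```

In the real workflow, of course, you never decode one compound at a time: you PCR-amplify whatever stuck to the target and sequence the whole enriched pool at once, which is exactly the molecular-biology magic the next paragraph is about.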

As you can see, the only way this can work is through the magic of molecular biology. There are so many enzymatic methods for manipulating DNA sequences, and they work so well compared with standard organic chemistry, that ridiculously small amounts of DNA can be detected, amplified, sequenced, and worked with. And that's what lets you make a billion member library; none of the components can be present in very much quantity (!)
[Figure: the DNA-tagged 1,3,5-triazine library scaffold]
This particular library comes off of a 1,3,5-triazine, which is not exactly the most cutting-edge chemical scaffold out there (I well recall people making collections of such things back in about 1992). But here's where one of the big questions comes up: what if you have four billion of the things? What sort of low hit rate can you not overcome by that kind of brute force? My thought whenever I see these gigantic encoded libraries is that the whole field might as well be called "Return of Combichem: This Time It Works", and that's what I'd like to know: does it?

There are other questions. I've always wondered about the behavior of these tagged molecules in screening assays, since I picture the organic molecule itself as about the size of a window air conditioner poking out from the side of a two-story house of DNA. It seems strange to me that these beasts can interact with protein targets in ways that can be reliably reproduced once the huge wad of DNA is no longer present, but I've been assured by several people that this is indeed the case.

In this example, two particular lineages of compounds stood out as hits, which makes you much happier than a collection of random singletons. When the team prepared a selection of these as off-DNA "real organic compounds", many of them were indeed nanomolar hits, although a few dropped out. Interestingly, none of the compounds had the sorts of zinc-binding groups that you'd expect against the metalloprotease target. The rest of the paper is a more traditional SAR exploration of these, leading to what one has to infer are more tool/target validation compounds rather than drug candidates per se.

I know that GSK has been doing this sort of thing for a while, and from the looks of it, this work itself was done a while ago. For one thing, it's in J. Med. Chem., which is not where anything hot off the lab bench appears. For another, several of the authors of the paper appear with "Present Address" footnotes, so there has been time for a number of people on this project to have moved on completely. And that brings up the last set of questions, for now: has this been a worthwhile effort for GSK? Are they still doing it? Are we just seeing the tip of a large and interesting iceberg, or are we seeing the best that they've been able to do? That's the drug industry for you; you never know how many cards have been turned over, or why.

Comments (24) + TrackBacks (0) | Category: Chemical Biology | Chemical News | Drug Assays | Drug Industry History

August 9, 2012

Getting Drug Research Really, Really Wrong

Email This Entry

Posted by Derek

The British Medical Journal says that the "widely touted innovation crisis in pharmaceuticals is a myth". The British Medical Journal is wrong.

There, that's about as direct as I can make it. But allow me to go into more detail, because that's not the only thing they're wrong about. This is a new article entitled "Pharmaceutical research and development: what do we get for all that money?", and it's by Joel Lexchin (York University) and Donald Light of UMDNJ. And that last name should be enough to tell you where this is all coming from, because Prof. Light is the man who's publicly attached his name to an estimate that developing a new drug costs about $43 million.

I'm generally careful, when I bring up that figure around people who actually develop drugs, not to do so when they're in the middle of drinking coffee or working with anything fragile, because it always provokes startled expressions and sudden laughter. These posts go into some detail about how ludicrous that number is, but for now, I'll just note that it's hard to see how anyone who seriously advances that estimate can be taken seriously. But here we are again.

Light and Lexchin's article makes much of Bernard Munos' work (which we talked about here), which shows a relatively constant rate of new drug discovery. They should go back and look at his graph, because they might notice that the slope of the line in recent years has not kept up with the historical rate. And they completely leave out one of the other key points that Munos makes: that even if the rate of discovery were to have remained linear, the costs associated with it sure as hell haven't. No, it's all a conspiracy:

"Meanwhile, telling "innovation crisis" stories to politicians and the press serves as a ploy, a strategy to attract a range of government protections from free market, generic competition."

Ah, that must be why the industry has laid off thousands and thousands of people over the last few years: it's all a ploy to gain sympathy. We tell everyone else how hard it is to discover drugs, but when we're sure that there are no reporters or politicians around, we high-five each other at how successful our deception has been. Because that's our secret, according to Light and Lexchin. It's apparently not any harder to find something new and worthwhile, but we'd rather just sit on our rears and crank out "me-too" medications for the big bucks:

"This is the real innovation crisis: pharmaceutical research and development turns out mostly minor variations on existing drugs, and most new drugs are not superior on clinical measures. Although a steady stream of significantly superior drugs enlarges the medicine chest from which millions benefit, medicines have also produced an epidemic of serious adverse reactions that have added to national healthcare costs".

So let me get this straight: according to these folks, we mostly just make "minor variations", but the few really new drugs that come out aren't so great either, because of their "epidemic" of serious side effects. Let me advance an alternate set of explanations, one that I call, for lack of a better word, "reality". For one thing, "me-too" drugs are not identical, and their benefits are often overlooked by people who do not understand medicine. There are overcrowded therapeutic areas, but they're not common. The reason that some new drugs make only small advances on existing therapies is not because we like it that way, and it's especially not because we planned it that way. This happens because we try to make big advances, and we fail. Then we take what we can get.

No therapeutic area illustrates this better than oncology. Every new target in that field has come in with high hopes that this time we'll have something that really does the job. Angiogenesis inhibitors. Kinase inhibitors. Cell cycle disruptors. Microtubules, proteosomes, apoptosis, DNA repair, metabolic disruption of the Warburg effect. It goes on and on and on, and you know what? None of them work as well as we want them to. We take them into the clinic, give them to terrified people who have little hope left, and we watch as we provide them with, what? A few months of extra life? Was that what we were shooting for all along? Do we grin and shake each other's hands when the results come in? "Another incremental advance! Rock and roll!"

Of course not. We're disappointed, and we're pissed off. But we don't know enough about cancer (yet) to do better, and cancer turns out to be a very hard condition to treat. It should also be noted that the financial incentives are there to discover something that really does pull people back from the edge of the grave, so you'd think that we money-grubbing, public-deceiving, expense-padding mercenaries might be attracted by that prospect. Apparently not.

The same goes for Alzheimer's disease. Just how much money has the industry spent over the last quarter of a century on Alzheimer's? I worked on it twenty years ago, and God knows that never came to anything. Look at the steady march, march, march of failure in the clinic - and keep in mind that these failures tend to come late in the game, during Phase III. If you suggest to anyone in the business that you can run an Alzheimer's Phase III program and bring the whole thing in for $43 million, you'll be invited to stop wasting everyone's time. Bapineuzumab's trials have surely cost several times that, and Pfizer/J&J are still pressing on. And before that you had Elan working on active immunization, which is still going on, and you have Lilly's other antibody, which is still going on, and Genentech's (which is still going on). No one has high hopes for any of these, but we're still burning piles of money to try to find something. And what about the secretase inhibitors? How much time and effort has gone into beta- and gamma-secretase? What did the folks at Lilly think when they took their inhibitor way into Phase III only to find out that it made Alzheimer's slightly worse instead of helping anyone? Didn't they realize that Professors Light and Lexchin were on to them? That they'd seen through the veil and figured out the real strategy of making tiny improvements on the existing drugs that attack the causes of Alzheimer's? What existing drugs to target the causes of Alzheimer's are they talking about?

Honestly, I have trouble writing about this sort of thing, because I get too furious to be coherent. I've been doing this sort of work since 1989, and I have spent the great majority of my time working on diseases for which no good therapies existed. The rest of the time has been spent on new mechanisms, new classes of drugs that should (or should have) worked differently than the existing therapies. I cannot recall a time when I have worked on a real "me-too" drug of the sort that Light and Lexchin seem to think the industry spends all its time on.

That's because of yet another factor they have not considered: simultaneous development. Take a look at that paragraph above, where I mentioned all those Alzheimer's therapies. Let's be wildly, crazily optimistic and pretend that bapineuzumab manages to eke out some sort of efficacy against Alzheimer's (which, by the way, would put it right into that "no real medical advance" category that Light and Lexchin make so much of). And let's throw caution out the third-floor window and pretend that Lilly's solanezumab actually does something, too. Not much - there's a limit to how optimistic a person can be without pharmacological assistance - but something, some actual efficacy. Now here's what you have to remember: according to people like the authors of this article, whichever of these antibodies makes it through second is a "me-too" drug that offers only an incremental advance, if anything. Even though all this Alzheimer's work was started on a risk basis, in several different companies, with different antibodies developed in different ways, with no clue as to who (if anyone) might come out on top.

All right, now we get to another topic that articles like this latest one are simply not complete without. That's right, say it together: "Drug companies spend a lot more on marketing than they do on research!" Let's ignore, for the sake of argument, the large number of smaller companies that spend all of their money on R&D and none on marketing, because they have nothing to market yet. Let's even ignore the fact that over the years, the percentage of money being spent on drug R&D has actually been going up. No, let's instead go over this in a way that even professors at UMDNJ and York can understand:

Company X spends, let's say, $10 a year on research. (We're lopping off a lot of zeros to make this easier). It has no revenues from selling drugs yet, and is burning through its cash while it tries to get its first one onto the market. It succeeds, and the new drug will bring in $100 a year for the first two or three years, before the competition catches up with some of the incremental me-toos that everyone will switch to for mysterious reasons that apparently have nothing to do with anything working better. But I digress; let's get back to the key point. That $100 a year figure assumes that the company spends $30 a year on marketing (advertising, promotion, patient awareness, brand-building, all that stuff). If the company does not spend all that time and effort, the new drug will only bring in $60 a year, but that's pure profit. (We're going to ignore all the other costs, assuming that they're the same between the two cases).

So the company can bring in $60 a year by doing no promotion, or it can bring in $70 a year after accounting for the expenses of marketing. The company will, of course, choose the latter. "But," you're saying, "what if all that marketing expense doesn't raise sales from $60 up to $100 a year?" Ah, then you are doing it wrong. The whole point, the raison d'être of the marketing department is to bring in more money than they are spending. Marketing deals with the profitable side of the business; their job is to maximize those profits. If they spend more than those extra profits, well, it's time to fire them, isn't it?
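For the spreadsheet-minded, the toy numbers above reduce to a one-line comparison. This is just a sketch of the hypothetical example (all figures are the made-up ones from the paragraphs above, not real financials):

```python
# Toy model of the marketing-vs-no-marketing comparison from the example above.
# All dollar figures are the hypothetical ones in the text, not real data.

def net_revenue(gross_sales, marketing_spend):
    """Annual revenue after marketing costs; all other costs are assumed
    identical between the two scenarios, as in the example."""
    return gross_sales - marketing_spend

no_marketing = net_revenue(gross_sales=60, marketing_spend=0)     # drug sells $60/yr on its own
with_marketing = net_revenue(gross_sales=100, marketing_spend=30) # marketing lifts sales to $100/yr

print(no_marketing)    # 60
print(with_marketing)  # 70
# Marketing pays for itself only when the sales lift ($40) exceeds its cost ($30).
```

If the $30 of marketing lifted sales by less than $30, the comparison flips, which is the whole "time to fire them" point.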

R&D, on the other hand, is not the profitable side of the business. Far from it. We are black holes of finance: huge sums of money spiral in beyond our event horizons, emitting piteous cries and futile streams of braking radiation, and are never seen again. The point is, these are totally different parts of the company, doing totally different things. Complaining that the marketing budget is bigger than the R&D budget is like complaining that a car's passenger compartment is bigger than its gas tank, or that a ship's sail is bigger than its rudder.

OK, I've spent about enough time on this for one morning; I feel like I need a shower. Let's get on to the part where Light and Lexchin recommend what we should all be doing instead:

What can be done to change the business model of the pharmaceutical industry to focus on more cost effective, safer medicines? The first step should be to stop approving so many new drugs of little therapeutic value. . .We should also fully fund the EMA and other regulatory agencies with public funds, rather than relying on industry generated user fees, to end industry’s capture of its regulator. Finally, we should consider new ways of rewarding innovation directly, such as through the large cash prizes envisioned in US Senate Bill 1137, rather than through the high prices generated by patent protection. The bill proposes the collection of several billion dollars a year from all federal and non-federal health reimbursement and insurance programmes, and a committee would award prizes in proportion to how well new drugs fulfilled unmet clinical needs and constituted real therapeutic gains. Without patents new drugs are immediately open to generic competition, lowering prices, while at the same time innovators are rewarded quickly to innovate again. This approach would save countries billions in healthcare costs and produce real gains in people’s health.

One problem I have with this is that the health insurance industry would probably object to having "several billion dollars a year" collected from it. And that "several" would not mean "two or three", for sure. But even if we extract that cash somehow - an extraction that would surely raise health insurance costs as it got passed along - we now find ourselves depending on a committee that will determine the worth of each new drug. Will these people determine that when the drug is approved, or will they need to wait a few years to see how it does in the real world? If the drug under- or overperforms, does the reward get adjusted accordingly? How, exactly, do we decide how much a diabetes drug is worth compared to one for multiple sclerosis, or TB? What about a drug that doesn't help many people, but helps them tremendously, versus a drug that's taken by a lot of people, but has only milder improvements for them? What if a drug is worth a lot more to people in one demographic versus another? And what happens as various advocacy groups lobby to get their diseases moved further up the list of important ones that deserve higher prizes and more incentives?

These will have to be some very, very wise and prudent people on this committee. You certainly wouldn't want anyone who's ever been involved with the drug industry on there, no indeed. And you wouldn't want any politicians - why, they might use that influential position to do who knows what. No, you'd want honest, intelligent, reliable people, who know a tremendous amount about medical care and pharmaceuticals, but have no financial or personal interests involved. I'm sure there are plenty of them out there, somewhere. And when we find them, why stop with drugs? Why not set up committees to determine the true worth of the other vital things that people in this country need each day - food, transportation, consumer goods? Surely this model can be extended; it all sounds so rational. I doubt if anything like it has ever been tried before, and it's certainly a lot better than the grubby business of deciding prices and values based on what people will pay for things (what do they know, anyway, compared to a panel of dispassionate experts?)

Enough. I should mention that when Prof. Light's earlier figure for drug expense came out, I had a brief correspondence with him, and I invited him to come to this site and try out his reasoning on people who develop drugs for a living. Communication seemed to dry up after that, I have to report. But that offer is still open. Reading his publications makes me think that he (and his co-authors) have never actually spoken with anyone who does this work or has any actual experience with it. Come on down, I say! We're real people, just like you. OK, we're more evil, fine. But otherwise. . .

Comments (74) + TrackBacks (0) | Category: "Me Too" Drugs | Business and Markets | Cancer | Drug Development | Drug Industry History | Drug Prices | The Central Nervous System | Why Everyone Loves Us

June 4, 2012

By Any Other Name


Posted by Derek

Over at Xconomy, Luke Timmerman asks why any biopharma company would go to the trouble and expense of changing its name. There are several reasons (such as having chosen a lousy name to begin with), but he's right that most company names don't mean much before or after a change.

He also has a poll of some name changes, asking if they were upgrades or not. The first on his list is my nomination for the Worst Company Name: AbbVie, which is what Abbott decided to call its pharma business as it spins that out on its own. I just can't say enough bad things about that one - it's meaningless, for starters, and that double "b" looks like a misprint. The b/v consonant combination doesn't exactly roll off the tongue; the "Vie" is silly for a company not based in France (or at least selling something that's supposed to be French), and I've never been a fan of InterCapitalization. Other than that, I guess it's fine.

So here's a quick question: what's the biotech/pharma company out there with the worst name - well, other than AbbVie? Is there anyone who can beat them? Boring doesn't count. We're looking for actually harmful. Nominees?

Comments (66) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

May 22, 2012

The NIH's Drug Repurposing Initiative: Will It Be a Waste?


Posted by Derek

The NIH's attempt to repurpose shelved development compounds and other older drugs is underway:

The National Institutes of Health (NIH) today announced a new plan for boosting drug development: It has reached a deal with three major pharmaceutical companies to share abandoned experimental drugs with academic researchers so they can look for new uses. NIH is putting up $20 million for grants to study the drugs.

"The goal is simple: to see whether we can teach old drugs new tricks," said Health and Human Services Secretary Kathleen Sebelius at a press conference today that included officials from Pfizer, AstraZeneca, and Eli Lilly. These companies will give researchers access to two dozen compounds that passed through safety studies but didn't make it beyond mid-stage clinical trials. They shelved the drugs either because they didn't work well enough on the disease for which they were developed or because a business decision sidelined them.

There are plenty more where those came from, and I certainly wish people luck finding uses for them. But I've no idea what the chances for success might be. On the one hand, having a compound that's passed all the preclinical stages of development and has then been into humans is no small thing. On that ever-present other hand, though, randomly throwing these compounds against unrelated diseases is unlikely to give you anything (there aren't enough of them to do that). My best guess is that they have a shot in closely related disease fields - but then again, testing widely might show us that there are diseases that we didn't realize were related to each other.

John LaMattina is skeptical:

Well, the NIH has recently expanded the remit of NCATS. NCATS will now be testing drugs that have been shelved by the pharmaceutical industry for other potential uses. The motivation for this is simple. They believe that these once promising but failed compounds could have other uses that the inventor companies haven’t yet identified. I’d like to reiterate the view of Dr. Vagelos – it’s fairy time again.

My views on this sort of initiative, which goes by a variety of names – “drug repurposing,” “drug repositioning,” “reusable drugs” – have been previously discussed in my blog. I do hope that people can have success in this type of work. But I believe successes are going to be rare.

The big question is, rare enough to count the money and time as wasted, or not? I guess we'll find out. Overall, I'd rather start with a compound that I know does what I want it to do, and then try to turn it into a drug (phenotypic screening). Starting with a compound that you know is a drug, but doesn't necessarily do what you want it to, is going to be tricky.

Comments (33) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | Drug Development | Drug Industry History

The Counting of Beans


Posted by Derek

This article from the Telegraph has nothing to say at all about the drug industry. But you might find it strangely familiar and appropriate, starting with the headline: Bloodless Bean Counters Rule Over Us:

You find this hollowing-out everywhere. In schools, the head who does not teach is now a familiar, indeed dominant figure. University vice-chancellors, instead of being dons who move from their subject into administration for a period of their lives, are now virtually lifelong managers, with hugely increased salaries to match. It is even commonplace for charities to be run by people with no commitment to the charity’s specific purpose, but proud possession of what they call the necessary “skill-sets”, such as corporate governance. . .

. . .These habits are now pervasive across industry and the public services. “Diversity” is always “celebrated”, but it never means diversity of thought. The people who tell you they are “passionate about” X or Y are usually the most bloodless ones in the outfit.

In such cultures, just as the experts, the professionals and the technicians bitterly resent the managerialists for neither understanding nor caring, so the managerialists secretly detest the professionals who, they believe, get in the way of their rationalisations. They are desperate to “let go” of such people. Very unhappy organisations result.

Or then again, perhaps you haven't encountered anything like this after a few years in the industry. What, after all, are the odds?

Comments (8) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

May 3, 2012

The Biotech Class of the Early 90s


Posted by Derek

Here's an excellent piece by venture capital guy Bruce Booth, looking back at the heady days of 1991-1994. I can tell you that they weren't so heady in Big Pharma, but there were a lot of startups coming along. Included are some really big names of today, but also a lot of outfits that no one even remembers any more. And how have investors fared? That depends:

Only a subset of the 1991-1994 IPO window have accrued real value over time. There were certainly a few big winners in there – Gilead probably being the biggest, up over 100x since its IPO in 1992. MedImmune also fared quite well with its $16B acquisition (though AZ is not thrilled about it now), and Vertex is up 10x.

But let’s take the prior two examples, Isis and Amylin, which represent “successful” 20-year old mid-cap biotechs. Both have gone from preclinical stage companies around their IPOs to having products launched or filed with the FDA. But they haven’t really created any shareholder value over 20 years. Isis today trades at $8 per share, but it went public at $10 per share. Amylin went out at $14, but closed on the end of its first day of trading in 1992 at $21 per share. It now trades at $25. So for 20 years, these companies (and many, many others in the 1991-1994 cohort) have underperformed not only all major equity indices, but also treasury bills, and consumed billions in equity capital. And recall that many more companies from this window, probably at least half, ended up dying long whimpering deaths like long-forgotten Autoimmune Inc and Alpha-Beta Technology.

And that's a big reason why you don't see so many big biotech/small pharma IPOs any more. The markets are a different place, twenty years on:

The current reality, shaped by a couple decades of lackluster performance, is that the public markets aren’t open for business in biotech. While they are much less tolerant of the value-destroying tactics of the past (which is a good thing), they have also set the bar so high as to discourage even great, innovative companies from considering it as a viable option. In this new world, the old company building models just don’t work: it’s hard to back a startup today with an investment thesis around “we’re building the next Gilead” – the capital markets are just so different.

Small companies have to act differently, raise money differently, and sell themselves differently these days. Stay private, do as much as possible virtually or outsourced, sell out to Big Pharma earlier than before. . .it's worth another post or two to talk about some of those models, but the "Let's Have an IPO!" one isn't going to be on the list. Not for some time to come, anyway.

Comments (10) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

A Long-Delayed COX2 Issue Gets Settled - For $450 Million?


Posted by Derek

Has the last shot been fired, very quietly, in the COX-2 discovery wars? Here's the background, in which some readers of this site have probably participated at various times. Once it was worked out that the nonsteroidal antiinflammatory drugs (aspirin, ibuprofen et al.) were inhibitors of the enzyme cyclooxygenase, it began to seem likely that there were other forms of the enzyme as well. But for a while, no one could put their hands on one. That changed in the early 1990s, when Harvey Herschman at UCLA reported the mouse COX2 gene. The human analog was discovered right on the heels of that one, with priority usually given to Dan Simmons of BYU, with Donald Young of the University of Rochester there at very nearly the same time.

The Rochester story is one that many readers will be familiar with. The university, famously, obtained a patent for compounds that exerted a therapeutic effect through inhibition of COX-2, without specifying what compounds those might be. They did not, in fact, have any, nor did they give any hints about what they'd look like, and this is what sank them in the end when the university lost its case against Searle (and its patent) for not fulfilling the "written description" requirement.

But there was legal action on the BYU end of things, too. Simmons and the university filed suit several years ago, saying that Simmons had entered into a contract with Monsanto in 1991 to discover COX2 inhibitors. The suit claimed that Monsanto had (wrongly) advised Simmons not to file for a patent on his discoveries, and had also reversed course, terminating the deal to concentrate on the company's internal efforts instead once it had obtained what it needed from the Simmons work.

That takes us to the tangled origin of the COX2 chemical matter. The progenitor compound is generally taken to be DuP-697, which was discovered and investigated before the COX-2 enzyme was even characterized. The compound had a strong antiinflammatory profile which was nonetheless different from the NSAIDS, which led to strong suspicions that it was indeed acting through the putative "other cyclooxygenase". And so it proved, once the enzyme was discovered, and a look at its structure versus the marketed drugs shows that it was a robust series of structures indeed.

One big difference between the BYU case and the Rochester case was that Simmons did indeed have a contract, and it was breach-of-contract that formed the basis for the suit. The legal maneuverings have been going on for several years now. But now Pfizer has issued a press release saying that they have reached "an amicable settlement on confidential terms". The only real detail given is that they're going to establish the Dan Simmons Chair at BYU in recognition of his work.

But there may be more to it than that. Pfizer has also reported taking a $450 million charge against earnings related to this whole matter, which certainly makes one think of Latin sayings, among them post hoc, ergo propter hoc and especially quid pro quo. We may not ever get the full details, since part of the deal would presumably include not releasing them. But it looks like a substantial sum has changed hands.

Comments (12) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Patents and IP

April 25, 2012

Drug Company Culture: It's Not Helping


Posted by Derek

I wanted to call attention to a piece by Bruce Booth over at Forbes. He starts off from the Scannell paper in Nature Reviews Drug Discovery that we were discussing here recently, but he goes on to another factor. And it's a big one: culture.

Fundamentally, I think the bulk of the last decade’s productivity decline is attributable to a culture problem. The Big Pharma culture has been homogenized, purified, sterilized, whipped, stirred, filtered, etc and lost its ability to ferment the good stuff required to innovate. This isn’t covered in most reviews of the productivity challenge facing our industry, because its nearly impossible to quantify, but it’s well known and a huge issue.

You really should read the whole thing, but I'll mention some of his main points. One of those is "The Tyranny of the Committee". You know, nothing good can ever be decided unless there are a lot of people in the room - right? And then that decision has to move to another room full of people who give it a different working-over, with lots more PowerPoint - right? And then that decision moves up to a group of higher-level people, who look at the slides again - or summaries of them - and make a collective decision. That's how it's supposed to work - uh, right?

Another is "Stagnation Through Risk Avoidance". Projects go on longer, and keep everyone busy, if the nasty issues aren't faced too quickly. And everyone has room to deflect blame when things go wrong, if plenty of work has been poured into the project, from several different areas, before the bad news hits. Most of the time, you know, some sort of bad news is waiting out there, so you want to have yourself (and your career) prepared beforehand - right? After all, several high-level committees signed off on this project. . .

And then there's "Organizational Entropy", which we've discussed around here, too. When the New, Latest, Really-Going-to-Work reorganization hits, as it does every three years or so, things slow down. They have to. And a nice big merger doesn't just slow things down, it brings everything to a juddering halt. The cumulative effect of these things can be deadly.

As Booth says, there are other factors as well. I'd add a couple to the list, myself: the tendency to think that If This Was Any Good, Someone Else Would Be Doing It (which is another way of being able to run for cover if things don't work out), and the general human sunk-cost fallacy of We've Come This Far; We Have to Get Something Out of This. But his main point stands, and has stood for many years. The research culture in many big drug companies stands in the way of getting things done. More posts on this to follow.

Comments (36) + TrackBacks (0) | Category: Drug Industry History | Life in the Drug Labs | Who Discovers and Why

April 16, 2012

Phenotypic Screening's Comeback


Posted by Derek

Here's an excellent overview of phenotypic screening at SciBx. For those outside the field, phenotypic screening is the way things used to be all the time in the drug discovery business, decades ago: (1) Give compounds to a living system, and watch what happens. (2) Wait until you find a compound that does what you want, and develop that one if you can.

That's as opposed to target-based drug discovery, which began taking over in the 1970s or so, and has grown ever since as molecular biology advanced. That's where you figure out enough about a biochemical pathway to know what enzyme/receptor/etc. you should try to inhibit, and you screen against that one alone to find your leads. That has worked out very well in some cases, but not as often as people would have imagined back at the beginning.

In fact, I (and a number of other people) have been wondering if the whole molecular-biology target-based approach has been something of a dead end. A recent analysis suggested that phenotypic screens have been substantially more productive in generating first-in-class drugs, and an overemphasis on individual targets has been suggested as a reason for the lack of productivity in drug discovery.

As that new article makes clear, though, in most cases of modern phenotypic screening, people are going back from their hit compounds and finding out how they work, when possible. That's actually an excellent platform for discoveries in biology, too, as well as for finding medicinally active compounds. I'm glad to see cell- and tissue-based assays making a comeback, and I hope that they can bail us all out a bit.

Comments (30) + TrackBacks (0) | Category: Drug Assays | Drug Industry History

April 4, 2012

The Artificial Intelligence Economy?


Posted by Derek

Now here's something that might be about to remake the economy, or (on the other robotic hand) it might not be ready to just yet. And it might be able to help us out in drug R&D, or it might turn out to be mostly beside the point. What the heck am I talking about, you ask? The so-called "Artificial Intelligence Economy". As Adam Ozimek says, things are looking a little more futuristic lately.

He's talking about things like driverless cars and quadrotors, and Tyler Cowen adds the examples of things like Apple's Siri and IBM's Watson, as part of a wider point about American exports:

First, artificial intelligence and computing power are the future, or even the present, for much of manufacturing. It’s not just the robots; look at the hundreds of computers and software-driven devices embedded in a new car. Factory floors these days are nearly empty of people because software-driven machines are doing most of the work. The factory has been reinvented as a quiet place. There is now a joke that “a modern textile mill employs only a man and a dog—the man to feed the dog, and the dog to keep the man away from the machines.”

The next steps in the artificial intelligence revolution, as manifested most publicly through systems like Deep Blue, Watson and Siri, will revolutionize production in one sector after another. Computing power solves more problems each year, including manufacturing problems.

Two MIT professors have written a book called Race Against the Machine about all this, and it appears to be sort of a response to Cowen's earlier book The Great Stagnation. (Here's an article of theirs in The Atlantic making their case).

One of the export-economy factors that it (and Cowen) bring up is that automation makes a country's wages (and labor costs in general) less of a factor in exports, once you get past the capital expenditure. And as the size of that expenditure comes down, it becomes easier to make that leap. One thing that means, of course, is that less-skilled workers find it harder to fit in. Here's another Atlantic article, from the print magazine, which looked at an auto-parts manufacturer with a factory in South Carolina (the whole thing is well worth reading):

Before the rise of computer-run machines, factories needed people at every step of production, from the most routine to the most complex. The Gildemeister (machine), for example, automatically performs a series of operations that previously would have required several machines—each with its own operator. It’s relatively easy to train a newcomer to run a simple, single-step machine. Newcomers with no training could start out working the simplest and then gradually learn others. Eventually, with that on-the-job training, some workers could become higher-paid supervisors, overseeing the entire operation. This kind of knowledge could be acquired only on the job; few people went to school to learn how to work in a factory.
Today, the Gildemeisters and their ilk eliminate the need for many of those machines and, therefore, the workers who ran them. Skilled workers now are required only to do what computers can’t do (at least not yet): use their human judgment.

But as that article shows, more than half the workers in that particular factory are, in fact, rather unskilled, and they make a lot more than their Chinese counterparts do. What keeps them employed? That calculation on what it would take to replace them with a machine. The article focuses on one of those workers in particular, named Maddie:

It feels cruel to point out all the Level-2 concepts Maddie doesn’t know, although Maddie is quite open about these shortcomings. She doesn’t know the computer-programming language that runs the machines she operates; in fact, she was surprised to learn they are run by a specialized computer language. She doesn’t know trigonometry or calculus, and she’s never studied the properties of cutting tools or metals. She doesn’t know how to maintain a tolerance of 0.25 microns, or what tolerance means in this context, or what a micron is.

Tony explains that Maddie has a job for two reasons. First, when it comes to making fuel injectors, the company saves money and minimizes product damage by having both the precision and non-precision work done in the same place. Even if Mexican or Chinese workers could do Maddie’s job more cheaply, shipping fragile, half-finished parts to another country for processing would make no sense. Second, Maddie is cheaper than a machine. It would be easy to buy a robotic arm that could take injector bodies and caps from a tray and place them precisely in a laser welder. Yet Standard would have to invest about $100,000 on the arm and a conveyance machine to bring parts to the welder and send them on to the next station. As is common in factories, Standard invests only in machinery that will earn back its cost within two years. For Tony, it’s simple: Maddie makes less in two years than the machine would cost, so her job is safe—for now. If the robotic machines become a little cheaper, or if demand for fuel injectors goes up and Standard starts running three shifts, then investing in those robots might make sense.

At this point, some similarities to the drug discovery business will be occurring to readers of this blog, along with some differences. The automation angle isn't as important, or not yet. While pharma most definitely has a manufacturing component (and how), the research end of the business doesn't resemble it very much, despite numerous attempts by earnest consultants and managers to make it so. From an auto-parts standpoint, there's little or no standardization at all in drug R&D. Every new drug is like a completely new part that no one's ever built before; we're not turning out fuel injectors or alternators. Everyone knows how a car works. Making a fundamental change in that plan is a monumental challenge, so the auto-parts business is mostly about making small variations on known components to the standards of a given customer. But in pharma - discovery pharma, not the generic companies - we're wrenching new stuff right out of thin air, or trying to.

So you'd think that we wouldn't be feeling the low-wage competitive pressure so much, but as the last ten years have shown, we certainly are. Outsourcing has come up many a time around here, and the very fact that it exists shows that not all of drug research is quite as bespoke as we might think. (Remember, the first wave of outsourcing, which is still very much a part of the business, was the move to send the routine methyl-ethyl-butyl-futile analoging out somewhere cheaper). And this takes us, eventually, to the Pfizer-style split between drug designers (high-wage folks over here) and the drug synthesizers (low-wage folks over there). Unfortunately, I think that you have to go the full reductio ad absurdum route to get that far, but Pfizer's going to find out for us if that's an accurate reading.

What these economists are also talking about is, I'd say, the next step beyond Moore's Law: once we have all this processing power, how do we use it? The first wave of computation-driven change happened because of the easy answers to that question: we had a lot of number-crunching that was being done by hand, or very slowly by some route, and we now had machines that could do what we wanted to do more quickly. This newer wave, if wave it is, will be driven more by software taking advantage of the hardware power that we've been able to produce.

The first wave didn't revolutionize drug discovery in the way that some people were hoping for. Sheer brute force computational ability is of limited use in drug discovery, unfortunately, but that's not always going to be the case, especially as we slowly learn how to apply it. If we really are starting to get better at computational pattern recognition and decision-making algorithms, where could that have an impact?

It's important to avoid what I've termed the "Andy Grove fallacy" in thinking about all this. I think that it is a result of applying first-computational-wave thinking too indiscriminately to drug discovery, which means treating it too much like a well-worked-out human-designed engineering process. Which it certainly isn't. But this second-wave stuff might be more useful.

I can think of a few areas: in early drug discovery, we could use help teasing patterns out of large piles of structure-activity relationship data. I know that there are (and have been) several attempts at doing this, but it's going to be interesting to see if we can do it better. I would love to be able to dump a big pile of structures and assay data points into a program and have it say the equivalent of "Hey, it looks like an electron-withdrawing group in the piperidine series might be really good, because of its conformational similarity to the initial lead series, but no one's ever gotten back around to making one of those because everyone got side-tracked by the potency of the chiral amides".

Software that chews through stacks of PK and metabolic stability data would be worth having, too, because there sure is a lot of it. There are correlations in there that we really need to know about, that could have direct relevance to clinical trials, but I worry that we're still missing some of them. And clinical trial data itself is the most obvious place for software that can dig through huge piles of numbers, because those are the biggest we've got. From my perspective, though, it's almost too late for insights at that point; you've already been spending the big money just to get the numbers themselves. But insights into human toxicology from all that clinical data, that stuff could be gold. I worry that it's been like the concentration of gold in seawater, though: really there, but not practical to extract. Could we change that?

All this makes me actually a bit hopeful about experiments like this one that I described here recently. Our ignorance about medicine and human biochemistry is truly spectacular, and we need all the help we can get in understanding it. There have to be a lot of important things out there that we just don't understand, or haven't even realized the existence of. That lack of knowledge is what gives me hope, actually. If we'd already learned what there is to know about discovering drugs, and were already doing the best job that could be done, well, we'd be in a hell of a fix, wouldn't we? But we don't know much, we're not doing it as well as we could, and that provides us with a possible way out of the fix we're in.

So I want to see as much progress as possible in the current pattern-recognition and data-correlation driven artificial intelligence field. We discovery scientists are not going to automate ourselves out of business so quickly as factory workers, because our work is still so hypothesis-driven and hard to define. (For a dissenting view, with relevance to this whole discussion, see here). It's the expense of applying the scientific method to human health that's squeezing us all, instead, and if there's some help available in that department, then let's have it as soon as possible.

Comments (32) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History | In Silico | Pharmacokinetics | Toxicology

April 2, 2012

"Taking the Ax to the Scientists Is Probably a Mistake"

Email This Entry

Posted by Derek

So says Matthew Herper in Forbes, and I'm certainly not going to argue with him. His point is what he calls lack of appreciation for the human capital in drug discovery:

An ideal drug company would follow all sorts of crazy ideas in early research, with the goal of selecting those where there was a high probability of believing they would actually prove effective in clinical development. It would bulk up on scientists, and try to limit the number of large clinical trials it conducted to those where some kind of test — blood levels of some protein, perhaps — led researchers to think they had a high probability of success. (Novartis, the most successful company in terms of getting new drugs to market, has moved in this direction.) But the tendency of the shutdowns has been to shut laboratories, too. Look at Merck’s stance toward the old Organon labs or Pfizer’s decision to shut the Michigan labs where Lipitor was invented. Taking the ax to the scientists is probably a mistake.

There's always been a disconnect between the business end and the scientific end, but the stresses of the last few years have opened it up wider than ever. The business of making money from drug discovery has never been trickier (or more expensive), and the scientists themselves have never felt more threatened. I can see it in the comments here on this site, whenever the topic of layoffs or top-management incompetence comes up. There are a lot of hard feelings out there - and, really, given the way things have been going, why wouldn't there be?

But at the risk of collecting some thrown bricks myself, I see where the business people are coming from. Our current cost structures are unsustainable. And although I don't agree with the solution of laying everyone off, I don't know what I would do instead. For many companies, it would have been better to have started adjusting years ago, although there's hindsight bias to keep in mind when you think that way. Many companies did try to start adjusting years ago, only to be overwhelmed by conditions even worse than they'd counted on. Then there are a few organizations that just look unfixable by any means anyone can think up.

But I think it's safe to say that relations between the two lobes of the drug R&D enterprise, the financial one and the scientific one, have probably never been worse. It's nothing that some success and hiring couldn't fix, but those are thin on the ground these days.

Comments (84) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

March 27, 2012

Virtual Biotech, Like It or Not

Email This Entry

Posted by Derek

We've all been hearing for a while about "virtual biotechs". The term usually refers to a company with only a handful of employees and no real laboratory space of its own. All the work is contracted out. That means that what's left back at the tiny headquarters (which in a couple of cases is as small as one person's spare bedroom) is the IP. What else could it be? There's hardly any physical property at all. It's as pure a split as you can get between intellectual property (ideas, skills, actual patents) and everything else. Here's a 2010 look at the field in San Diego, and here's a more recent look from Xconomy. (I last wrote about the topic here).

Obviously, this gets easier to do earlier in the whole drug development process, where less money is involved. That said, there are difficulties at both ends. A large number of these stories seem to involve people who were at a larger company when it ran out of money, but still had some projects worth looking at. The rest of the cases seem to come out of academia. In other words, the ideas themselves (the key part of the whole business) were generated somewhere with more infrastructure and funding. Trying to get one of these off the ground otherwise would be a real bootstrapping problem.

And at the other end of the process, getting something all the way through the clinic like this also seems unlikely. The usual end point is licensing out to someone with more resources, as this piece from Xconomy makes clear:

In the meantime, one biotech model gaining traction is the single asset, infrastructure-lite, development model, which deploys modest amounts of capital to develop a single compound to an early clinical data package which can be partnered with pharma. The asset resides within an LLC, and following the license transaction, the LLC is wound down and distributes the upfront, milestone and royalty payments to the LLC members on a pro rata basis. The key to success in this model is choosing the appropriate asset/indication – one where it is possible to get to a clinical data package on limited capital. This approach excludes many molecules and indications often favored by biotech, and tends to drive towards clinical studies using biomarkers – directly in line with one of pharma’s favored strategies.

This is a much different model, of course, than the "We're going to have an IPO and become our own drug company!" one. But the chances of that happening have been dwindling over the years, and the current funding environment makes it harder than ever, Verastem aside. It's even a rough environment to get acquired in. So licensing is the more common path, and (as this FierceBiotech story says), that's bound to have an effect on the composition of the industry. People aren't holding on to assets for as long as they used to, and they're trying to get by with as little of their own money as they can. Will we end up with a "field of fireflies" model, with dozens, hundreds of tiny companies flickering on and off? What will the business look like after another ten years of this - better, or worse?

Comments (26) + TrackBacks (0) | Category: Business and Markets | Chemical News | Drug Development | Drug Industry History

March 26, 2012

What's the Ugliest Drug? Or The Ugliest Drug Candidate?

Email This Entry

Posted by Derek

I was having one of those "drug-like properties" discussions with colleagues the other day. Admittedly, if you're not in drug discovery yourself, you probably don't have that one very often, but even for us, you'd think that a lot of the issues would be pretty settled by now. Not so.

While everyone broadly agrees that compounds shouldn't be too large or too greasy, where one draws the line is always up for debate. And the arguments get especially fraught in the earlier stages of a project, when you're still deciding on what chemical series to work on. One point of view (the one I subscribe to) says that almost every time, the medicinal chemistry process is going to make your compound larger and greasier, so you'd better start on the smaller and leaner side to give everyone room to work in. But sometimes, Potency Rules, at least for some people and in some organizations, and there's a lead which might be stretching some definitions but is just too active to ignore. (That way, in my experience, lies heartbreak, but there are people who've made successes out of it).

We've argued these same questions here before, more than once. What I'm wondering today is, what's the least drug-like drug that's made it? It's dangerous to ask that question, in a way, because it gives some people what they see as a free pass to pursue ugly chemical matter - after all, Drug Z made it, so why not this one? (That, to my mind, ignores the ars longa, vita brevis aspect: since there's an extra one-in-a-thousand factor with some compounds, given the long odds already, why would you make them even longer?)

But I think it's still worth asking the question, if we can think of what extenuating circumstances made some of these drugs successful. "Sure, your molecular weight isn't as high as Drug Z, which is on the market, but do you have Drug Z's active transport/distribution profile/PK numbers in mice? If not, just why do you think you're going to be so lucky?"

Antibiotics are surely going to make up some of the top ten candidates - some of those structures are just bizarre. There's a fairly recent oncology drug that I think deserves a mention for its structure, too. Anyone have a weirder example of a marketed drug?

What's still making its way through the clinic can be even stranger-looking. Some of the odder candidates I've seen recently have been for the hepatitis C proteins NS5A and NS5B. Bristol-Myers Squibb has disclosed some eye-openers, such as BMS-790052. (To be fair, that target seems to really like chemical matter like this, and the compound, last I heard, was moving along through the clinic.)

And yesterday, as Carmen Drahl reported from the ACS meeting in San Diego, the company disclosed the structure of BMS-791325, a compound targeting NS5B. That's a pretty big one, too - the series it came from started out reasonably, then became not particularly small, and now seems to have really bulked up, and for the usual reasons - potency and selectivity. But overall, it's a clear example of the sort of "compound bloat" that overtakes projects as they move on.

So, nominations are open for three categories: Ugliest Marketed Drug, Ugliest Current Clinical Candidate, and Ugliest Failed Clinical Candidate. Let's see how bad it gets!

Comments (58) + TrackBacks (0) | Category: Drug Development | Drug Industry History

March 14, 2012

The Blackian Demon of Drug Discovery

Email This Entry

Posted by Derek

There's an on-line appendix to that Nature Reviews Drug Discovery article that I've been writing about, and I don't think that many people have read it yet. Jack Scannell, one of the authors, sent along a note about it, and he's interested to see what the readership here makes of it.

It gets to the point that came up in the comments to this post, about the order that you do your screening assays in (see #55 and #56). Do you run everything through a binding assay first, or do you run things through a phenotypic assay first and then try to figure out how they bind? More generally, with either sort of assay, is it better to do a large random screen first off, or is it better to do iterative rounds of SAR from a smaller data set? (I'm distinguishing those two because phenotypic assays provide very different sorts of data density than do focused binding assays).

Statistically, there's actually a pretty big difference there. I'll quote from the appendix:

Imagine that you know all of the 600,000 or so words in the English language and that you are asked to guess an English word written in a sealed envelope. You are offered two search strategies. The first is the familiar ‘20 questions’ game. You can ask a series of questions. You are provided with a "yes" or "no" answer to each, and you win if you guess the word in the envelope having asked 20 questions or fewer. The second strategy is a brute force method. You get 20,000 guesses, but you only get a "yes" or "no" once you have made all 20,000 guesses. So which is more likely to succeed, 20 questions or 20,000 guesses?

A skilled player should usually succeed with 20 questions (since 600,000 is less than 2^20) but would fail nearly 97% of the time with "only" 20,000 guesses.

Our view is that the old iterative method of drug discovery was more like 20 questions, while HTS of a static compound library is more like 20,000 guesses. With the iterative approach, the characteristics of each molecule could be measured on several dimensions (for example, potency, toxicity, ADME). This led to multidimensional structure–activity relationships, which in turn meant that each new generation of candidates tended to be better than the previous generation. In conventional HTS, on the other hand, search is focused on a small and pre-defined part of chemical space, with potency alone as the dominant factor for molecular selection.

Aha, you say, but the game of twenty questions is equivalent to running perfect experiments each time: "Is the word a noun? Does it have more than five letters?" and so on. Each question carves up the 600,000 word set flawlessly and iteratively, and you never have to backtrack. Good experimental design aspires to that, but it's a hard standard to reach. Too often, we get answers that would correspond to "Well, it can be used like a noun on Tuesdays, but if it's more than five letters, then that switches to Wednesday, unless it starts with a vowel".
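The arithmetic behind that "nearly 97%" is easy to check. Here's a minimal sketch in plain Python, using only the numbers from the quoted example:

```python
import math

words = 600_000
# Twenty questions: each yes/no answer can at best halve the candidate
# set, so a perfect player needs ceil(log2(600,000)) questions.
questions_needed = math.ceil(math.log2(words))

# Brute force: 20,000 guesses with no feedback until the end, so each
# guess is an independent 1-in-600,000 shot at the sealed envelope.
guesses = 20_000
p_fail = 1 - guesses / words
```

That comes out to 20 questions (2^20 is about 1.05 million, just over 600,000) and a failure rate of about 96.7% for the blind guesser, matching the figures in the quote.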

The authors try to address this multi-dimensionality with a thought experiment. Imagine chemical SAR space - huge number of points, large number of parameters needed to describe each point.

Imagine we have two search strategies to find the single best molecule in this space. One is a brute force search, which assays a molecule and then simply steps to the next molecule, and so exhaustively searches the entire space. We call this "super-HTS". The other, which we call the “Blackian demon” (in reference to the “Darwinian demon”, which is used sometimes to reflect ideal performance in evolutionary thought experiments, and in tribute to James Black, often acknowledged as one of the most successful drug discoverers), is equivalent to an omniscient drug designer who can assay a molecule, and then make a single chemical modification to step it one position through chemical space, and who can then assay the new molecule, modify it again, and so on. The Blackian demon can make only one step at a time, to a nearest neighbour molecule, but it always steps in the right direction; towards the best molecule in the space. . .

The number of steps for the Blackian demon follows from simple geometry. If you have a d-dimensional space with n nodes in the space, and – for simplicity – these are arranged in a neat line, square, cube, or hypercube, you can traverse the entire space, from corner to corner with d x (n^(1/d)-1) steps. This is because each edge is n^(1/d) nodes in length, and there are d dimensions to traverse. . .When the search space is high dimensional (as is chemical space) and there is a very large number of nodes (as is the case for drug-like molecules), the Blackian demon is many orders of magnitude more efficient than super-HTS. For example, in a 10 dimensional space with 10^40 molecules, the Blackian demon can search the entire space in 10^5 steps (or less), while the brute force method requires 10^40 steps.
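The hypercube arithmetic is easy to verify for the appendix's own example (10 dimensions, 10^40 molecules). A quick sketch in plain Python:

```python
d = 10          # dimensions of the idealized chemical space
side = 10**4    # nodes along each edge, so n = side**d = 10**40 molecules
n = side**d

# Blackian demon: corner-to-corner walk, d * (n**(1/d) - 1) steps.
demon_steps = d * (side - 1)

# Super-HTS: brute force must visit every node in the space.
brute_steps = n
```

The demon needs 99,990 steps, just under the 10^5 quoted, against 10^40 for the exhaustive search: a gap of thirty-five orders of magnitude.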

These are idealized cases, needless to say. One problem is that none of us are exactly Blackian demons - what if you don't always make the right step to the next molecule? What if your iteration only gives one out of ten molecules that get better, or one out of a hundred? I'd be interested to see how that affects the mathematical argument.
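One crude way to poke at that question is to model the fallible demon as a one-dimensional biased walk: each step goes toward the target with probability p and away from it otherwise. This is my own toy caricature, not anything from the paper:

```python
import random

random.seed(0)

def noisy_demon_steps(distance, p_correct, trials=200):
    """Average number of steps to cover `distance` when each step goes
    the right way with probability p_correct (reflecting wall at the start)."""
    total = 0
    for _ in range(trials):
        pos = steps = 0
        while pos < distance:
            pos += 1 if random.random() < p_correct else -1
            pos = max(pos, 0)  # can't back up past the starting point
            steps += 1
        total += steps
    return total / trials
```

For a strongly biased walk the expected time scales roughly as distance / (2p - 1): a demon that steps correctly 90% of the time takes only about 25% longer than a perfect one, but as p sinks toward a coin flip the expected time blows up. On this cartoon, at least, the iterative advantage degrades gracefully at first and then collapses.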

And there's another conceptual problem: much of chemical space is far more barren than this picture suggests. One assumption with the thought experiment (correct me if I'm wrong) is that there actually is a better node to move to each time. But for any drug target, there are huge regions of flat, dead, inactive, un-assayable chemical space. If you started off in one of those, you could iterate until your hair fell out and never get out of the hole. And that leads to another objection to the ground rules of this exercise: no one tries to optimize by random HTS. It's only used to get starting points for medicinal chemists to work on, to make sure that they're not starting in one of those "dead zones". Thoughts?

Comments (45) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

March 12, 2012

The Brute Force Bias

Email This Entry

Posted by Derek

I wanted to return to that Nature Reviews Drug Discovery article I blogged about the other day. There's one reason the authors advance for our problems that I thought was particularly well stated: what they call the "basic research/brute force" bias.

The ‘basic research–brute force’ bias is the tendency to overestimate the ability of advances in basic research (particularly in molecular biology) and brute force screening methods (embodied in the first few steps of the standard discovery and preclinical research process) to increase the probability that a molecule will be safe and effective in clinical trials. We suspect that this has been the intellectual basis for a move away from older and perhaps more productive methods for identifying drug candidates. . .

I think that this is definitely a problem, and it's a habit of thinking that almost everyone in the drug research business has, to some extent. The evidence that there's something lacking has been piling up. As the authors say, given all the advances over the past thirty years or so, we really should have seen more of an effect in the signal/noise of clinical trials: we should have had higher success rates in Phase II and Phase III as we understood more about what was going on. But that hasn't happened.

So how can some parts of a process improve dramatically, yet important measures of overall performance remain flat or decline? There are several possible explanations, but it seems reasonable to wonder whether companies industrialized the wrong set of activities. At first sight, R&D was more efficient several decades ago, when many research activities that are today regarded as critical (for example, the derivation of genomics-based drug targets and HTS) had not been invented, and when other activities (for example, clinical science, animal-based screens and iterative medicinal chemistry) dominated.

This gets us back to a topic that's come up around here several times: whether the entire target-based molecular-biology-driven style of drug discovery (which has been the norm since roughly the early 1980s) has been a dead end. Personally, I tend to think of it in terms of hubris and nemesis. We convinced ourselves that we were smarter than we really were.

The NRDD piece has several reasons for this development, which also ring true. Even in the 1980s, there were fears that the pace of drug discovery was slowing, and a new approach was welcome. A second reason is a really huge one: biology itself has been on a reductionist binge for a long time now. And why not? The entire idea of molecular biology has been incredibly fruitful. But we may be asking more of it than it can deliver.

. . .the ‘basic research–brute force’ bias matched the scientific zeitgeist, particularly as the older approaches for early-stage drug R&D seemed to be yielding less. What might be called 'molecular reductionism' has become the dominant stream in biology in general, and not just in the drug industry. "Since the 1970s, nearly all avenues of biomedical research have led to the gene". Genetics and molecular biology are seen as providing the 'best' and most fundamental ways of understanding biological systems, and subsequently intervening in them. The intellectual challenges of reductionism and its necessary synthesis (the '-omics') appear to be more attractive to many biomedical scientists than the messy empiricism of the older approaches.

And a final reason for this mode of research taking over - and it's another big one - is that it matched the worldview of many managers and investors. This all looked like putting R&D on a more scientific, more industrial, and more manageable footing. Why wouldn't managers be attracted to something that looked like it valued their skills? And why wouldn't investors be attracted to something that looked as if it could deliver more predictable success and more consistent earnings? R&D will give you gray hairs; anything that looks like taming it will find an audience.

And that's how we find ourselves here:

. . .much of the pharmaceutical industry's R&D is now based on the idea that high-affinity binding to a single biological target linked to a disease will lead to medical benefit in humans. However, if the causal link between single targets and disease states is weaker than commonly thought, or if drugs rarely act on a single target, one can understand why the molecules that have been delivered by this research strategy into clinical development may not necessarily be more likely to succeed than those in earlier periods.

That first sentence is a bit terrifying. You read it, and part of you thinks "Well, yeah, of course", because that is such a fundamental assumption of almost all our work. But what if it's wrong? Or just not right enough?

Comments (64) + TrackBacks (0) | Category: Drug Development | Drug Industry History

March 9, 2012

Coaching For Success. Sure.

Email This Entry

Posted by Derek

As some of you know, I'm guest-blogging at The Atlantic this week and next. I think the readership here would enjoy the post I have up today, which draws on some drug-industry experiences of mine. . .

Comments (15) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

March 8, 2012

Eroom's Law

Email This Entry

Posted by Derek

There's another "Troubles of Drug Discovery" piece in Nature Reviews Drug Discovery, but it's a good one. It introduces the concept of "Eroom's Law", and if you haven't had your coffee yet (don't drink it, myself, actually), that's "Moore's Law" spelled backwards. It refers, as you'd fear, to processes that are getting steadily slower and more difficult with time. You know, like getting drugs to market seems to be.

Eroom's Law indicates that powerful forces have outweighed scientific, technical and managerial improvements over the past 60 years, and/or that some of the improvements have been less 'improving' than commonly thought. The more positive anyone is about the past several decades of progress, the more negative they should be about the strength of countervailing forces. If someone is optimistic about the prospects for R&D today, they presumably believe the countervailing forces — whatever they are — are starting to abate, or that there has been a sudden and unprecedented acceleration in scientific, technological or managerial progress that will soon become visible in new drug approvals.

Here's the ugly trend (dollars are inflation-adjusted):
R%26D%20trend.png

I particularly enjoyed, in a grim way, this part:

However, readers of much of what has been written about R&D productivity in the drug industry might be left with the impression that Eroom's Law can simply be reversed by strategies such as greater management attention to factors such as project costs and speed of implementation, by reorganizing R&D structures into smaller focused units in some cases or larger units with superior economies of scale in others, by outsourcing to lower-cost countries, by adjusting management metrics and introducing R&D 'performance scorecards', or by somehow making scientists more 'entrepreneurial'. In our view, these changes might help at the margins but it feels as though most are not addressing the core of the productivity problem.

In the original paper, each of those comma-separated phrases is referenced to the papers that have proposed them, which is being rather scrupulously cruel. But I don't blame the authors, and I don't really disagree with their analysis. As they go on to say, investors don't seem to disagree, either. The cost-cutting that we're seeing everywhere, particularly the cutbacks in research (see all that Sanofi stuff the other day!), is the clearest indicator. People are acting as if the return on pharmaceutical R&D is insufficient compared to the cost of capital, and if you think differently, well, now's a heck of a time to clean up as a contrarian.

Now, the companies (and CEOs) involved in this generally talk about how they're going to turn things around, how cutting their own research will put things on a better footing, how doing external deals will more than make up for it, and so on. But it's getting increasingly hard to believe that. We are heading, at speed, for a world in which fewer and fewer useful medicines are discovered, while more and more people want them.

The authors have four factors that they highlight which have gotten us into this fix, and all four of them are worth discussing (although not all in one post!). The first is what they call the "Better Than the Beatles" effect. That's what we face as we continue to compete against our greatest hits of the past. Take generic Lipitor, as a recent example. It's cheap, and it certainly seems to do the job it's prescribed for (lowering LDL). Between it and the other generic statins, you're going to have a rocky uphill climb if you want to bring a new LDL-lowering therapy to market (which is why not many people are trying to do that).

I think that this is insufficiently appreciated outside of the drug business. Nothing goes away unless it's well and truly superseded. Aspirin is still with us. Ibuprofen still sells like crazy. Blood pressure medicines are, in many cases, cheap as dirt, and the later types are inexorably headed that way. Every single drug that we discover is headed that way; patents are wasting assets, even patents on biologics, although those have been wasting more slowly (with the pace set to pick up). As this paper points out, very few other industries have this problem, or to this degree. (Even the entertainment industry, whose past productions do form a back catalog, has the desire for novelty on its side). But we're in the position of someone trying to come up with a better comb.

More on their other reasons in the next posts - there are some particularly good topics in there, and I don't want to mix everything together. . .

Comments (45) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Drug Prices

February 10, 2012

The Terrifying Cost of a New Drug

Email This Entry

Posted by Derek

Matthew Herper at Forbes has a very interesting column, building on some data from Bernard Munos (whose work on drug development will be familiar to readers of this blog). What he and his colleague Scott DeCarlo have done is conceptually simple: they've gone back over the last 15 years of financial statements from a bunch of major drug companies, and they've looked at how many drugs each company has gotten approved.

Over that long a span, things should even out a bit. There will be some spending that won't show up in the count, money spent on drugs that got approved during the earlier part of that span, but (on the back end) there's spending in there on drugs that haven't made it to market yet, too. What do the numbers look like? Hideous. Appalling. Unsustainable.

AstraZeneca, for example, got 5 drugs on the market during this time span, the worst performance on this list, and thus spent nearly $12 billion per drug. No wonder they're in the shape they're in. GSK, Sanofi, Roche, and Pfizer all spent in the range of $8 billion per approved drug. Amgen did things the cheapest by this measure: 9 drugs approved, at about $3.7 billion per drug.
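The arithmetic behind these figures is nothing exotic: total R&D spend over the window, divided by approvals. A minimal sketch (the spend totals here are back-calculated from the reported per-drug numbers, so treat them as illustrative rather than audited):

```python
# Back-of-the-envelope version of the Herper/DeCarlo calculation:
# 15-year R&D spend divided by drugs approved in that window.
companies = {
    # name: (total R&D spend, $bn; drugs approved) -- illustrative figures
    "AstraZeneca": (59.0, 5),
    "Amgen": (33.2, 9),
}

for name, (spend_bn, approvals) in companies.items():
    per_drug = spend_bn / approvals
    print(f"{name}: ${per_drug:.1f}bn per approved drug")
```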

Now, there are several things to keep in mind about these numbers. First - and I know that I'm going to hear about this from some people - you might assume that different companies are putting different things under the banner of R&D for accounting purposes. But there's a limit to how much of that you can do. Remember, there's a separate sales and marketing budget, too, of course, and people never get tired of pointing out that it's even larger than the R&D one. So how inflated can these figures be? Second, how can these numbers jibe with the $800-million-per-new-drug figure (recently revised to $1 billion), much less with the $43-million-per-new-drug figure (from Light and Warburton) that was making the rounds a few months ago?

Well, I tried to dispose of that last figure at the time. It's nonsense, and if it were true, people would be lining up to start drug companies (and other people would be throwing money at them to help). Meanwhile, the drug companies that already exist wouldn't be frantically firing thousands of people and selling their lab equipment at auction. Which they are. But what about that other estimate, the Tufts/diMasi one? What's the difference?

As Herper rightly says, the biggest factor is failure. The Tufts estimate is for the costs racked up by one drug making it through. But looking at the whole R&D spend, you can see how money is being spent for all the stuff that doesn't get through. And as I and many of the other readers of this blog can testify, there's an awful lot of it. I'm now in my 23rd year of working in this industry, and nothing I've touched has ever made it to market yet. If someone wins $500 from a dollar slot machine, the proper way to figure the costs is to see how many dollars, total, they had to pump into the thing before they won - not just to figure that they spent $1 to win. (Unless, of course, they just sat down, and in this business we don't exactly have that option).
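To put the slot-machine accounting in code form - the odds and payout below are invented, but the bookkeeping is the point:

```python
import random

random.seed(7)

def cost_of_one_win(p_win=1 / 800, payout=500):
    """Feed $1 pulls until the first win; return (dollars spent, net result)."""
    spent = 0
    while True:
        spent += 1
        if random.random() < p_win:
            return spent, payout - spent

spent, net = cost_of_one_win()
# The right denominator for "cost per win" is every dollar pumped in,
# not just the winning pull -- same logic as total R&D spend per approval.
```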

No, these figures really show you why the drug business is in the shape it's in. Look at those numbers, and look at how much a successful drug brings in, and you can see that these things don't always do a very good job of adding up. That's with the expenses doing nothing but rising, and the success rate for drug discovery going in the other direction, too. No one should be surprised that drug prices are rising under these conditions. The surprise is that there are still people out there trying to discover drugs.

Comments (62) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices

January 26, 2012

Putting a Number on Chemical Beauty

Email This Entry

Posted by Derek

There's a new paper out in Nature Chemistry called "Quantifying the Chemical Beauty of Drugs". The authors are proposing a new "desirability score" for chemical structures in drug discovery, one that's an amalgam of physical and structural scores. To their credit, they didn't decide up front which of these things should be the most important. Rather, they took eight properties over 770 well-known oral drugs, and set about figuring how much to weight each of them. (This was done, for the info-geeks among the crowd, by calculating the Shannon entropy for each possibility to maximize the information contained in the final model). Interestingly, this approach tended to give zero weight to the number of hydrogen-bond acceptors and to the polar surface area, which suggests that those two measurements are already subsumed in the other factors.
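Mechanically, a score like this is a weighted combination of per-property desirability functions (the paper uses a weighted geometric mean, with the weights set by that entropy criterion). Here's a stripped-down sketch; the desirability functions, parameters, and weights below are invented placeholders, not the fitted ones from the paper:

```python
import math

def gaussian_desirability(x, ideal, width):
    """Toy desirability: 1.0 at the 'ideal' value, falling off smoothly."""
    return math.exp(-((x - ideal) / width) ** 2)

def qed_like_score(properties, params, weights):
    """Weighted geometric mean of per-property desirabilities."""
    log_sum, w_sum = 0.0, 0.0
    for name, value in properties.items():
        d = gaussian_desirability(value, *params[name])
        log_sum += weights[name] * math.log(max(d, 1e-12))  # guard log(0)
        w_sum += weights[name]
    return math.exp(log_sum / w_sum)

# Invented parameters and weights for three of the eight properties:
params = {"MW": (300, 150), "logP": (2.5, 2.0), "HBD": (1, 2)}
weights = {"MW": 0.66, "logP": 0.46, "HBD": 0.61}
score = qed_like_score({"MW": 350, "logP": 3.1, "HBD": 2}, params, weights)
# score falls in (0, 1]; higher means more "drug-like" under this toy model
```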

And that's all fine, but what does the result give us? Or, more accurately, what does it give us that we haven't had before? After all, there have been a number of such compound-rating schemes proposed before (and the authors, again to their credit, compare their new proposal with the others head-to-head). But I don't see any great advantage. The Lipinski "Rule of 5" is a pretty simple metric - too simple for many tastes - and what this gives you is a Rule of 5 with both categories smeared out towards each other to give some continuous overlap. (See the figure below, which is taken from the paper). That's certainly more in line with the real world, but in that real world, will people be willing to make decisions based on this method, or not?
[Figure: desirability-score distributions, taken from the paper]
The authors go for a bigger splash with the title of the paper, which refers to an experiment they tried. They had chemists across AstraZeneca's organization assess some 17,000 compounds (200 or so for each) with a "Yes/No" answer to "Would you undertake chemistry on this compound if it were a hit?" Only about 30% of the list got a "Yes" vote, and the reasons for rejecting the others were mostly "Too complex", followed closely by "Too simple". (That last one really makes me wonder - doesn't AZ have a big fragment-based drug design effort?) Note also that this sort of experiment has been done before.

Applying their model, the mean score for the "Yes" compounds was 0.67 (s.d. 0.16), and the mean score for the "No" compounds was 0.49 (s.d. 0.23), which they say is a statistically significant difference. With samples this large it would be, although the two distributions still overlap quite a bit. Overall, I wouldn't say that this test has an especially strong correlation with medicinal chemists' ideas of structural attractiveness, but then, I'm not so sure of the usefulness of those ideas to start with. I think that the two ends of the scale are hard to argue with, but there's a great mass of compounds in the middle that people decide that they like or don't like, without being able to back up those statements with much data. (I'm as guilty as anyone here).
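For the curious, here's what those summary statistics work out to. The paper doesn't report the group sizes, so the Ns below just assume the ~30% "Yes" split across all 17,000 compounds:

```python
import math

# Reported summary statistics
mean_yes, sd_yes = 0.67, 0.16
mean_no, sd_no = 0.49, 0.23

# Sample sizes are NOT reported; assume ~30% of 17,000 voted "Yes"
n_yes, n_no = 5100, 11900

# Welch's t-statistic (unequal variances)
se = math.sqrt(sd_yes**2 / n_yes + sd_no**2 / n_no)
t = (mean_yes - mean_no) / se

# Cohen's d (pooled s.d.) -- a sample-size-free measure of separation
sd_pooled = math.sqrt(((n_yes - 1) * sd_yes**2 + (n_no - 1) * sd_no**2)
                      / (n_yes + n_no - 2))
d = (mean_yes - mean_no) / sd_pooled
# With Ns in the thousands, t comes out enormous; d of roughly 0.85 still
# leaves the two score distributions overlapping substantially.
```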

The last part of the paper tries to extend the model from hit compounds to the targets that they bind to - a druggability assessment. The authors looked through the ChEMBL database, and ranked the various targets by the scores of the ligands that are associated with them. They found that their mean ligand score for all the targets in there is 0.478. For the targets of approved drugs, it's 0.492, and for the orally active ones it's 0.539 - so there seems to be a trend, although whether those differences reach statistical significance isn't stated in the paper.

So overall, I find nothing really wrong with this paper, but nothing spectacularly right with it, either. I'd be interested in hearing other calls on it as it gets out into the community. . .

Comments (22) + TrackBacks (0) | Category: Drug Development | Drug Industry History | In Silico | Life in the Drug Labs

December 28, 2011

Nowhere to Go But Up?

Email This Entry

Posted by Derek

I wanted to let people know that I've got a "Perspective" piece in ACS Medicinal Chemistry Letters, entitled "Nowhere to Go But Up?". The journal is starting to run these opinion/overview articles, and contacted me for one - I hope it's the sort of thing that they were looking for!

Comments (38) + TrackBacks (0) | Category: Drug Industry History | The Scientific Literature

December 13, 2011

The Sirtuin Saga

Email This Entry

Posted by Derek

Science has a long article detailing the problems that have developed over the last few years in the whole sirtuin story. That's a process that I've been following here as well (scrolling through this category archive will give you the tale), but this is a different, more personality-driven take. The mess is big enough to warrant a long look, that's for sure:

". . .The result is mass confusion over who's right and who's wrong, and a high-stakes effort to protect reputations, research money, and one of the premier theories in the biology of aging. It's also a story of science gone sour: Several principals have dug in their heels, declined to communicate, and bitterly derided one another. . ."

As the article shows, one of the problems is that many of the players in this drama came out of the same lab (Leonard Guarente's at MIT), so there are issues even beyond the usual ones. Mentioned near the end of the article is the part of the story that I've spent more time on here, the founding of Sirtris and its acquisition by GlaxoSmithKline. It's safe to say that the jury is still out on that one - from all that anyone can tell from outside, it could still work out as a big diabetes/metabolism/oncology success story, or it could turn out to have been a costly (and arguably preventable) mistake. There are a lot of very strongly held opinions on both sides.

Overall, since I've been following this field from the beginning, I find the whole thing a good example of how tough it is to make real progress in fundamental biology. Here you have something that is (or at the very least has appeared to be) very interesting and important, studied by some very hard-working and intelligent people all over the world for years now, with expenditure of huge amounts of time, effort, and money. And just look at it. The questions of what sirtuins do, how they do it, and whether they can be the basis of therapies for human disease - and which diseases - are all still the subject of heated argument. Layers upon layers of difficulty and complexity get peeled back, but the onion looks to be as big as it ever was.

I'm going to relate this to my post the other day about the engineer's approach to biology. This sort of tangle, which differs only in degree and not in kind from many others in the field, illustrates better than anything else how far away we are from formalism. Find some people who are eager to apply modern engineering techniques to medical research, and ask them to take a crack at the sirtuins. Or the nuclear receptors. Or autoimmune disease, or schizophrenia therapies. Turn 'em loose on one of those problems, come back in a year, and see what color their remaining hair is.

Comments (9) + TrackBacks (0) | Category: Aging and Lifespan | Drug Development | Drug Industry History

December 9, 2011

Pharma Overview

Email This Entry

Posted by Derek

Here's a report from Science Careers on "A Pharma Industry in Crisis". Readers here will find much of what's said to be familiar - partly because they interviewed people like me and Chemjobber for the piece (!) But it's worth a look as a where-we-are-now perspective.

Comments (27) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

Drugs, Airplanes, and Radios

Email This Entry

Posted by Derek

Wavefunction has a good post in response to this article, which speculates "If we designed airplanes the way we design drugs. . ." I think the original article is worth reading, but some - perhaps many - of its points are arguable. For example:

Every drug that fails in a clinical trial or after it reaches the market due to some adverse effect was “bad” from the day it was first drawn by the chemist. State-of-the-art in silico structure–property prediction tools are not yet able to predict every possible toxicity for new molecular structures, but they are able to predict many of them with good enough accuracy to eliminate many poor molecules prior to synthesis. This process can be done on large chemical libraries in very little time. Why would anyone design, synthesize, and test molecules that are clearly problematic, when so many others are available that can also hit the target? It would be like aerospace companies making and testing every possible rocket motor design rather than running the simulations that would have told them ahead of time that disaster or failure to meet performance specifications was inevitable for most of them.

This particular argument mixes up several important points which should remain separate. Would these simulations have predicted those adverse-effect failures the author mentions? Can they do so now, ex post facto? That would be a very useful piece of information, but in its absence I can't help but wonder if the tools he's talking about would have cheerfully passed Vioxx, or torcetrapib, or the other big failures of recent years. Another question to ask is how many currently successful drugs these tox simulations would have killed off - any numbers there?

The whole essay recalls Lazebnik's famous paper "Can A Biologist Fix A Radio?" (PDF). This is an excellent place to start if you want to explore what I've called the Andy Grove Fallacy. Lazebnik's not having any of the reasons I give for it being a fallacy - for example:

A related argument is that engineering approaches are not applicable to cells because these little wonders are fundamentally different from objects studied by engineers. What is so special about cells is not usually specified, but it is implied that real biologists feel the difference. I consider this argument as a sign of what I call the urea syndrome because of the shock that the scientific community had two hundred years ago after learning that urea can be synthesized by a chemist from inorganic materials. It was assumed that organic chemicals could only be produced by a vital force present in living organisms. Perhaps, when we describe signal transduction pathways properly, we would realize that their similarity to the radio is not superficial. . .

That paper goes on to call for biology to come up with some sort of formal language and notation to describe biochemical systems, something that would facilitate learning and discovery in the same way as circuit diagrams and the like. And that's a really interesting proposal on several levels: would that help? Is it even possible? If so, where to even start? Engineers, like the two authors of the papers I've quoted from, tend to answer "Yes", "Certainly", and "Start anywhere, because it's got to be more useful than what you people have to work with now". But I'm still not convinced.

I've talked about my reasons for this before, but let me add another one: algorithmic complexity. Fields more closely based on physics can take advantage of what's been called "the unreasonable effectiveness" of mathematics. And mathematics, and the principles of physics that can be stated in that form, give an amazingly compact and efficient description of the physical world. Maxwell's equations are a perfect example: there's classical electromagnetism for you, wrapped up into a beautiful little sculpture.

But biological systems are harder to reduce - much harder. There are so many nonlinear effects, so many crazy little things that can add up to so much more than you'd ever think. Here's an example - I've been writing about this problem for years now. It's very hard to imagine compressing these things into a formalism, at least not one that would be useful enough to save anyone time or effort.

That doesn't mean it isn't worth trying. Just the fact that I have trouble picturing something doesn't mean it can't exist, that's for sure. And I'd definitely like to be wrong about this one. But where to begin?

Comments (36) + TrackBacks (0) | Category: Drug Development | Drug Industry History

December 8, 2011

The Loss of the Middle (Drugs and the People Who Find Them)

Email This Entry

Posted by Derek

This report on a speech by Roche's CEO, Severin Schwan, will surprise no one. He's forecasting that the pharma world is heading for a bimodal distribution. On one end, you'll have the companies that have managed to find things new enough and efficacious enough to convince regulatory agencies and payers that they're worth the price. And on the other, you'll have the generics. The in-between stuff, the me-too drugs and line extensions and things that don't work as well as anyone had hoped - that's going to get squeezed, and if that's all you have in your product portfolio, you're going to get squeezed, too. It's not that those things have no value, but they don't have enough to keep R&D efforts going at their current attrition rates and expenditures.

The analogy to the people doing this work is pretty close, too. Look at Pfizer's plans (which as far as I know are still in effect) to have a smaller number of "drug designers" and a bunch of lower-cost people cranking out the compounds in the lab. That's the same bimodal landscape, right there. You have a smaller, highly compensated group at one end of the scale, and a larger, less costly group at the other. What disappears are the folks in the middle.

The problem is, you can assign marketed drugs to the expensive-or-generic categories pretty rationally, based on efficacy and pricing. But assigning the people, well, that's a different matter. How exactly do you identify your star "drug designers"? Even after you narrow down to only the smarter and harder-working people, there are still more of them around than you need under that Pfizer system. So where do they go? Well, we've all been seeing the answer to that question. Out on the street, and out into the job market, there to take their chances.

And at the other end, there are probably a lot of people in the make-this-list-of-analogs labs who are capable of much more than that, but haven't had the chance to prove themselves. The whole situation seems like a real misuse of human capital, and we really have to find conditions that don't lead to such wastes. But what conditions are those, and how do we get to them?

Comments (33) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

December 5, 2011

Naming Your Company After Yourself

Email This Entry

Posted by Derek

This morning's post got me to thinking - are there any examples of modern biopharma companies that have taken the name of their founder and come out well? Back in Ye Olde Days, that was the default setting, of course, as it was for most companies. If you founded an industrial concern, you either named the thing after yourself, or called it something dead-on obvious like The American Rubber Gasket Company. There's no telling what people would have thought in 1882 if you'd decided to call your company some. . .made-up word instead.

But I'm trying to think of a successful last-name-of-founder drug company in recent years, and I'm drawing a blank. Am I missing something, or should we put this on the list of potential warning signs?

Comments (60) + TrackBacks (0) | Category: Drug Industry History

November 30, 2011

Lipitor Expiration Day

Email This Entry

Posted by Derek

As one of Garrison Keillor's characters says (in WLT), "I always knew the end would come. And here it is, the end". Lipitor (atorvastatin) goes off patent today, and I can recommend this overview by Matthew Herper at Forbes. Will there ever be another drug like it? The people developing the CETP inhibitors hope so. . .

Comments (15) + TrackBacks (0) | Category: Cardiovascular Disease | Drug Industry History

November 28, 2011

So What Did Lipitor Do for Pfizer? Or Its Shareholders?

Email This Entry

Posted by Derek

That's what this columnist at the Harvard Business Review would like to know. To the question "Was it worth it?", he answers "Probably not", and lists some things that other companies might learn from Pfizer's experience. I doubt that anyone will, though - the Big Acquisition looks so compelling when it comes along, and it's such a once-in-a-lifetime opportunity, and so different from all those other examples from the past, that gee, there's just no alternative. Right?

Here, for reference, is Pfizer stock versus the S&P 500 since the merger was completed in June 2000. Not that the rest of Big Pharma looks much better - for example, Eli Lilly has been an even worse investment over that span (by a bit), and they've never merged with anyone. (Although there is that Imclone business. . .)

No, big drug companies have been horrendous, hair-curling investments over this span, and yes, I'm not fully taking dividends into account. But there are tax consequences to consider on those, too, versus buy-and-hold capital appreciation. The S&P 500 has been paying in the 2% dividend yield range over that span, while Pfizer's dividend payouts have fluctuated (and the yields, too, of course). But is any dividend yield worth taking a 60% principal hit? It's hard to imagine.
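If you want to run that arithmetic yourself, here's a crude total-return sketch. The numbers are illustrative (a 60% principal hit, a 4% average yield for the Pfizer-like case, no reinvestment), not actual Pfizer payout data:

```python
def total_return(principal_multiple, annual_yield, years):
    """Crude total return: terminal principal plus simple-summed dividends,
    each year's dividend paid on the original stake (no reinvestment)."""
    dividends = annual_yield * years  # as a fraction of the original stake
    return principal_multiple + dividends - 1.0

# A 60% principal hit over ~11 years, with a 4% average yield:
pfizer_like = total_return(0.40, 0.04, 11)   # about -16% overall
# Flat principal at a 2% market-like yield:
index_like = total_return(1.00, 0.02, 11)    # about +22% overall
```

Even a fat dividend stream takes a long time to climb out of a hole that deep, which is the point.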

At the very least, then, Pfizer's strategy has not allowed it to stand out. Its stock is in the same nasty shape as its brethren - you have to think that nothing would have gotten much worse if they'd never Lipitored themselves, and things might well have been better. Some record!

Comments (17) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

November 22, 2011

Regeneron Finally Makes It to the Market

Email This Entry

Posted by Derek

I've been doing drug research since 1989 myself, which means that I'm fairly experienced. But Regeneron started in this business a year or two before I did, and they're just now getting their first major drug, Eylea (aflibercept), onto the market. To be fair, they did get approval for Arcalyst (rilonacept) in 2008, but that one pays the electric bill and not much more - although that might be changing (see below).

As Andrew Pollack at the New York Times points out, the company has run through over two billion dollars over the years. I remember when they were working on nerve growth factors for ALS and other diseases, back in the early 1990s (I worked in the area briefly myself, to no good effect whatsoever). There are not a lot of nerve growth factor drugs on the market, although it seemed like a perfectly plausible mechanism for one back then.

That work shaded into another indication, ciliary neurotrophic factor for obesity. Regeneron spent a lot of time and money developing a modified form of that protein called Axokine, but in 2003 that project ran onto the rocks. Some patients did lose weight on the drug (with daily injections), but too many of them developed antibodies to it, which raised the possibility of cross-reactivity with their own CNTF, which would surely not have been a good thing. So much for Axokine.

But Eylea, a VEGF-based therapy for macular degeneration (entering the same space as Lucentis and Avastin), has now made it. And the company has another use for Arcalyst in preventative gout therapy coming along, and some interesting cholesterol work targeting PCSK9 in collaboration with Sanofi. So welcome, Regeneron, to the ranks of profitable biotech companies (well, pretty soon) who've developed their own products. It's taken a lot of time, a lot of patience - yours and your investors' - and a lot of cash. But you're still here, and how many other biotech startups from the late 1980s can say that?

Comments (6) + TrackBacks (0) | Category: Cardiovascular Disease | Diabetes and Obesity | Drug Industry History | Regulatory Affairs | The Central Nervous System

November 21, 2011

Of Drug Research and Moneyball

Email This Entry

Posted by Derek

This piece on Michael Lewis and Billy Beane is nice to read, even if you haven't read Moneyball. (And if you haven't, consider doing so - it's not perfect, but it's well worth the time). Several thoughts occurred to me while revisiting all this, some of them actually relevant to drug discovery.

First off, a quick paean to Bill James. I read his Baseball Abstract books every year back in the 1980s, and found them exhilarating. And that's not just because I was following baseball closely. I was in grad school, and was up to my earlobes in day-to-day scientific research for the first time, and here was someone who applied the same worldview to a sport. Baseball had long been full of slogans and sayings, folk wisdom and beliefs, and James was willing to dig through the numbers to see which of these things were true and which weren't. His willingness to point out those latter cases, and the level of evidence he brought to those takedowns, was wonderful to see. I still have a lot of James' thoughts in my head; his books may well have changed my life a bit. I was already inclined that way, but his example of fearlessly questioning Stuff That Everybody Knows really strengthened my resolve to try to do the same.

A lot of people feel that way, I've found - there are James fans all over the place, people who were influenced the same way, at the same time, by the same books. It took a while for that attitude to penetrate the sport that those books were written about, though, as that article linked to above details. And its success once it did was part of a broader trend:

Innovation hurts. After Beane began using numbers to find players, the A’s’ scouts lost their lifelong purpose. In the movie, one of them protests to Pitt: “You are discarding what scouts have done for 150 years.” That was exactly right. Similar fates had been befalling all sorts of lesser-educated American men for years, though the process is more noticeable now than it was in 2003 when Moneyball first appeared. The book, Lewis agrees, is partly “about the intellectualisation of a previously not intellectual job. This has happened in other spheres of American life. I think the reason I saw the story so quickly is, this is exactly what happened on Wall Street while I was there. . .”

(That would be during the time of Liar's Poker, which is still a fun and interesting book to read, although it describes a time that's much longer ago than the calendar would indicate). And I think that the point is a good one. I'd add that the process has also been driven by the availability of computing power. When you had to bash the numbers by hand, with a pencil, there was only so much you could do. Spreadsheets and statistical software, graphing programs and databases - these have allowed people to extract meaning from numbers without having to haul up every shovelful by hand. And it's given power to those people who are adept at extracting that meaning (or at least, to the people willing to act on their conclusions).

The article quotes Beane as saying that Lewis understood what he was doing within minutes: "You’re arbitraging the mispricing of baseball players". And I don't think that it can be put in fewer words: that's exactly what someone with a Wall Street background would make of it, and it's exactly right. Now to our own business. Can you think of an industry whose assets are mispriced more grievously, and more routinely, than drug research?

Think about it. All those preclinical programs that never quite work out. All those targets that don't turn out to be the right target when you get to Phase II. All those compounds that blow up in Phase III because of unexpected toxicity. By working on them, by putting time and effort and money into them, we're pricing them. And too much of the time, we're getting that price wrong, terribly wrong.

That's what struck me when I read Moneyball several years ago. The problem is, drug research is not baseball, circa 1985. We're already full of statisticians, computational wizards, and sharp-eyed people who are used to challenging the evidence and weighing the facts. And even with that, this is the state we're in. The history of drug research is one attempt after another to find some edge, some understanding, that can be used to correct that constant mispricing of our assets. What to do? If the salt has lost its savour, wherewith shall it be salted?

Comments (17) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Who Discovers and Why

November 18, 2011

Two From Glaxo's Old Days

Email This Entry

Posted by Derek

Two of the scientists behind Glaxo's rise have passed away recently, within a couple of weeks of each other. There's John Bradshaw, who joined Allen and Hanburys in 1971. He was the chemist who discovered Zantac (ranitidine) in 1976. Later, he moved into computational chemistry and made a key insight that led to the discovery of Salmeterol, one of the two drugs that make up Advair. Not many people have ever put their fingerprints on two bigger compounds in one medicinal chemistry career.

And closely intertwined with these projects, and with at least five others that made it to market, was pharmacologist Sir David Jack, who'd joined Allen and Hanburys ten years earlier. Remarkably, he kept up his research in the field after retirement, and a compound he championed (RPL554) is even now in clinical trials from Verona Pharma.

Discoveries, never forget, don't make themselves. They're made by people, and it's well worth paying attention to people who've made several. Odds are that they are (or were) doing something right. . .

Comments (7) + TrackBacks (0) | Category: Drug Industry History

November 16, 2011

Ray Firestone's Take On Pharma's Plight

Email This Entry

Posted by Derek

And while I'm linking out to other opinion pieces, Ray Firestone has a cri de coeur in Nature Reviews Drug Discovery, looking back over his decades in the business. Regular readers of this blog (or of Ray Firestone!) will recognize all the factors he talks about, for sure. He talks about creativity (and its reception at some large companies), the size of an organization and its relation to productivity, and what's been driving a lot of decisions over the last ten or twenty years. To give you a sample:

if size is detrimental to an innovative research culture, mergers between large companies should make things worse — and they do. They have a strong negative personal impact on researchers and, consequently, the innovative research environment. For example, the merger of Bristol-Myers with Squibb in 1989, which I witnessed, was a scene of power grabs and disintegrating morale. Researchers who could get a good offer left the company, and the positions of those who remained were often decided by favouritism rather than talent. Productivity fell so low that an outside firm was hired to find out why. Of course, everyone knew what was wrong but few — if any — had the nerve to say it.

Comments (26) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Who Discovers and Why

Virtual Pharma, Revisited

Email This Entry

Posted by Derek

John LaMattina takes on the perennial question of "Should a big drug company ditch R&D and just inlicense everything?". That one comes up regularly, and I've never been able to quite see how it works. (You'd also figure that since it's not exactly a new idea, that various people at said companies have run the numbers and can't see how it works, either). But as a former Pfizer honcho, LaMattina's opinion on this topic carries more weight than most.

Comments (11) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

October 28, 2011

Merck, And What Used to Be Schering-Plough

Email This Entry

Posted by Derek

The cutbacks at Merck seem to have been pretty severe, if the messages that I'm getting from former Schering-Plough people are any indication. A lot of longtime R&D people have been let go, which is no surprise when you see what's been happening over the last few years with Pfizer's acquisitions (just to pick the biggest example). Experience, past accomplishments, and ability are not very high at all on the list of factors being judged when it comes to this point.

It's worth asking just how well that whole Schering-Plough deal is going for Merck, though. Here's a thorough breakdown of all the pipelines at the time the deal was going through. You can see that some of the areas (women's health, respiratory) have worked out as planned, but some others (cardiovascular, hepatitis C) have definitely not. And (as that link makes clear) one of the big variables when the deal went through was how much money would be left from the J&J deal after arbitration. If you look at the company's earnings, it's a mixed bag. Singulair is the biggest on the list, but that one's going off patent next year. Remicade is bringing in some money, after the territories were split up, with Merck holding on to Europe, Russia, and Turkey. The only other product from the Schering-Plough deal on the top-selling list is Nasonex, and that just makes the cut.

I just have to wonder how different this press release would have been if the deal hadn't gone through at all. But sales figures aside, what we don't see is the huge disruption in research and early development, just as you don't see that in Pfizer's deals over the years. You don't notice the drugs that don't get discovered, the early projects that don't quite advance. Was it all really worth it?

Like all the other mergers, this one only makes sense if you factor in big cost reductions - that DataMonitor link above makes this clear. And Merck does indeed look as if they're cutting their expenses as planned, so perhaps these numbers will come out right on target, and earnings-per-share will follow along. But what happened to Ken Frazier's brave attempt to withdraw EPS guidance entirely and focus on rebuilding the company's R&D? Was that just window dressing, was it an honest effort to change things that has now been abandoned, or what?

Comments (72) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

October 11, 2011

Too Many Cancer Drugs? Too Few? About Right?

Email This Entry

Posted by Derek

According to Bruce Booth (@LifeSciVC on Twitter), Ernst & Young have estimated the proportion of drugs in the clinic in the US that are targeting cancer. Anyone want to pause for a moment to make a mental estimate of their own?

Well, I can tell you that I was a bit low. The E&Y number is 44%. The first thought I have is that I'd like to see that in some historical perspective, because I'd guess that it's been climbing for at least ten years now. My second thought is to wonder if that number is too high - no, not whether the estimate is too high. Assuming that the estimate is correct, is that too high a proportion of drug research being spent in oncology, or not?

Several factors led to the rise in the first place - lots of potential targets, ability to charge a lot for anything effective, an overall shorter and more definitive clinical pathway, no need for huge expensive ad campaigns to reach the specialists. Have these caused us to overshoot?

Comments (22) + TrackBacks (0) | Category: Cancer | Clinical Trials | Drug Development | Drug Industry History

October 7, 2011

Different Drug Companies Make Rather Different Compounds

Email This Entry

Posted by Derek

Now here's a paper, packed to the edges with data, on what kinds of drug candidate compounds different companies produce. The authors assembled their list via the best method available to outsiders: they looked at which compounds are exemplified in patent filings.

What they find is that over the 2000-2010 period, not much change has taken place, on average, in the properties of the molecules that are showing up. Note that we're assuming, for purposes of discussion, that these properties - things like molecular weight, logP, polar surface area, amount of aromaticity - are relevant. I'd have to say that they are. They're not the end of the discussion, because there are plenty of drugs that violate one or more of these criteria. But there are even more that don't, and given the finite amount of time and money we have to work with, you're probably better off approaching a new target with five hundred thousand compounds that are well within the drug-like properties boxes rather than five hundred thousand that aren't. And at the other end of things, you're probably better off with ten clinical candidates that mostly fit versus ten that mostly don't.
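For readers who like to see the idea in concrete terms, here's a minimal sketch of that kind of property-box screen. The property names and cutoffs below are my own illustrative choices (loosely Lipinski/Veber-flavored), not the criteria used in the paper:

```python
# Illustrative only: screen a compound set against rough drug-likeness
# cutoffs. Property names and thresholds are assumptions (roughly
# Lipinski/Veber-style), not the paper's actual criteria.

DRUGLIKE_CUTOFFS = {
    "mol_weight": 500,            # daltons, upper bound
    "clogp": 5.0,                 # upper bound
    "polar_surface_area": 140.0,  # square angstroms, upper bound
    "rotatable_bonds": 10,        # upper bound
}

def is_druglike(compound):
    """Return True if every tracked property is at or under its cutoff."""
    return all(compound.get(prop, 0) <= limit
               for prop, limit in DRUGLIKE_CUTOFFS.items())

compounds = [
    {"name": "A", "mol_weight": 420, "clogp": 3.1,
     "polar_surface_area": 85, "rotatable_bonds": 6},
    {"name": "B", "mol_weight": 610, "clogp": 6.4,
     "polar_surface_area": 150, "rotatable_bonds": 12},
]

screened = [c["name"] for c in compounds if is_druglike(c)]
print(screened)  # compound A passes; B fails every cutoff
```

In a real setting the properties would come from a cheminformatics toolkit rather than hand-entered dicts, but the filtering logic is this simple.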

But even if overall properties don't seem to be changing much, that doesn't mean that there aren't differences between companies. That's actually the main thrust of the paper: the authors compare Abbott, Amgen, AstraZeneca, Bayer-Schering, Boehringer, Bristol-Myers Squibb, GlaxoSmithKline, J&J, Lilly, Merck, Novartis, Pfizer, Roche, Sanofi, Schering-Plough, Takeda, Wyeth, and Vertex. Of course, these organizations filed different numbers of patents, on different targets, with different numbers of compounds. For the record, Merck and GSK filed the most patents during those ten years (over 1500), while Amgen and Takeda filed the fewest (under 300). Merck and BMS had the largest number of unique compounds (over 70,000), and Takeda and Bayer-Schering had the fewest (in the low 20,000s). I should note that AstraZeneca just missed the top two in both patents and compounds.
[Image: radar plot from the paper of target-unbiased physicochemical properties by company]
If you just look at the raw numbers, ignoring targeting and therapeutic areas, Wyeth, Bayer-Schering, and Novartis come out looking the worst for properties, while Vertex and Pfizer look the best. But what's interesting is that even after you correct for targets and the like, organizations still differ quite a bit in the sorts of compounds that they turn out. Takeda, Lilly, and Wyeth, for example, were at the top of the cLogP rankings (numerically, "top" meaning the greasiest). Meanwhile, Vertex, Pfizer, and AstraZeneca were at the other end of the scale in cLogP. In molecular weight, Novartis, Boehringer, and Schering-Plough were at the high end (up around 475), while Vertex was at the low end (around 425). I'm showing a radar-style plot from the paper where they cover several different target-unbiased properties (which have been normalized for scale), and you can see that different companies do cover very different sorts of space. (The numbers next to the company names are the total number of shared targets found and the total number of shared-target observations used - see the paper if you need more details on how they compiled the numbers).

Now, it's fair to ask how relevant the whole sweep of patented compounds might be, since only a few ever make it deep into the clinic. And some companies just have different IP approaches, patenting more broadly or narrowly. But there's an interesting comparison near the end of the paper, where the authors take a look at the set of patents that cover only single compounds. Now, those are things that someone has truly found interesting and worth extra layers of IP protection, and they average to significantly lower molecular weights, cLogP values, and number of rotatable bonds than the general run of patented compounds. Which just gets back to the points I was making in the first paragraph - other things being equal, that's where you'd want to spend more of your time and money.

What's odd is that the trends over the last ten years haven't been more pronounced. As the paper puts it:

Over the past decade, the mean overall physico-chemical space used by many pharmaceutical companies has not changed substantially, and the overall output remains worryingly at the periphery of historical oral drug chemical space. This is despite the fact that potential candidate drugs, identified in patents protecting single compounds, seem to reflect physiological and developmental pressures, as they have improved drug-like properties relative to the full industry patent portfolio. Given these facts, and the established influence of molecular properties on ADMET risks and pipeline progression, it remains surprising that many organizations are not adjusting their strategies.

The big question that this paper leaves unanswered, because there's no way for them to answer it, is how these inter-organizational differences get going and how they continue. I'll add my speculations in another post - but speculations they will be.

Comments (30) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

September 29, 2011

Ah, Remember Those Days? How Will We Remember These?

Email This Entry

Posted by Derek

If you want to know what it was like during the height of the genomics frenzy, here's a quote for you, from an old Adam Feuerstein post. Return with me to the year 2000:

During his presentation Wednesday, Mark Levin, the very enthusiastic CEO of Millennium Pharmaceuticals (MLNM), remarked that his company's gene-to-patient technology would turbo-charge drug-development productivity to levels never before seen in the industry. Just how productive? Well, he predicted that by 2005, Millennium would be pushing one or two new drugs every year onto the market, while keeping the pipeline brimming with at least five experimental drugs entering human clinical trials every year.

Note that I'm not trying to make fun of Millennium, or of Mark Levin (still helping to found new companies at Third Rock Ventures). A lot of people were talking the same way back then - although, to be sure, Feuerstein notes that many people in the audience had trouble believing this one, too. But there's no doubt that a wild kind of optimism was in the air then. (Here's another Levin interview from that era).

That's as opposed to the wild kind of pessimism that's in the air these days. Here's hoping that it turns out to be as strange, in retrospect, as this earlier craziness. And yes, I know that the current reasons for pessimism are, in fact, rather bonier and more resilient than the glowing clouds of the genomics era were. But it's still possible to overdo it. Right?

Comments (22) + TrackBacks (0) | Category: Drug Industry History

September 27, 2011

What Layoffs Have Done

Email This Entry

Posted by Derek

There's an op-ed piece over at Pharmalot that I think that many readers here will find interesting. It's by Daniel Hoffman, formerly employed in pharma, it appears, and now a consultant. He's writing about the waves of layoffs the industry has experienced over the last few years, but he's not talking so much about the people who are gone, as the ones who are left:

In addition to disrupting tens of thousands of lives, the substantial downsizing in pharma over the past two-and-a-half years has changed many companies for the worse. I previously wrote that the guidelines handed down from finance to HR have eliminated many of the more knowledgeable and experienced people at each layoff round because people over age 50 are among the first targets for separation packages. But the dysfunctional legacy is even more pernicious. The resulting culture has created a workforce that is almost entirely at odds with what pharma needs now.

What sort of workforce is that? Hoffman's take is that the people who have survived under these conditions are disproportionately those who don't rock the boat, who keep their heads down, and who keep the top management as unperturbed as possible:

Many of the people remaining in operations deliberately choose not to ask big or important questions, lest their colleagues perceive any fundamental doubt as a threat. The truly adept manage to avoid taking a position on even the most mundane matters, lest someone else equate perceptive questions with disloyalty. Some even find it wise to feign ignorance concerning the elephants in various rooms. The combination of such simulated ignorance, together with the genuine version among the inexperienced survivors, makes the task of determining the smartest guy in the room a purely theoretical exercise.

I think that these are tendencies built into most large organizations, but it wouldn't surprise me a bit if the shakeups of the last few years have exacerbated them. Many people, when the pressure is on as hard as it's been, decide that the first thing they have to do is try to hang on to their job. Anything interesting and risky can wait until after the mortgage payment has cleared and the tuition checks have been written. The behaviors most associated with "Don't get laid off" are not the ones that are best associated with "Keep the company going", much less "Discover something new". That last set of behaviors, in fact, might be one of the first to go, along with the people who exemplify them.

Hoffman has an aggressively cynical take on the motives in other parts of large organizations - and while I wish I could say that he's completely wrong, there are indeed places - too many - that operate on these general principles:

. . .At the top, finance sets the strategic direction. The goal of finance, paramount to everything else, consists of keeping senior management in control of the company. Forget the blather about shareholder value, customers, the community and medicine for the people. Everyone outside the boardroom is the enemy. . .Reality for CFOs involves long-term product and business development approaches that would create several quarters of flat or negative earnings. In their doomsday scenario, that would prompt the board to replace management.

And that's the tricky part of capitalism. One of the philosophical reasons that I'm such a free-market kind of person is that I think that it works with human nature as it really is, without needing any magical-thinking schemes to suddenly transform or improve it. People tend to act in their own self-interest? Fine, let's use that to try to derive benefit for more than just one person at a time. But it goes without saying (or should) that not all self-interested actions can be so harvested, which is why I'll never be anything close to an anarcho-libertarian.

Philosophy aside, what we're seeing in some drug organizations is this sort of self-destruction. The fix they find themselves in leads to behavior that makes the problems worse, or at best does little to overcome them. This, taken down to its individual basis, is what Hoffman's piece is arguing. And although his editorial can also be fairly characterized as a bitter rant, that doesn't mean it isn't true. Or at least more true than it should be.

Comments (37) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

September 15, 2011

Targets to Avoid (Or That We Wish We Had)

Email This Entry

Posted by Derek

A discussion with colleagues recently got me to wondering about this useful (albeit grim) question: what area of drug discovery over the last twenty years would you say has taken up the most resources and returned the least value? I'm thinking more of disease/therapeutic areas, but other nominations are welcome, of course.

My own candidate is the nuclear receptor field, where some of that time and effort was mine. When I think of how enthusiastic I was ten years ago, how impatient I was to get in there and start up a big effort to really understand what was going on, to dig into the details and come up with drug candidates - and then when I think of what happened to the people who actually did that, well, it's food for thought. For those outside the field, a vast amount of effort and treasure was spent trying to work out a lot of insanely complex biology, and well, not much has ever emerged. Things went toward the clinic and never got there. Things went into the clinic and never came back out. Some went all the way to the FDA and were turned down.

So that's my nominee. I ask this question not just to wallow in misery and schadenfreude, but to see if there are some trends that we can spot, so as to avoid such things the next time they come down the chute. Given the state of the industry, the last thing we need is another gigantic sinkhole of time and money, so a bit of early warning would be welcome.

Comments (49) + TrackBacks (0) | Category: Drug Industry History

September 13, 2011

Fifty Years of Med-Chem Molecules: What Are They Telling Us?

Email This Entry

Posted by Derek

I wanted to send people to this 50-year retrospective in J. Med. Chem. It's one of those looks through the literature, trying to see what kinds of compounds have actually been produced by medicinal chemists. The proxy for that set is all the compounds that have appeared in J. Med. Chem. during that time, all 415,284 of them.

The idea is to survey the field from a longer perspective than some of the other papers in this vein, and from a wider perspective than the papers that have looked at marketed drugs or structures reported as being in the clinic. I'm reproducing the plot for the molecular weights of the compounds, since it's an important measure and representative of one of the trends that shows up. The prominent line is the plot of mean values, and a blue square shows that the mean for that period was statistically different than the 5-year period before it (it's red if it wasn't). The lower dashed line is the median. The dotted line, however, is the mean for actual launched drugs in each period with a grey band for the 95% confidence interval around it.
[Image: plot of mean and median molecular weights of J. Med. Chem. compounds over time, with the launched-drug mean and its 95% confidence band]
As a whole, the mean molecular weight of a J. Med. Chem. compound has gone up by 25% over the 50-year period, with the steepest increase coming in 1990-1994. "Why, that was the golden age of combichem", some of you might be saying, and so it was. Since that period, though, molecular weights have increased only a small amount, and may now be leveling off. Several other measures show similar trends.

Some interesting variations show up: calculated logP, for example, was just sort of bouncing around until 1985 or so. Then from 1990 on, it started a steep increase, and it's hard to tell if that's leveling off or not even now. At any rate, the clogP of the literature compounds has been higher than that of the launched drugs since the mid-1980s. Another point of interest is the fraction of the molecules with tetrahedral carbons. What you find is that "flatness" in the literature compounds held steady until the early 1990s (by which point it was already disconnected from the launched drugs), but since then it's gotten even worse (and further away from the set of actual drugs). This, as the authors speculate, is surely due to metal-catalyzed couplings taking over the world - you can see the effect right in front of you, and so far, the end is not in sight.

Those two measures are the ones moving the most outside the range of marketed drugs. And despite my shot at early combichem molecules, it's also clear that publication delays mean that some of these things were already happening even before that technique became fashionable (although it certainly revved up the trends). Actually, if you want to know When It Changed in medicinal chemistry, you have to go earlier:

It is worth noting that these trends seemed to accelerate in the mid-1980s, indicating that some change took place in the early 1980s. The most likely explanations for an upward change in the early 1980s (before the age of combinatorial chemistry or high-throughput screening) seem to be advances in molecular biology, i.e., understanding of receptor subtypes leading to concerns about specificity; target-focused drug design and its corresponding one-property-at-a-time optimization paradigm (possibly exacerbated by structural biology); and improvements in technologies which enabled the synthesis and characterization of more complex molecules.

Target-based drug design, again. I'm really starting to wonder about this whole era. And if you'd told me back in, say, 1991 about these doubts that I'd be having, I'd have been completely dumbfounded. But boy, do I ever have them now. . .

Comments (26) + TrackBacks (0) | Category: Chemical News | Drug Industry History | Life in the Drug Labs

September 6, 2011

A Dish Best Served Cold

Email This Entry

Posted by Derek

And the well-known chem-blogger Milkshake knows how to serve it. See his latest post here. It doesn't quite make up for having one's company bought out, having everyone moved and fired and hosed around, and having to go to court for the severance package that you were promised. . .but you have to take your pleasures where you can.

Comments (37) + TrackBacks (0) | Category: Drug Industry History | Patents and IP

September 1, 2011

GlaxoSmithKline Reviews the Troops

Email This Entry

Posted by Derek

Several readers sent along this article from the Times of London (via the Ottawa Citizen) on GlaxoSmithKline's current research setup. You can tell that the company is trying to get press for this effort, because otherwise these are the sorts of internal arrangements that would never be in the newspapers. (The direct quotes from the various people in the article are also a clear sign that GSK wants the publicity).

The piece details the three-year cycle of the company's Drug Performance Units (DPUs), which have to come and justify their existence at those intervals. We're just now hitting the first three-year review, and as the article says, not all the DPUs are expected to make it through:

In 2008, the company organized its scientists into small teams, some with just a handful of staff, and set them to work on different diseases. At the time, every one of these drug performance units (DPUs) had to plead its case for a slice of Glaxo’s four-billion-pound research and development budget. Three years on and each of the 38 DPUs is having to plead its case for another dollop of funding to 2014. . .

. . .Such a far-reaching overhaul of a fundamental part of the business has proved painful to achieve. Witty said: “If you look across research and development at Glaxo, I would say we are night-and-day different from where we were three, four, five years ago. It has been a tough period of change and challenge for people in the company. When you go through that period, of course there are moments when morale is challenged and people are worried about what will happen.”

But he said it has been worth the upheaval: “The research and development organization has never been healthier in terms of its performance and in terms of its potential.”

I'm not in a position to say whether he's right or not. One problem (mentioned by an executive in the story) is that three years isn't really long enough to say whether things are working out or not. That might give you a read on the number of preclinical projects, whether that seems to be increasing or not. But that number is notoriously easy to jigger around - just lower the bar a bit, and your productivity problem is solved, on paper. The big question is the quality of those compounds and projects, and that takes a lot more time to evaluate. And then there's the problem that, to whatever extent you can actually improve that quality, it may still not be enough to really affect your clinical failure rates much, depending on the therapeutic area.

Is this a sound idea, though? It could be - asking projects and therapeutic areas to justify their existence every so often could keep them from going off the rails and motivate them to produce results. Or, on the other hand, it could motivate them to tell management exactly what they want to hear, whether that corresponds to reality or not. All of these tools can cut in both directions, and I've no idea which way the blades are moving at GSK.

There's another consideration that applies to any new management scheme. How long will GSK give this system? How many three-year cycles will be needed to really say if it's effective, and how many will actually be run? Has any big drug company kept its R&D arrangements stable for as long as nine years, say, in recent history?

Comments (35) + TrackBacks (0) | Category: Drug Development | Drug Industry History

August 29, 2011

Chinese Pharma: No Shortage of Ambition, Anyway

Email This Entry

Posted by Derek

When does China take the next step in drug research? They already have a huge contract research industry, and they have branches of many of the major pharma companies. But when does a Chinese startup, doing its own research with its own people in China, develop its own international-level drug pipeline? (We'll leave aside the problem that not even all the traditional drug companies seem to be able to do that these days). It still seems clear that we're eventually going to have a Chinese Merck, or a Chinese Novartis or what have you - a company to join North America, Western Europe, and Japan in the big leagues. The Chinese government, especially, would seem to find this idea very appealing.

Opinions differ, to put it mildly, about how far away this prospect is. But Chemical and Engineering News is out with an article on homegrown Chinese research that explores just this sort of question. Then you run into passages like this:

In a meeting room in a building resembling a residential home in Shanghai’s Zhangjiang Hi-Tech Park, Li Chen and John Choi describe the business plan of their new company. Called Hua Medicine, the firm will launch breakthrough drugs within four years, they predict. Hua will manufacture the compounds and sell them with its own sales force. It will also license its internally developed drugs to multinational companies.

Yet right now, Hua is a modest operation that employs eight people. Hua doesn’t have an R&D lab yet, let alone a manufacturing facility. It operates in a loaned building formerly used by the administrators of the industrial park...

It can be easy to dismiss such ambitious business plans as simply talk aimed at gullible investors or government officials handing out subsidies. Except several start-ups are led by people who have long track records of success. Moreover, the money financing these start-ups comes not from relatives and friends, but from savvy investors knowledgeable about the drug industry.

Well. . .yeah. Let me join those who dismiss business plans that are as ambitious as that one. The way I understand the drug industry, if you're planning on launching a breakthrough drug within four years, you must have that drug in your hand right now, and it has to have had a lot of preclinical work done on it already (and in most therapeutic areas, it needs to have already hit the clinic). And note, these guys aren't talking about their one pet compound, they're talking about launching drugs, plural. Drugs that they discover, develop, manufacture and sell. And they have 8 people and no labs.

No, something is off here. I get the same feeling from this that I get from a lot of leapfrog-the-world plans, the feeling that something just isn't quite right and that the world doesn't allow itself to be hopped over on such a deliberate schedule. Thoughts?

Comments (47) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

August 10, 2011

The Economics of the Drug Industry: Big Can't Be Big Enough?

Email This Entry

Posted by Derek

I wanted to extract and annotate a comment of Bernard Munos' from the most recent post discussing his thoughts on the industry. Like many of the ones in that thread, there's a lot inside it to think about:

(Arthur) De Vany has shown that the movie industry has developed clever tools (e.g., adaptive contracts) to deal with (portfolio uncertainty). That may come to pharma too, and in fact he is working on creating such tools. In the meantime, one can build on the work of Frank Scherer at Harvard, and Dietmar Harhoff. (Andrew Lo at MIT is also working on this). Using simulations, they have shown that traditional portfolio management (as practiced in pharma) does achieve a degree of risk mitigation, but far too little to be effective. In other words, because of the extremely skewed probability distributions in our industry, the residual variance, after you've done portfolio management, is large enough to put you out of business if you hit a dry spell. That's why big pharma is looking down patent cliffs that portfolio management was meant to avoid. Scherer's work also shows that the broader the pipeline, the better the risk mitigation. So we know directionally where to go, but we need more work to estimate the breadth of the pipeline that is needed to get risk under control. Pfizer's example, however, gives us a clue. With nearly $9 billion in R&D spend, and a massive pipeline, they were unable to avoid patent cliffs. If they could not do it, chances are that no single pharma company can create internally a pipeline that is broad enough to tame risk. . .

That's a disturbing thought, but it's likely to be correct. Pfizer has not, I think it's safe to say, achieved any sort of self-sustaining "take-off" into a world where it discovers enough new drugs to keep its own operations running steadily. And this, I think, was the implicit promise in all that merger and acquisition growth it undertook. Just a bit bigger, just a bit broader, and those wonderful synergies and economies of scale would kick in and make everything work out. No, we're not quite big enough yet to be sure that we're going to have a steady portfolio of big, profitable drugs, but this next big acquisition? Sure to do the trick. We're so close.

And this doesn't even take into account the problems with returns on research not scaling with size (due to the penalties of bureaucracy and merger uncertainty, among other factors). Those have just made the problems with the strategy apparent more quickly - but even if Pfizer's growth had gone according to plan, and they'd turned into that great big (but still nimble and innovative!) company of their dreams, it might well still not have been enough. So here's the worrisome thesis: What size drug portfolio is big enough to avoid too high a chance of ruin? Bigger than any of us have.
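Scherer's point about residual variance is easy to illustrate with a toy simulation (my own sketch, not taken from his work): draw project payoffs from a heavily skewed distribution and watch how slowly the portfolio's relative spread shrinks as the pipeline broadens.

```python
# Toy illustration (not from Scherer's papers): when payoffs are heavily
# skewed, the relative spread of portfolio returns shrinks only slowly
# as you add projects.
import random
import statistics

def portfolio_cv(n_projects, n_trials=2000, seed=42):
    """Coefficient of variation (stdev/mean) of total payoff for a
    portfolio of n_projects, each drawn from a skewed lognormal."""
    rng = random.Random(seed)
    totals = [sum(rng.lognormvariate(0, 2.0) for _ in range(n_projects))
              for _ in range(n_trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

for n in (1, 10, 100):
    print(n, round(portfolio_cv(n), 2))
# The spread falls only about as fast as 1/sqrt(n), so even a
# 100-project pipeline retains a lot of single-project risk.
```

A dry spell in such a distribution means several draws from the fat left bulk in a row, and the simulation shows why even a broad pipeline doesn't rule that out.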

Here's de Vany's book on the economics of Hollywood, for those who are interested. That analogy has been made many times, and there's a lot to it. Still, there are some key divergences: for one thing, movies are more of a discretionary item than pharmaceuticals are (you'd think). People have a much different attitude towards their physical well-being than they have towards their entertainment options. Then again, movies don't have to pass the FDA; the customers get to find out whether or not they're efficacious after they've paid their money.

On the other hand, copyright lasts a lot longer than a patent does (although it's a lot easier along the way to pirate a movie than it is to pirate a drug). And classic movies, as emotional and aesthetic experiences, don't get superseded in quite the same way that classic pharmaceuticals do. Line extension is much easier in the movie business, where people actually look forward to some of the sequels. Then there's all the ancillary merchandise that a blockbuster summer movie can spin off - no one's making Lipitor collectibles (and if I'm wrong about that, I'd prefer not to know).

Comments (47) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Who Discovers and Why

August 8, 2011

Read the Comments

Email This Entry

Posted by Derek

Just wanted to point out to anyone who's not reading the comments here that the ones to this post are of extremely high quality. If you want to hear the thoughts of a lot of intelligent, experienced people on what's wrong with the drug industry and what might be done to fix it, have a look.

Comments (9) + TrackBacks (0) | Category: Drug Industry History

August 5, 2011

Bernard Munos Rides Again

Email This Entry

Posted by Derek

I've been meaning to link to Matthew Herper's piece on Bernard Munos and his ideas on what's wrong with the drug business. Readers will recall several long discussions here about Munos and his published thoughts (Parts one, two, three and four). A take-home message:

So how can companies avoid tossing away billions on medicines that won’t work? By picking better targets. Munos says the companies that have done best made very big bets in untrammeled areas of pharmacology. . .Munos also showed that mergers—endemic in the industry—don’t fix productivity and may actually hurt it. . . What correlated most with the number of new drugs approved was the total number of companies in the industry. More companies, more successful drugs.

I should note that the last time I saw Munos, he was emphasizing that these big bets need to be in areas where you can get a solid answer in the clinic in the shortest amount of time possible - otherwise, you're really setting yourself up with too much risk. Alzheimer's, for example, is a disease that he was advising that drug developers basically stay away from: tricky unanswered medical questions, tough drug development problems, followed up by big huge long expensive clinical trials. If you're going to jump into a wild, untamed medical area (as he says you should), then pick one where you don't have to spend years in the clinic. (And yes, this would seem to mean a focus on an awful lot of orphan diseases, the way I look at it).

But, as the article goes on to say, the next thought after all this is: why do your researchers need to be in the same building? Or the same site? Or in the same company? Why not spin out the various areas and programs as much as possible, so that as many new ideas get tried out as can be tried? One way to interpret that is "Outsource everything!" which is where a lot of people jump off the bus. But he's not thinking in terms of "Keep lots of central control and make other people do all your grunt work". His take is more radical:

(Munos) points to the Pentagon’s Defense Advanced Research Projects Agency, the innovation engine of the military, which developed GPS, night vision and biosensors with a staff of only 140 people—and vast imagination. What if drug companies acted that way? What areas of medicine might be revolutionized?

DARPA is a very interesting case, which a lot of people have sought to emulate. From what I know of them, their success has indeed been through funding - lightly funding - an awful lot of ideas, and basically giving them just enough money to try to prove their worth before doling out any more. They have not been afraid of going after a lot of things that might be considered "out there", which is to their credit. But neither have they been charged with making money, much less reporting earnings quarterly. I don't really know what the intersection of DARPA and a publicly traded company might look like (the old Bell Labs?), or if that's possible today. If it isn't, so much the worse for us, most likely.

Comments (114) + TrackBacks (0) | Category: Alzheimer's Disease | Business and Markets | Clinical Trials | Drug Development | Drug Industry History | Who Discovers and Why

August 3, 2011

A Former Pfizer Executive Finally Trashes Pfizer's Strategy

Email This Entry

Posted by Derek

A number of readers have noted this piece by John LaMattina in Nature Reviews Drug Discovery. He is, of course, a former head of R&D at Pfizer, which makes the title of the article something of an attention-getter: "The impact of mergers on Pharmaceutical R&D". Pfizer, for those of you just returning from a near-lightspeed trip to Alpha Centauri and still adjusting to the effects of relativistic time dilation, has been the Undisputed King of Pharma Mergers over the last ten to fifteen years, growing ever larger and larger in a way that no drug company ever had before. So how has this worked out?

". . .In this article, it is argued that although mergers and acquisitions in the pharmaceutical industry might have had a reasonable short-term business rationale, their impact on the R&D of the organizations involved has been devastating.

Lest anyone think that he's trying to make excuses for his former employer, LaMattina explicitly advances Pfizer as an example of what he's talking about, going over the company's merger and acquisition history in detail, including research site closure and layoffs. How, he asks, are we supposed to discover new drugs in the face of such cutbacks? And what has been the effect on the scientific health of the industry to have so many fewer organizations there to work on new ideas as they come along?

Good questions. The reaction to LaMattina himself asking them, though, has been varied. My first thought is that I agree with his point of view right down to the ground, and I have been publicly inveighing against Pfizer-style mergers for over ten years now for the exact same reasons that he details. (Early next year, in fact, will mark the ten-year anniversary of this blog, which hardly seems possible). All such protests have done nothing, nothing at all, as far as I can tell. Pfizer, up through its acquisition of Wyeth, has kept getting bigger: buying more companies because now it's so big that it needs their pipelines, slashing and burning those organizations after buying them, and then turning around and buying someone else because now its own pipeline needs shoring up, because for some obscure reason people haven't been discovering as many drugs as they used to. Yep, that's about the sorry size of it.

Another reaction, though, has been "How dare someone from Pfizer say that mergers aren't a good idea? Now he tells us!" And while I can understand that, I think that you have to realize that in a company the size of Pfizer, the head of R&D is not perhaps in as exalted a decision-making position as you might imagine. LaMattina alludes to this here:

"Indeed, R&D seems to be especially vulnerable to the negative impact of mergers and acquisitions. Having a sense of how mergers occur in R&D organizations is helpful for understanding this impact. R&D organizations will be the last part of the companies to begin merger discussions before regulatory approval because of the commercial sensitivity of the pipeline and the intellectual property of the company. . .

I would say that in many of these cases, the job of the R&D executives has been to roll over and take it once the higher-ups have decided an acquisition is going to happen. "Your job is to make this work - and if you don't want to do it, we'll find someone who does". After reading that alarming Fortune piece on the goings-on in the upper ranks of Pfizer, I find this view particularly believable. (And I would find LaMattina's views on the events in that article extremely interesting, although I doubt we'll ever hear them).

So, although I don't want to put words in anyone's mouth, my take is that LaMattina finds his part in Pfizer's M&A activities to be regrettable, and that he's now advancing the arguments against them - arguments that never gained any traction inside Pfizer. His own book skirted the topic - the word "mergers" only appears twice in the text, as far as Google Books can tell. But he's not skirting it any more.

Comments (55) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

August 2, 2011

Merck Moving Research From Rahway?

Email This Entry

Posted by Derek

I've heard from more than one person that Merck has decided to move most discovery research out of Rahway (in favor of the former Schering-Plough site in Kenilworth). Details are welcome in the comments from those with better information. That news does bring on end-of-an-era feelings, since they've been doing medicinal chemistry in Rahway for a long, long time. Kenilworth - well, I joined Schering-Plough when it was still in Bloomfield, and I remember the Kenilworth building site when it was a huge hole in the ground. We migrated into it (the building, not the hole) at the end of 1992, in a massive moving job that involved several convoys of 18-wheel trucks going down a partially-closed-off Garden State Parkway in the middle of the night.

The move had to be done; Bloomfield was at the limits of its capacity. And while it was nice to move into a completely new facility, I realized as time went on that Bloomfield had had charms of its own that I hadn't recognized at the time. But I left Kenilworth in 1997, when the building was comparatively new, so no doubt it's acquired some character by now. Do they still have the stark white socialist-realist statue of Sir Derek Barton down in the lobby?

Comments (23) + TrackBacks (0) | Category: Drug Industry History

July 29, 2011

2011 Drug Approvals Are Up: We Rule, Right?

Email This Entry

Posted by Derek

I've been meaning to comment on this article from the Wall Street Journal - the authors take a look at the drug approval numbers so far this year, and speculate that the industry is turning around.

Well, put me in the "not so fast" category. And I have plenty of company there: Bruce Booth (from the venture capital end), John LaMattina (ex-Pfizer R&D head), and Matthew Herper at Forbes aren't buying it either.

One of the biggest problems with the WSJ thesis is that most of these drugs have been in development for longer than the authors seem to think. Bruce Booth's post goes over this in detail, and he's surely correct that these drugs were basically all born in the 1990s. Nothing that's changed in the research labs in the last 5 to 10 years is likely to have significantly affected their course; we're going to have to wait several more years to see any effects. (And even then it's unlikely that we're going to get any unambiguous signals; there are too many variables in play). That, as many people have pointed out over the years, is one of the trickiest parts about drug R&D: the timelines are so long and complex that it's very hard to assign cause and effect to any big changes that you make. If your car only responds to the brake pedal and steering wheel a half hour after you touch them, how can you tell if that fancy new GPS you bought is doing you any good?

Comments (8) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Press Coverage | Regulatory Affairs

July 28, 2011

The Secret History of Pfizer

Email This Entry

Posted by Derek

Here's a fascinating account at Fortune of the departure of Jeff Kindler as Pfizer's CEO. The magazine says that they interviewed over 100 people to round up the details, but some of these meetings only feature four or five people in a room, so that narrows things down a bit. It's also a back-room history of Pfizer over the last ten or fifteen years, and there's a lot of high-level political stuff that wasn't widely known at the time:

McKinnell kept boosting R&D budgets, maintaining Pfizer's "shots on goal" approach -- the more compounds you explored, in theory, the more drugs you'd generate. But drugs can take a full decade to be developed and approved, and nothing big would be ready for years.

So McKinnell fell back on the refuge of the desperate pharma CEO: In July 2002 he announced the acquisition of Pharmacia, the industry's seventh-largest company, for $60 billion in stock. But even as Pfizer struggled to digest this latest meal, McKinnell seemed to spend less and less time at headquarters, becoming head of industry trade groups, funding an institute in Africa to combat AIDS, even writing a book about reforming health care.

That left a power vacuum, and Bill Steere, the former CEO, seemed more than willing to fill it. . ."He says almost nothing," says a person familiar with Pfizer's board. "But people look to him to see how he nods and how he moves, because he knows the company better than anyone."

With Pfizer no longer soaring, internal squabbling intensified. Vexed by what he viewed as Steere's meddling, McKinnell even tried to terminate his consulting contract. Steere fended off that move. Support for him ran deep on the board: Later, when Steere turned 72, the mandatory retirement age for directors, the board raised it to 73 so he could stick around, then amended the provision again when he hit that limit.

Steere and McKinnell, former friends and colleagues, became mortal enemies. . .

Read the whole thing, if you're interested in either Pfizer or the way that human beings behave at this level of a large corporation: anonymous letters, secret meetings, all varieties of intrigue. 14th-century Florence can offer little more in the way of power politics. There are those who swim in such waters like fish, but I've devoted time and effort to staying out of them.

Comments (25) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

July 7, 2011

Phenotypic Screening For the Win

Email This Entry

Posted by Derek

Here's another new article in Nature Reviews Drug Discovery that (for once) isn't titled something like "The Productivity Crisis in Drug Research: Hire Us And We'll Consult Your Problems Away". This one is a look back at where drugs have come from.

Looking over drug approvals (259 of them) between 1999 and 2008, the authors find that phenotypic screens account for a surprising number of the winners. (For those not in the business, a phenotypic screen is one where you give compounds to some cell- or animal-based assay and look for effects. That's in contrast to the target-based approach, where you identify some sort of target as being likely important in a given disease state and set out to find a molecule to affect it. Phenotypic screens were the only kinds around in the old days (before, say, the mid-1970s or thereabouts), but they've been making a comeback - see below!)

Out of the 259 approvals, there were 75 first-in-class drugs and 164 followers (the rest were imaging agents and the like). 100 of the total were discovered using target-based approaches, 58 through phenotypic approaches, and 18 through modifying natural substances. There were also 56 biologics, which were all assigned to the target-based category. But out of the first-in-class small molecules, 28 of them could be assigned to phenotypic assays and only 17 to target-based approaches. Considering how strongly tilted the industry has been toward target-based drug discovery, that's really disproportionate. CNS and infectious disease were the therapeutic areas that benefited the most from phenotypic screening, which makes sense. We really don't understand the targets and mechanisms in the former, and the latter provide what are probably the most straightforward and meaningful phenotypic assays in the whole business. The authors' conclusion:
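
To make the disproportion concrete, here's a quick back-of-the-envelope tally of the numbers quoted above (treating the 56 biologics as their own target-based bucket, as the paper does; a sketch, not the authors' own analysis):

```python
# Discovery-origin counts for the 1999-2008 small-molecule approvals, as quoted above.
by_origin = {"target-based": 100, "phenotypic": 58, "natural-substance": 18}

# Phenotypic share across all assignable small-molecule discoveries:
overall = by_origin["phenotypic"] / sum(by_origin.values())

# Phenotypic vs. target-based among first-in-class small molecules:
fic_phenotypic, fic_target = 28, 17
first_in_class = fic_phenotypic / (fic_phenotypic + fic_target)

print(f"phenotypic share overall: {overall:.0%}")              # 33%
print(f"phenotypic share, first-in-class: {first_in_class:.0%}")  # 62%
```

In other words, phenotypic screens produced about a third of the small-molecule winners overall, but roughly three-fifths of the first-in-class ones where an assignment could be made - despite getting a much smaller share of the industry's effort.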

(this) leads us to propose that a focus on target-based drug discovery, without accounting sufficiently for the MMOA (molecular mechanism of action) of small-molecule first-in-class medicines, could be a technical reason contributing to high attrition rates. Our reasoning for this proposal is that the MMOA is a key factor for the success of all approaches, but is addressed in different ways and at different points in the various approaches. . .

. . .The increased reliance on hypothesis-driven target-based approaches in drug discovery has coincided with the sequencing of the human genome and an apparent belief by some that every target can provide the basis for a drug. As such, research across the pharmaceutical industry as well as academic institutions has increasingly focused on targets, arguably at the expense of the development of preclinical assays that translate more effectively into clinical effects in patients with a specific disease.

I have to say, I agree (and have said so here on the blog before). It's good to see some numbers put to that belief, though. This, in fact, was the reason why I thought that the NIH funding for translational research might be partly spent on new phenotypic approaches. Will we look back on the late 20th century/early 21st as a target-based detour in drug discovery?

Comments (36) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History

July 2, 2011

Innovation and Return (Europe vs. the US)

Email This Entry

Posted by Derek

Here's another look at the productivity problems in drug R&D. The authors are looking at attrition rates, development timelines, targets and therapeutic areas, and trying to find some trends to explain (or at least illuminate) what's been going on.

Their take? Attrition rates have been rising at all phases of drug development, and most steeply in Phase III. (This sounds right to me). Here are their charts:
Attrition%20rates.png
And when they look at where the drug R&D efforts have been going, they find that comparatively more time and money has been spent on targets with lower probability of success. That means (among other things) more oncology, Alzheimer's, arthritis, Parkinson's et al. and less cardiovascular and anti-HIV.

That makes sense, too, in a paradoxical way. If we were to get drugs in those areas, the expected returns would be higher than if we found them in the well-established ones. The regulatory barriers would be lower, the competition thinner, the potential markets enthusiastic about new therapies - everything's lined up. If you can find a drug, that is. The problem is the higher failure rates. We knew that going in, of course, but the expectation was that the greater rewards would cancel them out. But what if they don't? What if, for a protracted period, there are no rewards at all?

The paper also has a very interesting analysis of European firms versus US ones. Instead of looking at where companies might be headquartered, the authors used the addresses of the inventors on patent filings as a better location indicator. Over 18,000 projects started by companies or public research organizations between 1990 and 2007 were examined, and they found:

Although at a first glance, European organizations seem to have higher success rates compared with US organizations, after controlling for the larger share of biotechnology companies and PROs in the United States and for differences in the composition of R&D portfolios, there is no significant gap between European and US organizations in this respect. Unconditional differences (that is, differences arising when no controls are taken into account) are driven by the higher propensity of US organizations to focus on novel R&D methodologies and riskier therapeutic endeavours. . .as an average US organization takes more risk, when successful, they attain higher price premiums than the European organizations.

The other take-home has to do with "me-too" compounds versus first-in-class ones, and is worth considering:

". . .both private and public payers discourage incremental innovation and investments in follow-on drugs in already established therapeutic classes, mostly by the use of reference pricing schemes and bids designed to maximize the intensity of price competition among different molecules. Indeed, in established markets, innovative patented drugs are often reimbursed at the same level as older drugs. As a consequence, R&D investments tend to focus on new therapeutic targets, which are characterized by high uncertainty and difficulty, but lower expected post-launch competition. Our empirical investigation indicates that this reorienting of investments accounts for most of the recent decline in productivity in pharmaceutical R&D, as measured in terms of attrition rates, development times and the number of NMEs launched."

So, rather than being in trouble for not trying to be innovative enough, according to these guys, we're in trouble for innovating too much. . .

Comments (26) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

July 1, 2011

The Histamine Code, You Say?

Email This Entry

Posted by Derek

I've been meaning to link to John LaMattina's blog for some time now. He's a former R&D guy (and author of Drug Truths: Dispelling the Myths About Pharma R & D, which I reviewed here for Nature Chemistry), and he knows what he's talking about when it comes to med-chem and drug development.

Here he takes on the recent "Scientists Crack the Histamine Code" headlines that you may have seen this week. Do we have room, he wonders, for a third-generation antihistamine, or not?

Comments (17) + TrackBacks (0) | Category: Biological News | Drug Industry History

June 28, 2011

Drug R&D Spending Now Down (But Look at the History)

Email This Entry

Posted by Derek

I hate to be such a shining beacon of happiness today, but this news can't very well be ignored, can it? For the first time ever, total drug R&D spending seems to have declined:

The global drug industry cut its research spending for the first time ever in 2010, after decades of relentless increases, and the pace of decline looks set to quicken this year.

Overall expenditure on discovering and developing new medicines amounted to an estimated $68 billion last year, down nearly 3 percent on the $70 billion spent in both 2008 and 2009, according to Thomson Reuters data released on Monday.

The fall reflects a growing disillusionment with poor returns on pharmaceutical R&D. Disappointing research productivity is arguably the biggest single factor behind the declining valuations of the sector over the past decade.

This is not good - although, to be sure, we've had plenty of warning that this day would be coming. But looking at it from another perspective, you might wonder what's taken so long. Matthew Herper has a piece up highlighting the chart below, from the Boston Consulting Group. It plots new drugs versus R&D spending in constant dollars, and if you're wondering what the Good Old Days looked like, here they are. Or were:
R%26D%20constant%20dollar%20graph.png
What's most intriguing to me about this graph is the way it seems to validate the "low-hanging fruit" argument. This looks like the course of an industry that has, from the very beginning of its modern era, been finding it steadily, relentlessly harder to mine the ore that it runs on. But that analogy leaves out another key factor that makes that line go down: good drugs don't go away. They just go generic, and get cheaper than ever. You can also interpret this graph as showing the gradual buildup of cheap, effective generics for a number of major conditions (cardiovascular, in particular).

There's one other factor that ties in with those thoughts - the therapeutic areas that we've been able to address. Look at that spike in the 1990s, labeled PDUFA and HIV. Part of that jump is, as a colleague theorized with me just this morning, the fact that a completely new disease appeared. And it was one that, in the end, we could do something about - as opposed to, say, Alzheimer's. So if you want to be completely evil about it, then the Huey Lewis model of fixing pharma has it wrong: we don't need a new drug. We need a new disease. Or several.

Well, that's clearly not the way to look at it. I don't actually think that we need to add to the list of human ailments; it's long enough already. But given all the factors listed (and the ever-tightening regulatory/safety environment, on top of them), another colleague of mine looked at this chart and asked if we ever could have expected it to look any different. Could that line go anywhere else but down? The promise of things like the genomics frenzy was, I think, that it would turn things around (and that hope still lives on in the heart of Francis Collins), even though some people argue that it did the reverse.

Comments (60) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

May 31, 2011

Extreme Outsourcing

Email This Entry

Posted by Derek

My local NPR station had this report on this morning, on one-person drug companies. Can't outsource much more than that!

Here are the two companies profiled: LipimetiX and Deuteria. The former is using helical peptides to affect lipoprotein clearance, and the latter is (as you'd guess) in the deuterated-drug game, which I've most recently blogged on here. (That one's run by Sheila DeWitt, who used to work down the hall from me in grad school 25 years ago). And there are several other outfits that they could have mentioned - some of them are not quite down to one person, but you can count the employees on your fingers. In all of these cases, everything is being contracted out.

There are downsides, of course. For one thing, these are, almost by necessity, single-drug companies. It's enough of a strain just getting one project through under those conditions, let alone running a whole portfolio. So the risk is higher, given the typical failure rates in this line of work. And you have to trust your contractors, naturally. That's a bit easier to do in the Boston area (and a few other places), since you can get a lot of work sourced locally. That doesn't make it as much of a Bargain, Bargain, Bargain as it might be overseas, but at least you can drop in and see how things are going.

Another thing the NPR piece didn't address was where these projects come from. Many of them, I'd guess, are abandoned efforts from other companies that still have some possibilities. Those and the up-from-academia ideas probably take care of the whole list, wouldn't you think? Has anyone heard of one of these virtual-company ideas where the lead compound came from some sort of outsourced screen? And is an outsourced screen even possible? Now there's a business idea. . .

Comments (24) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

May 26, 2011

Pfizer's Brave New Med-Chem World

Email This Entry

Posted by Derek

OK, here's how I understand the way that medicinal chemistry now works at Pfizer. This system has been coming on for quite a while now, and I don't know if it's been fully rolled out in every therapeutic area yet, but this seems to be The Future According to Groton:

Most compounds, and most actual chemistry bench work, are apparently going to be handled at WuXi (or perhaps other contract houses?). Back here in the US, there will be a small group of experienced medicinal chemists at the bench, who will presumably be doing the stuff that can't be easily shipped out (time-critical or difficult chemistry, perhaps even IP-critical work, one wonders?). But these people are not, as far as I can tell, supposed to have ideas of their own.

No, ideas are for the Drug Designers, which is where the rest of Pfizer's remaining medicinal chemistry head count is to be found. These are the people who keep track of the SAR, decide what needs to be made next, and tell the folks in China to make it. It's presumably their call what to send away for and what to do in-house, but one gets the sense that they're strongly encouraged to ship as much stuff out as possible. Cheaper that way, right? And it's not like there's a whole lot of stateside capacity left at this point, anyway.

What if someone working in the lab has (against all odds) their own thoughts about where the chemistry should go next? I presume that they're going to have to go and consult a Drug Designer, thereby to get the official laying-on of hands. That process will probably work smoothly in some cases, but not so smoothly in others, depending on the personalities involved.

So we have one group of chemists that are supposed to be all hands and no head, and one group that's supposed to be all head and no hands. And although that seems to me to be carrying specialization one crucial step too far, well, it apparently doesn't seem that way to Pfizer's management, and they're putting a lot of money down on their convictions.

And what about the whole WuXi/China angle? The bench chemists there are certainly used to keeping their heads down and taking orders, for better or worse, so that won't be any different. But running entire projects outsourced can be a tricky business. You can end up in a situation where you feel as if you're in a car that only allows you to move the steering wheel every twenty minutes or so. Ah, a package has arrived, a big bunch of analogs that aren't so relevant any more, but what the heck. And that last order has to be modified, and fast, because we just got the assay numbers back, and the PK of the para-substituted series now looks like it's not reproducing. And we're not sure if that nitrogen at the other end really needs to be modified any more at this point, but that's the chemistry that works, and we need to keep people busy over there, so another series of reductive aminations it is. . .

That's how I'm picturing it, anyway. It doesn't seem like a particularly attractive (or particularly efficient) picture to me, but it will at least appear to spend less money. What comes out the other end, though, we won't know for a few years. And who knows, someone may have changed their mind by then, anyway. . .

Comments (114) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Life in the Drug Labs

May 24, 2011

Maybe It Really Is That Hard?

Email This Entry

Posted by Derek

Here's an interesting note from the Wall Street Journal's Health Blog. I can't summarize it any better than they have:

"When former NIH head Elias Zerhouni ran the $30 billion federal research institute, he pushed for so-called translational research in which findings from basic lab research would be used to develop medicines and other applications that would help patients directly.

Now the head of R&D at French drug maker Sanofi, Zerhouni says that such “bench to bedside” research is more difficult than he thought."

And all across the industry, people are muttering "Do tell!" In fairness to Zerhouni, he was, in all likelihood, living in sort of a bubble at NIH. There probably weren't many people around him who'd ever actually done this sort of work, and unless you have, it's hard to picture just how tricky it is.

Zerhouni is now pushing what he calls an "open innovation" model for Sanofi-Aventis. The details of this are a bit hazy, but it involves:

". . .looking for new research and ideas both internally and externally — for example, at universities and hospitals. In addition, the company is focusing on first understanding a disease and then figuring out what tools might be effective in treating it, rather than identifying a potential tool first and then looking for a disease area in which it could be helpful."

Well, I don't expect to see Sanofi's whole strategy laid out in the press, but that one doesn't sound as novel as it's made out to be. The "first understanding a disease" part sounds like what Novartis has been saying for some time now - and honestly, it really is one of the things that we need, but that understanding is painfully slow to dawn. Look at, oh, Alzheimer's, to pick one of those huge unmet medical needs that we'd really like to address in this business.

With a lot of these things, if you're going to first really understand them, you could have a couple of decades' wait on your hands, and that's if things go well. More likely, you'll end up doing what we've been doing: taking your best shot with what's known at the moment and hoping that you got something right. Which leads us to the success rates we have now.

On the other hand, maybe Zerhouni should just call up Marcia Angell or Donald Light, so that they can set him straight on the real costs of drug R&D. Why should we listen to a former head of the NIH who's now running a major industrial research department, when we can go to the folks who really know what they're talking about, right? And I'd also like to know what he thinks of Francis Collins' plan for a new NIH translational research institute, too, but we may not get to hear about that. . .

Comments (34) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development | Drug Industry History

May 16, 2011

Ups and Downs

Email This Entry

Posted by Derek

I was thinking the other day that I never remembered hearing the phrase "Big Pharma" when I first got a job in this business (1989). Now I have some empirical proof, thanks to the Google Labs Ngram Viewer, that the phrase has only come into prominence more recently. (Fair warning: you can waste substantial amounts of time messing with this site). Here's the incidence rate of "big pharma" in English-language books from 1988 to 2000:

[Graph: incidence of "big pharma", 1988-2000]
It comes from nowhere, blips to life in 1992, doesn't even really get off the baseline until 1994 or so, and then takes off. (The drops in 2005 and 2008 remain unexplained - did the log phase of its growth end in 2004?)

Update: that graph holds for the uncapitalized version of the phrase. If you put the words in caps, you get the even more dramatic takeoff shown below:
[Graph: incidence of "Big Pharma", with initial capitals]

To be fair, though, there seems to have been a general rise in Big Pharma-related literature during that period. Try out this graph, comparing mentions of Merck, Pfizer, and Novartis since 1970. The last-named, of course, didn't even exist until the mid-1990s, but they (like the others) have spent the time since then zipping right up, with no apparent end in sight. (Merck, especially - what's with those guys?) And what accounts for this? Business books? Investing guides? Speculation is welcome.

Note: the above paragraph was written before realizing that the Google Ngram search is case-sensitive - so, as was pointed out in the comments, I was picking up on people not caring about capitalization more than anything else. Below is the correct graph, with initial capitals in the search, and it makes more sense. Merck still is the king of book mentions, though, for all the coverage that Pfizer gets.
[Graph: mentions of Merck, Pfizer, and Novartis, with initial capitals]

I'll finish off with this one, using a longer time scale. Yes, folks, for better or worse, it appears that the phrase "organic chemistry" peaked out between book covers around 1950, and has been declining ever since. Meanwhile, "total synthesis" started rising during the World War II era (penicillin?), and kept on moving up until a peak around 1980. Interestingly, things turned around in 2000 or so, and especially since 2003. And this can't be ascribed to some sort of general surge in chemistry publications - look at the "organic chemistry" line during the same period. Is there some other field that's adopted the phrase?
[Graph: "organic chemistry" vs. "total synthesis" over the long term]

Comments (20) + TrackBacks (0) | Category: Drug Industry History | General Scientific News | The Scientific Literature

May 3, 2011

A Look Inside the Compound Collections

Email This Entry

Posted by Derek

Now here's a comparison that you don't get to see very often: how much do two large pharma compound collections overlap? There's a paper going into just that question in the wake of the 2006-2007 merger between Bayer and Schering AG. (By two coincidences, this paper is in the same feed as the one that I highlighted yesterday, and that merger is the one that closed my former research site out from under me).

Pre-merger, Bayer had over two million structures in its corporate collection, and Schering AG had just under 900,000. Both companies had undertaken recent library clean-up programs, clearing out undesirable compounds and adding both purchased and in-house diversity structures. Interestingly, it turns out that just under 50,000 structures were duplicated across both collections, about 1.5% of the total. Almost all of these duplicates were purchased compounds; only 2,000 of them had been synthesized in-house. And even most of those turned out to be from combichem programs or were synthetic intermediates - there was almost no overlap at all in submitted med-chem compounds.
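The overlap calculation itself is just set intersection over canonical structure keys. Here's a toy sketch with made-up compound identifiers (not real structures, and scaled down from the actual library sizes) to show the shape of the computation:

```python
# Toy sketch: overlap between two compound collections, assuming each
# compound has been reduced to a canonical structure key (an InChIKey
# or canonical SMILES would serve in practice). The identifiers and
# counts below are invented for illustration only.

def overlap_stats(lib_a, lib_b):
    """Return (number of shared structures, shared as % of the union)."""
    a, b = set(lib_a), set(lib_b)
    shared = a & b
    return len(shared), 100.0 * len(shared) / len(a | b)

# Two hypothetical collections with a small region of overlap
bayer_like = {f"CMPD-{i:07d}" for i in range(20_000)}
schering_like = {f"CMPD-{i:07d}" for i in range(19_500, 28_500)}

dups, pct = overlap_stats(bayer_like, schering_like)
print(dups, round(pct, 1))  # 500 shared compounds, ~1.8% of the union
```

The interesting work in the real study, of course, is in the canonicalization step (tautomers, salts, stereochemistry) that decides when two registry entries count as "the same compound" at all.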

Various measures of structural complexity and similarity backed up those numbers. The two collections were surprisingly different, which might well have something to do with the different therapeutic areas the two companies had focused on over the years. The Bayer compounds tended to run higher in molecular weight, rotatable bonds, and clogP, but then, a higher percentage of the Schering AG compounds were purchased with such filters already in place. As for undesirable structures, only about 2% of the Bayer collection and 1% of the Schering AG compounds were considered to be real offenders. I hope none of those were mine; I contributed quite a few compounds to the collection over the years, but they were, for the most part, relatively sane.

The paper's conclusion can be read in more than one way:

Furthermore, an argument that might support mergers and acquisitions (M&A) in the pharmaceutical sector can be harvested from this analysis. Currently, M&As in this industry are driven by product portfolios rather than by drug discovery competencies. With the current need for innovative drugs, R&D skills of pharmaceutical companies might again become more important. The technological complementarity of two companies is often quoted as an important factor for successful M&As in the long term. If compound libraries are regarded as a kind of company knowledge-base, then a high degree of complementarity is clearly desirable and would improve drug discovery skills. Based on our data, the libraries of BHC and SAG are structurally complementary and fit together well in terms of their physico-chemical properties. However, it remains to be proven if this leads to additional innovative products.

Not so sure about that, myself. I don't know how good a proxy the compound collections are, since they represent a historical record as much as they do the current state of a company. And that paragraph glosses over the effect of mergers on R&D itself - it's not like just adding pieces together, that's for sure. The track record for mergers generating "additional innovative products" is not good. We'll see how the Bayer-Schering one holds up. . .

Comments (13) + TrackBacks (0) | Category: Business and Markets | Drug Assays | Drug Industry History

May 2, 2011

Pfizer: Breaking Up Is Hard to Do

Email This Entry

Posted by Derek

Matthew Herper has a good piece over in Forbes on the speculation that Pfizer might devolve. Here's his breakdown of how five (or so) separate Pfizer-derived companies could be worth substantially more than the current entity.

But, as he notes, we're talking about several different things here. Were I a long-suffering Pfizer shareholder (which, outside of index funds, I have tried not to be), I would have one perspective on this, and it would be all about the stock price:

“The stock can only go up if they break up the company and cut research and development,” says Jami Rubin, a pharmaceuticals analyst at Goldman Sachs who has been pushing a Pfizer breakup for three years. “When Read was announced as the new chief executive Wall Street was skeptical, but he’s listening and he’s responding to what we have been saying. My sense is he’s already made up his mind.”

As an observer of (and participant in) the drug industry, though, I have other views, and they're more like these:

Not everyone agrees that a breakup is the right fix for Pfizer, which has struggled to invent new blockbusters even as it acquired Warner-Lambert for $114 billion in 2000, Pharmacia for $60 billion in 2003 and Wyeth for $68 billion in 2009. Those big mergers sidetracked its researchers and salespeople and created baroque management structures—at one point there were 17 layers between the chief executive and the lowest employee. Critics say undoing them risks similar distraction. As one fund manager said, a breakup would just mean the investment bankers and lawyers who got rich putting Pfizer together will now get richer taking it apart, without improving its ability to invent and market drugs, already a struggle. “I think it’s financial engineering. I think it makes the stock more valuable,” says Les Funtleyder, a fund manager at Miller Tabak. “From a strategic point of view, would it solve the problem? No.”

That's the problem, all right. I've made this point in various ways over the years, but let me be as blunt as possible: I think that Pfizer's consolidation, both of large companies and of small ones, has been a disaster for drug discovery in general. Just the sheer loss of intellectual diversity is enough to call it that. And the resulting huge, ugly omelet cannot be unscrambled. The disruptions in all those research organizations can never be undone, not without a fleet of fully powered time machines.

It will give many people (I'm one) some cold satisfaction to see the company reverse course, admit that the mega-merger strategy has been a mistake all along, and painfully retrace its steps. But that's not much compensation, is it? Not compared to what's been lost.

Comments (36) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

April 11, 2011

R&D Is For Losers?

Email This Entry

Posted by Derek

Now here's a piece that I'm looking for good reasons to dismiss. And I think its author, Jim Edwards, wouldn't mind some, too. You've probably heard that Valeant Pharmaceuticals is making a hostile offer for Cephalon, a company that's dealing with some pipeline/patent problems (and, not insignificantly, the recent death of their founder and CEO).

Valeant's CEO, very much alive, is making no secret of his business plan for Cephalon should he prevail: ditch R&D as quickly as possible:

“His approach isn’t one that most executives in the drug business take,” (analyst Timothy) Chiang said in telephone interview last week. “He’s even said in past presentations: ‘We’re not into high science R&D; we’re into making money.’ I think that’s why Valeant sort of trades in a league of its own.”

. . .Pearson’s strategy and viewpoint on research costs have been consistent. When he combined Valeant with drugmaker Biovail Corp. in September, he cut about 25 percent of the workforce, sliced research spending and established a performance-based pay model tied to Valeant’s market value.

“I recognize that many of you did not sign up for either this strategy or operating philosophy,” Pearson wrote in a letter to staff at the time. “Many of you may choose not to continue to work for the new Valeant.”

Valeant does, in fact, make plenty of money. But my first thought (and the first thought of many of you, no doubt) is that it's making money because other people are willing to do the R&D that they themselves are taking a pass on. In other words, there's room for a few Valeants in the industry, but you couldn't run the whole thing that way, because pretty soon there'd be nothing for those whip-cracking revenue-maximizing managers to sell. Would there?

But we don't have to go quite that far. Edwards, for his part, goes on to wonder (as many have) whether the drug industry should settle out into two groups: the people that do the R&D and the people that sell the drugs. This idea has been proposed as a matter of explicit government policy (a nonstarter), but short of that, has been kicked around many times. Most of the time, this scheme involves smaller companies doing the research, with the big ones turning into the regulatory/sales engines, but maybe not:

If you agree that there ought to be a division of labor in the pharma business — that some companies should develop drugs and then sell those products to the companies that have the salesforces to market them — then this says some interesting things about recent corporate strategy moves among the largest companies. Pfizer (PFE) is downsizing its R&D operations and Johnson & Johnson (JNJ) is said to be on the prowl for a ~$10 billion acquisition.

Merck, on the other hand, is doubling down on its own research and stopped giving Wall Street guidance in hopes of lessening the scrutiny paid to its R&D expense base.


The heralds of this restructuring of the industry haven't quite called it this way, but instead of splitting from each other, perhaps the big companies will divide into two camps (Merck vs. Pfizer), and the smaller ones, too (Valeant vs. your typical small pharma). Prophecy's not an exact science - Marx thought that Germany and England would be the first countries to go Communist, you know.

For my part, I think that there are game-theory reasons why a big company won't explicitly renounce R&D. As it is, a big company can signal that "Yes, we'd like to do a deal for your drug (or your whole company), but you know, there are other things for us to do with the money if this doesn't work out." But if you're only inlicensing, then no, there aren't so many other things for you to do with the money. Everyone else can look around the industry and see what's available for you to buy, and thus the price of your deals goes up. You have no hidden cards from your internal R&D to play (or to at least pretend like you're holding). This signaling, by the way, is directed to the current and potential shareholders as well: "Buy our stock, because you never know what our brilliant people are going to come up with next". That's a more interesting come-on line than "Buy our stock. You never know who we're going to buy next." Isn't it?

And that's a separate question from the even bigger one of whether there are enough compounds out there to inlicense in the first place. No, I think that big companies will hold onto their own R&D in one form or another. But we'll see who's right.

Comments (47) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

April 7, 2011

What's Really Killing Pharma

Email This Entry

Posted by Derek

Update: link fixed!

Anthony Nicholls over at OpenEye really unburdens himself here, in a post that I recommend to anyone in the business (or anyone who wants to see what some of our problems are). Some highlights:

I have come to believe (and I admit that this is only a theory) that as more and more of pharma’s budget was funneled into advertising and direct marketing to both the general public and to doctors themselves, the path to the top in pharma ceased to be via the lab bench and instead was by way of Madison Avenue. . .

. . .I want to end with one of my favorite management insanities- the push within big pharma to remake themselves in the image of biotechs—the reasoning being that biotechs “get things done” and are more productive. Leaving aside the fact that over its history, biotech as a whole has mostly lost money (with only two years of profit in the last twenty-five), I wonder if it occurs to upper management that the principal difference between big pharma and biotech is simply much less upper management. If they are truly serious about making pharma like biotech, then upper management should simply resign. I’m confident that one step would do wonders for innovation.

There's a lot of good stuff in there, on management fads, dealing with the scientific staff, bean-counting, and more. Regular readers of this blog (and its comments section) will find a lot of their opinions reflected, for sure. . .

Comments (88) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

March 28, 2011

Value in Structure?

Email This Entry

Posted by Derek

A friend on the computational/structural side of the business sent along this article from Nature Reviews Drug Discovery. The authors are looking through the Thomson database at drug targets that are the subject of active research in the industry, and comparing the ones that have structural information available to the ones that don't: enzyme targets with high-resolution structures, and GPCRs without them. They're trying to see if structural data is worth enough to show up in the success rates (and thus the valuations) of the resulting projects.

Overall, the Thomson database has over a thousand projects in it from these two groups, a bit over 600 from the structure-enabled enzymes and just under 500 GPCR projects. What they found was that 70% of the projects in the GPCR category were listed as "suspended" or "discontinued", but only 44% of the enzyme projects were so listed. In order to correct for probability of success across different targets, the authors picked ten targets from each group that have led, overall, to similar numbers of launched drugs. Looking at the progress of the two groups, the structure-enabled projects are again lower in the "stopped" categories, with corresponding increases in discovery and the various clinical phases.
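For what it's worth, that 70% versus 44% gap is far too large to be sampling noise, which a quick two-proportion check makes clear (project counts below are rounded from the article's "just under 500" GPCR and "a bit over 600" enzyme projects). Whether the gap means what the authors think it means is another question entirely:

```python
import math

def two_proportion_z(stopped_a, n_a, stopped_b, n_b):
    """Standard two-proportion z statistic for a difference in rates."""
    p_a, p_b = stopped_a / n_a, stopped_b / n_b
    pooled = (stopped_a + stopped_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# ~70% of ~500 GPCR projects stopped vs. ~44% of ~600 enzyme projects
z = two_proportion_z(350, 500, 264, 600)
print(round(z, 1))  # z well above 8: not a chance fluctuation
```

A z statistic that size rules out luck, but it does nothing to rule out the confounding variables between the two groups of targets, which is the real issue here.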

You have to go to the supplementary info for the targets themselves, but here they are: for the enzymes, it's DPP-IV, BCR-ABL, HER2 kinase, renin, Factor Xa, HDAC, HIV integrase, JAK2, Hep C protease, and cathepsin K. For the receptor projects, the list is endothelin A receptor, P2Y12, CXCR4, angiotensin II receptor, sphingosine-1-phosphate receptor, NK1, muscarinic M1, vasopressin V2, melatonin receptor, and adenosine A2A.

Looking over these, though, I think that the situation is more complicated than the authors have presented. For example, DPP-IV has good structural information now, but that's not how people got into the area. The cyanopyrrolidine class of inhibitors, which really jump-started the field, was developed by analogy to a reported class of prolyl endopeptidase inhibitors (BOMCL 1996, p. 1163). Three years later, the best-characterized Novartis compound in the series was being studied by classic enzymology techniques, because it still wasn't possible to say just how it was binding. But even more to the point, this is a well-trodden area now. Any DPP-IV project that's going on now is piggybacking not only on structural information, but on an awful lot of known SAR and toxicology.

And look at renin. That's been a target forever, structure or not. And it's safe to say that it wasn't lack of structural information that was holding the area back, nor was it the presence of it that got a compound finally through the clinic. You can say the same things about Factor Xa. The target was validated by naturally occurring peptides, and developed in various series by classical SAR. The X-ray structure of one of the first solid drug candidates in the area (rivaroxaban) bound to its target, came after the compound had been identified and the SAR had been optimized. Factor Xa efforts going on now also are standing on the shoulders of an awful lot of work.

In the case of histone deacetylase, the first launched drug in that category (SAHA, vorinostat) had already been identified before any sort of X-ray structure was available. Overall, that target is an interesting addition to the list, since there's actually a whole series of them, some with structural information and some without. The big difficulty in that area is that we don't really know what the various roles of the different isoforms are, and thus how the profiles of different compounds might translate to the clinic, so I wouldn't say that structural data is helping with the rate-determining steps in the field.

On the receptor side, I also wouldn't say that it's lack of structural information that's necessarily holding things back in all of those cases, either. Take muscarinic M1 - muscarinic ligands have been known for a zillion years. That encompasses fairly selective antagonists, and hardly-selective-at-all agonists, so I'm not sure which class the authors intended. If they're talking about antagonists, then there are plenty already known. And if they're talking about agonists, I doubt that even detailed structural information would help, given the size of the native ligand (acetylcholine).

And the vasopressin V2 case is similar to some of the enzyme ones, in that there's already an approved drug in this category (tolvaptan), with several others in the same structural class chasing it. Then you have the adenosine A2A field, where long lists of agonists and antagonists have been found over the years, structure or not. The problem there has been finding a clinical use for them; all sorts of indications have been chased over the years, a problem that structural information would have not helped with in the least.

Now, it's true that there are projects in these categories where structure has helped out quite a bit, and it's also true that detailed GPCR structures would be welcome (and are slowly coming along, for that matter). I'm not denying either of those. But what does strike me is that there are so many confounding variables in this particular comparison, especially among the specific targets that are the subject of the article's featured graphic, that I just don't think that its conclusions follow.

Comments (32) + TrackBacks (0) | Category: Drug Development | Drug Industry History | In Silico

March 21, 2011

The Small Drug Companies And the Big Ones

Email This Entry

Posted by Derek

Here's a fascinating post from Bruce Booth on the R&D numbers for Big Pharma versus everyone else. If you had to guess, how much would you put big-company spending up against all the privately-financed startups? How many Lilliputians does it take to outweigh Gulliver?

Well, it turns out that the top 20 pharma companies spend about 26 times the budget of all the venture-backed companies put together. In fact, just comparing Pfizer's R&D budget alone to the universe of privately financed companies suggests that one Pfizer equals about 1000 small biotechs, or about 2-and-a-half times the number that exist today. Sheesh.

There are a lot of other interesting numbers to be found in that post - for example, given reasonable assumptions about facility costs, Big Pharma probably spends enough on its utility bills and building maintenance to fund the entire universe of VC-backed companies today. The whole thing looks very much like a steep power-law distribution to me, and that raises the question that Booth raises himself: how much more bang for the buck are we getting from the small companies, relative to the larger ones?
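Just as a sanity check, Booth's ratios hang together arithmetically. In the sketch below, only the ratios (26x, 1000x, 2.5x) come from the post; the venture-backed total is left as an arbitrary unit rather than a dollar figure:

```python
# Consistency check on the ratios from Booth's post. Only the ratios
# come from the post; dollar amounts are deliberately left abstract.
venture_total = 1.0                 # total venture-backed R&D spend, arbitrary units
top20 = 26 * venture_total          # top-20 pharma spend ~26x the venture total
n_biotechs = 1000 / 2.5             # one Pfizer ~= 1000 biotechs, ~2.5x the number
                                    # of venture-backed companies that exist (~400)
pfizer = (1000 / n_biotechs) * venture_total  # so one Pfizer ~= 2.5x all venture R&D
ratio = pfizer / top20
print(round(ratio, 3))              # Pfizer implied at ~9.6% of top-20 spend
```

That Pfizer's implied share of top-20 R&D spending comes out near ten percent is at least in the right neighborhood, which suggests the figures in the post are internally consistent.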

Comments (17) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

March 15, 2011

Pfizer: Bigger, Um, Isn't Better?

Email This Entry

Posted by Derek

As everyone who follows the industry knows, Pfizer has spent the last twenty years just getting bigger and bigger. Not that they haven't shed people, buildings, and whole research sites - have they ever - but they've shed those resources after buying them first. And as everyone who follows the industry knows, Pfizer's own labs have, either through bad luck or something more systemic, been rather unproductive during that same period. And now Lipitor moves ever closer to its patent expiration. What to do?

Well, this post by Matthew Herper at Forbes has one analyst's answer, and it might just be what Pfizer's CEO is thinking as well. It's something new, all right: get smaller.

Bernstein Pharmaceuticals analyst Tim Anderson has a note out this morning suggesting that Pfizer could sell, spin off, or otherwise divest divisions accounting for $32 billion of its $67 billion in sales, reinventing itself as a pure pharmaceutical research firm like Eli Lilly, Bristol-Myers Squibb, or AstraZeneca.

“We recently met with Pfizer’s new CEO Ian Read, and had we not heard it firsthand, we might not have appreciated just how serious he is about potentially splitting up the company,” Anderson writes. He goes on to say that Pfizer may shrink its revenue base by 40%, leaving behind only what Read calls the “innovative core."

The more cynical among you might be saying "Where's this innovative core, eh?", but hear the guy out. He's talking about ditching all of Pfizer's non-pharma assets, and cutting back to. . .discovering drugs. Combine that with the recent cutbacks in various therapeutic areas, and you have a Pfizer that's actually turning its back on the strategy of the last two decades. Bigger, as it turns out, has not been better. Who knew?

Well, a lot of people, for sure. I've been complaining about it, genius that I am, for years now, but I'm sure not alone. It's interesting to see someone at the top, though, who's willing to admit this and to act on it. If he does, though, it'll be impossible not to wonder what might have been if the company hadn't made the big round trip through all those acquisitions. The core pharma assets that they're thinking about cutting back to are the pieces and hunks of a lot of other companies, whose people and departments have been shaken and jerked around something fierce. What shape would they be in if they hadn't been Pfizerized? We'll never know.

Comments (33) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

March 7, 2011

The Costs of Drug Research: Beginning a Rebuttal

Email This Entry

Posted by Derek

Note: a follow-up post to this one can be found here.

I've had a deluge of emails asking me about this article from Slate on the costs of drug research. It's based on this recent publication from Donald Light and Rebecca Warburton in the London School of Economics journal Biosocieties, and it's well worth discussing.

But let's get a few things out of the way first. The paper is a case for the prosecution, not a dispassionate analysis. The authors have a great deal of contempt for the pharmaceutical industry, and are unwilling (or unable) to keep it from seeping into their prose. I'm tempted to reply in kind, but I'm supposed to be the scientist in this discussion. We'll see how well I manage.

Another thing to mention immediately is that this paper is, in fact, not at all worthless. In between the editorializing, they make some serious points, and most of these are about the 2003 Tufts (diMasi) estimate of drug development costs. This is the widely-cited $802 million figure, and the fact that it's widely cited is what seems to infuriate the authors of this paper the most.

Here are their problems with it: the Tufts study surveyed 24 large drug companies, of which 10 agreed to participate. (In other words, this is neither a random nor a comprehensive sample). The drugs used for the study numbers were supposed to be "self-originated", but since we don't know which drugs they were, it's impossible to check this. And since the companies reported their own numbers, these would be difficult to check, even if they were made available drug-by-drug (which they aren't). Nor can anyone be sure that variations in how companies assign costs to R&D haven't skewed the data as well. We may well be looking at the most expensive drugs of the whole sample; it's impossible to say.

All of these are legitimate objections - the Tufts numbers are just not transparent. Companies are not willing to completely spread their books out for outside observers, in any industry, so any of these estimates are going to be fuzzy. Light and Warburton go on to some accounting issues, specifically the cost-of-capital estimate that took their estimated cost for a new drug from 400 million to 800 million. That topic has been debated around this blog before, and it's important to break that argument into two parts.

The first one is whether it's appropriate to consider opportunity costs at all. I still say that it is, and I don't have much patience for the "argument from unfamiliarity". If you commit to some multi-year use of your money, you really are forgoing what you could have earned with it otherwise. You're giving it up - it's a cost, whether you're used to thinking of it that way or not. But the second part of the argument is, just how much could you have earned? The problem here is that the Tufts study assumes 11% returns, which is just not anywhere near realistic. Mind you, it's on the same order of fantasy as the returns that have been assumed in the past inside many pension plans, but we're going to be dealing with that problem for years to come, too. No, the Tufts opportunity cost numbers are just too high.
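Here's a minimal sketch of how the cost-of-capital adjustment works, assuming (for illustration only) the Tufts out-of-pocket figure of roughly $403 million spent evenly over twelve years; the actual study front-loads preclinical spending, which is why its multiplier comes out somewhat higher than this uniform-spending version:

```python
# How a cost-of-capital charge roughly doubles an out-of-pocket R&D figure.
# Assumptions (mine, not the Tufts study's exact model): ~$403M spent in
# equal annual installments over 12 years, each installment compounded
# forward to approval at the stated discount rate.

def capitalized_cost(out_of_pocket, years, rate):
    annual = out_of_pocket / years
    # each year's outlay forgoes `rate` returns until approval at year `years`
    return sum(annual * (1 + rate) ** (years - t) for t in range(1, years + 1))

print(round(capitalized_cost(403, 12, 0.11)))  # ~763: near the $802M headline number
print(round(capitalized_cost(403, 12, 0.05)))  # ~535: a 5% rate shrinks it sharply
```

Swap in a lower discount rate and the capitalized figure drops fast, which is exactly why the choice of 11% is where the real argument lies.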

Then there's the tax situation. I am, I'm very happy to say, no expert on R&D tax accounting. But it's enough to say that there's arguing room about the effects of the various special tax provisions for expenditures in this area. And it's complicated greatly by different treatment in different parts of the US and the world. The Tufts study does not reduce the gross costs of R&D by tax savings, while Light and Warburton argue that it should. Among other points, they argue that the industry is trying to have it both ways - that cost-of-capital arguments make R&D expenditures look like a long-term investment, while for tax purposes, many of these are deductible each year as more of an ordinary business expense.

Fine, then - I'm in agreement, on general principles, with Light and Warburton when they say that the Tufts study estimates are hard to check and likely too high. But here's where we part company. Not content to make this point, the authors turn around and attempt to replace one shaky number with another. The latter part of their paper, to me, is one attempt after another to push their own estimate of drug R&D costs into a world of fantasy. Their claim is that the median R&D cost for a new drug is about $43 million. This figure is wrong.

For example, they have total clinical trial and regulatory review time dropping (taken from this reference - note that Light and diMasi, lead author of the Tufts study, are already fighting it out in the letters section). But if that's true, why isn't the total time from discovery to approval going down? I've been unable to find any evidence that it is, and my own experience certainly doesn't make me think that the process is going any faster.

The authors also claim that corporate R&D risks are much lower than reported. Here they indulge in some rhetoric that makes me wonder if they understand the process at all:

Reports by industry routinely claim that companies must test 5000-10000 compounds to discover one drug that eventually comes to market. Marcia Angell (2004) points out that these figures are mythic: they could say 20,000 and it would not matter much, because the initial high-speed computer screenings consume a small per cent of R&D costs. . .

The truth is, even a screen of 20,000 compounds is tiny. And those are real, physical, compounds, not "computer screenings". It's true, though, that high-throughput screening is a small part of R&D costs. But the authors are mixing up screening and the synthesis of new compounds. We don't find our drug candidates in the screening deck - at least, not in any project I've worked on since 1989. We find leads there, and then people like me make all kinds of new structures - in flasks, dang it, not on computers - and we test those. Here, read this.

The authors go on to say:

Many products that 'fail' would be more accurately described as 'withdrawn', usually because trial results are mixed; or because a company estimates that the drug will not meet their high sales threshold for sufficient profitability. The difference between 'failure' and 'withdrawal' is important, because many observers suspect that companies withdraw or abandon therapeutically important drugs for commercial reasons. . .

Bring out some of those observers, then! And bring on the list of therapeutically important drugs that have been dropped out of the clinic just for commercial reasons. Please, give us some examples to work with here, and tell me how the disappointing data that the companies reported at the time (missed endpoints, tox problems) were fudged. Now, I have seen a compound fall out of actual production because of commercial reasons (Pfizer's Exubera), but that was partly because it didn't turn out to be as therapeutically important as the company convinced itself that it would be.

And here's another part I especially like:

Company financial risk is not only much lower than usually conveyed by the '1 in 5000' rhetoric, but companies spread their risks over a number of projects. The larger companies are, and the more they merge with or buy up other companies, the less risk they bear for any one R&D project. The corporate risk of R&D for companies like Pfizer or GlaxoSmithKline are thus lower than for companies like Intel that have only a few innovations on which sales rely.

Well, then. That means that Pfizer, as the biggest and most-merged-up drug company in the world, must have minimized its risk more than anyone in the industry. Right? And they should be doing just fine by that? Not laying people off right and left? Not closing any huge research sites? Not wondering frantically how they're going to replace the lost revenue from Lipitor? Not telling people that they're actually ditching several therapeutic areas completely because they don't think they can compete in them, given the risks? Not announcing a stock buyback program, because they apparently (and rather shamefully) think that's a better use of their money than putting it back into more R&D? I mean, how can Intel be doing better than that? It's almost like chip design is a different sort of R&D business entirely.

Well, this post is already too long, and there's more to discuss in another one, at least. But I wanted to add one more argument from economic reality, an extension of those little questions about Pfizer. If the cost of R&D for a new drug really were $43 million, as Light and Warburton would have it, and the financial and tax advantages so great, why isn't everyone pouring money into the drug industry? Why aren't VC firms lining up to get in on this sweet deal? I mean, $43 million for a drug, you should be able to raise that pretty easily, even in this climate - and then you just stand back as the money gushes into the sky. Don't you?

Why are drug approval rates so flat (or worse)? Why all the layoffs? Why all the doom and gloom? We're apparently doing great, and we never even knew.

Comments (48) + TrackBacks (0) | Category: Business and Markets | Clinical Trials | Drug Development | Drug Industry History | Drug Prices | Why Everyone Loves Us

February 11, 2011

Drug Problems: A Diagnosis

Email This Entry

Posted by Derek

There's no shortage of "What's Wrong With the Drug Industry" articles these days. I wanted to call attention to another one that's just appeared in JPET. I don't agree with all of it, but it does make some important points.

If I had to give a one-line summary of its thesis, it would be "Drug discovery forgot pharmacology and lost its way". The author, Michael Williams of Northwestern (and of 35 years at Merck, Novartis, Abbott, and Cephalon) is a pharmacologist himself, and feels that the genomics era (and indeed, the whole target-driven molecular biology era) has a lot to answer for. He also thinks that people have become seduced by technology:

Rather than creating synergies by using multiple complementary
technologies to find answers to discrete questions in a focused and coherent manner, technology-driven drug discovery has become a discipline that justifies its existence by searching for questions. An example of this is the proteomics approach to target validation, where the intrinsic complexity of the protein component of a cell or tissue necessitates a reductionistic approach where experimental samples must be separated into bins to facilitate analysis with timelines for data generation that can stretch into months or years.

To those with a technology bent, new iterations on a technology, regardless of its utility, inevitably become “must haves,” with acquisition and implementation becoming ends unto themselves. . .

One place I disagree with him is in his assertion that "Implicit in the HTS/combinatorial chemistry paradigm was/is that each target was equally facile as a starting point for a drug discovery project". That hasn't been my experience at all - there's always been a lot of arguing about which targets should be taken to screening and of what kind (how many GPCRs versus enzymes versus what-have-you). Williams makes his point in the context of the genomics frenzy, when it was thought that all kinds of targets would be emerging. But at least where I worked, the hope was that genomics would provide a lot of good, tractable targets that we hadn't known about, rather than just a long list of orphan receptors and whatzitases. (Mind you, that list is exactly what we ended up with).

Williams then discusses the problem of whether some targets are, in the end, truly intractable. The "just one more whack at it, and we'll get there" approach sometimes works, but it does try the patience:

Drugs active at opioid receptors remain the gold standard of analgesic care and include morphine, codeine, and oxycodone. With the discovery of the mu, delta, and kappa receptor subtypes in the 1970s, it was anticipated that development of selective agonists for these receptors would result in drugs that had a reduced liability for the respiratory depression, tolerance, constipation, and addiction associated with classical opioids. Some 40 years later, despite considerable efforts in medicinal chemistry and molecular biology to refine/define the structural characteristics of receptor-selective NCEs, the “holy grail” of side effect-free opioids appears as elusive as ever, with a multitude of compounds showing compelling preclinical data but failing to demonstrate these properties in the clinic. . .

Another of his examples in this line is the muscarinic ligands, which I know from personal experience, as a search of my name through the literature and patent databases will show. And although GPCRs are among the most valuable target classes of all, we still have to face up to some disturbing facts about them:

Thus, for both of these G protein-coupled receptor families, a major question is whether their function is so critical, nuanced, and complex as to preclude advances based on the molecular approaches currently being used that may lack the necessary heuristic relationship to the complexity/redundancies of the systems present in a more physiological or disease-related milieu. Based on progress over the past 40 years, it may well be concluded that the opioid and muscarinic receptor families represent intractable targets in the search for improved small-molecule therapeutics. But maybe the next NCE….???

At the end of the article is a table of possible approaches to get out of the preclinical swamp. Interestingly, it's noted that it was "generated at the request of one of the reviewers", who probably asked what the author proposed to do about all this. I won't reproduce it all here, but it boils down to being more rigorous about data and statistics, using the hardest, most real-world models, and giving people the time to pursue these approaches even if they're going against the crowd while doing so. I don't see any of his recommendations that I disagree with, but (and this isn't his fault), I don't see any of them that I haven't seen before, either. There needs no ghost, my lord, come from the grave, to tell us this.

Comments (23) + TrackBacks (0) | Category: Drug Development | Drug Industry History

February 10, 2011

The Top 200 Drugs

Email This Entry

Posted by Derek

If you haven't seen the "Top 200 Drugs" posters, available as PDFs from this group at the University of Arizona, then give them a look. It's good to have this information in graphical form, with chemical structures attached.

One thing that stands out as you browse through the table is the number of compounds that make you say "Hold it - that's a drug?" I think that's one of the most valuable things about the poster, actually. It's worth seeing how simple some useful compounds are (valproic acid, anyone?), or what functional groups have made it through. The next edition of the poster will surely feature Gilenya (fingolimod), whose structure baffles and offends almost every chemist at first glance.

It's a dose of humility, seeing these things. And while it's true that we get regular doses of humiliation in the research business, our pride is pretty resilient, too.

Comments (45) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

February 8, 2011

Too Much Outsourcing: Has the Line Been Crossed?

Email This Entry

Posted by Derek

We've talked a lot about outsourcing on this blog, since it's been one of the biggest features of life in this industry over the last few years.

It's not hard to see why. Costs. We spend too much money finding drugs (which don't always make it back even when they succeed). Anything that cuts costs more than it cuts productivity is going to be tried.

But any idea can be taken too far. Here's Boeing's current CEO, talking about the cost overruns on the 787 Dreamliner project, and how they were made worse by overzealous outsourcing:

. . .the 787's global outsourcing strategy — specifically intended to slash Boeing's costs — backfired completely.

"We spent a lot more money in trying to recover than we ever would have spent if we'd tried to keep the key technologies closer to home," Albaugh told his large audience of students and faculty.

Boeing was forced to compensate, support or buy out the partners it brought in to share the cost of the new jet's development, and now bears the brunt of additional costs due to the delays.

Read the whole article; it's extremely interesting, and especially so for those of us in the drug industry. There was a Boeing employee who specifically criticized this process some years ago, and the whole return-on-net-assets view of the business world, and the company seems (belatedly) to be giving him his due. His line about how the biggest return would come from having someone else build the plane and then slapping a tiny Boeing decal on the nose is funny, but in a painful way.

So here's the question: have companies in our industry reached this point? And if so, which ones? Reports like this one make me think that some organizations have crossed that invisible line, and will regret it. I think that "zero outsourcing" is probably a bad idea. But "way too much outsourcing" could be worse. . .

Comments (48) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

February 4, 2011

Merck's Strategy vs. Pfizer's

Email This Entry

Posted by Derek

Here's an interesting contrast after all the Pfizer discussion here over the last few days. Merck's CEO, Ken Frazier, has actually pulled the firm's earnings-per-share guidance, saying that the recent trouble with vorapaxar and regulatory concerns in general make it impossible to say for certain what EPS growth will be. He also says that he'd rather have a freer hand to pay for both sales and research, in the interest of long-term growth.

Not everyone's buying it:

Analysts on Merck’s conference call were skeptical about the reasoning behind the guidance change. Catherine Arnold of Credit Suisse, who called the change “befuddling” in her note to investors, told Frazier that investors expected Merck to “share the pain” of shareholders and noted that vorapaxar, launching in 2012, should have been a “drag on earnings, not a positive.” Frazier replied that Merck’s cost-cutting efforts were ahead of schedule, but that he was faced with a decision to either withdraw guidance or commit to cutting projects that could make money in the future. He also argued that because Merck’s sales reps already visit cardiologists to sell heart drugs, selling vorapaxar, too, would not cost much more.

Well, if he's sincere in this, I have to salute the guy. I don't think that the Schering-Plough merger was a good thing, and Merck has certainly laid off people and disrupted a lot of things because of it. But if they're not going to pull a Pfizer - which I will define for now as "Keep cutting to make the numbers, and when you can't do that any more, then go out and buy someone else who has things to sell and then cut them" - then good for Merck. This topic came up explicitly during the earnings conference call:

Jami Rubin - Goldman Sachs Group Inc.: More of a strategic question. Just given the setback that you've faced with vorapaxar, I'm just wondering if you can provide us with your view of the research model going forward? I mean, might it make sense for some of these very large, very expensive, very risky outcomes trials such as vorapaxar, how do you buffer these trials? I mean, might it have made sense to isolate some of these subgroups before pursuing a large trial, and I know that it's obviously what's happening with anacetrapib. Maybe if you could talk just in terms of how you see the R&D spend going forward. Also, it's interesting that yesterday or the day before Pfizer announced a significant cut to its R&D. And I'm just wondering if you can talk about your R&D spend going forward, and if you see opportunities to really rethink that budget and to improve the R&D output. . .

Kenneth Frazier You asked some very typical questions in that set of questions. Let me start with vorapaxar. So I assume that what you're essentially asking is in hindsight, could we have done two separate trials. One in the ACF population, one with essentially the prevention population. I can't comment on the trial design. It was so long ago, but what I can say is that as we, as a committee with Peter and Adam and Peter Kellogg and myself, what we do regularly in the company is try to assess all the programs that we're relying on. We try to look at them from a science and technical and medical standpoint. We also try to look at them from a commercial standpoint. So we try to engage each program one by one, in addition to having the kinds of tough metrics we have in place around ROI and value creation in the pipeline. What I would also say is that we recognize that our strategy comes with it a certain amount of complexity, lengthiness and unpredictability because we are seeking innovative medically important therapies. And with vorapaxar, we know the risk of trying new mechanisms and approaches. I still continue to have optimism because the DSMB continued in 2P, we will see what the data shows. If the data shows a benefit to that population, this could still be a very important drug going forward.

On the Pfizer question, obviously, I can't comment on anyone else's view of their particular pipeline or the investment requirements that they face at this time. But I will tell you that we are mindful of the need to drive productivity, greater productivity in our R&D program. Peter Kim and his colleagues understand that we are focused on it. We are trying to take cost out. We're trying to increase the probability of success as we go forward. But as a company, I think we are saying that we are committed to innovation as a strategy, and we believe that over the long term it will pay off. And if you'll indulge me one minute, last week I attended the funeral of John Horan, who was the CEO of Merck a number of years ago before Roy Vagelos. One of the things he was proud of was that he kept the focus on research during a fallow period for Merck Research in the 70s, and that's exactly what led to a state of innovation that has made the modern-day Merck. So I am not blind to what investors want us to do. They want us to invest in prudent ways and ways that actually drive ROI and productivity. But we, as a company, believe that the only sustainable strategy in the health care environment that we're in is real innovation that makes the difference to patients and payers. . .

As I said above, I can disagree with some of the ways that Merck is trying to run its R&D business, not that they're asking for advice from me. But it at least appears as if their heart - and their head - might be in the right place. Or they at least want to make it appear as if they're in the right place. And that they're willing to tick off some Wall St. analysts in order to be seen to be doing that. Which should count for something - you'd think.

Comments (37) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

February 1, 2011

The NIH's New Drug Discovery Center: Heading Into the Swamp?

Email This Entry

Posted by Derek

I've been meaning to comment on the NIH's new venture into drug discovery, the National Center for Advancing Translational Sciences. Curious Wavefunction already has some thoughts here, and I share his concerns. We're both worried about the gene-o-centric views of Francis Collins, for example:

Creating the center is a signature effort of Dr. Collins, who once directed the agency’s Human Genome Project. Dr. Collins has been predicting for years that gene sequencing will lead to a vast array of new treatments, but years of effort and tens of billions of dollars in financing by drug makers in gene-related research has largely been a bust.

As a result, industry has become far less willing to follow the latest genetic advances with expensive clinical trials. Rather than wait longer, Dr. Collins has decided that the government can start the work itself.

“I am a little frustrated to see how many of the discoveries that do look as though they have therapeutic implications are waiting for the pharmaceutical industry to follow through with them,” he said.

Odd how the loss of tens of billions of dollars - and vast heaps of opportunity cost along the way - will make people reluctant to keep going. And where does this new center want to focus in particular? The black box that is the central nervous system:

Both the need for and the risks of this strategy are clear in mental health. There have been only two major drug discoveries in the field in the past century; lithium for the treatment of bipolar disorder in 1949 and Thorazine for the treatment of psychosis in 1950.

Both discoveries were utter strokes of luck, and almost every major psychiatric drug introduced since has resulted from small changes to Thorazine. Scientists still do not know why any of these drugs actually work, and hundreds of genes have been shown to play roles in mental illness — far too many for focused efforts. So many drug makers have dropped out of the field.

So if there are far too many genes for focused efforts (a sentiment with which I agree), what, exactly, is this new work going to focus on? Wavefunction, for his part, suggests not spending so much time on the genetic side of things and working, for example, on one specific problem, such as Why Does Lithium Work for Depression? Figuring that out in detail would have to tell us a lot about the brain along the way, and boy, is there a lot to learn.

Meanwhile, Pharmalot links to a statement from the industry trade group (PhRMA) which is remarkably vapid. It boils down to "research heap good", while beating the drum a bit for the industry's own efforts. And as an industrial researcher myself, it would be easy for me to continue heaping scorn on the whole NIH-does-drug-discovery idea.

But I actually wish them well. There really are a tremendous number of important things that we don't know about this business, and the more people working on them, the better. You'd think. What worries me, though, is that I can't help but believe that a good amount of the work that's going to be done at this new center will be misapplied. I'm really not so sure that the gene-to-disease-target paradigm just needs more time and money thrown at it, for example. And although there will be some ex-industry people around, the details of drug discovery are still likely to come as a shock to the more academically oriented people.

Put simply, the sorts of discoveries and projects that make stellar academic careers, that get into Science and Nature and all the rest of them, are still nowhere near what you need to make an actual drug. It's an odd combination of inventiveness and sheer grunt work, and not everyone's ready for it. One likely result is that some people will just avoid the stuff as much as possible and spend their time and money doing something else that pleases them more.

What do I think that they should be doing, then? One possibility is the Pick One Big Problem option that Wavefunction suggests. What I'd recommend would also go against the genetic tracery stuff: I'd put money into developing new phenotypic assays in cells, tissues, and whole animals. Instead of chasing into finer and finer biochemical details in search of individual targets, I'd try to make the most realistic testbeds of disease states possible, and let the screening rip on that. Targets can be chased down once something works.

But it doesn't sound like that's what's going to happen. So, reluctantly, I'll make a prediction: if years of effort and billions of dollars thrown after genetic target-based drug discovery hasn't worked out, when done by people strongly motivated to make money off their work, then an NIH center focused on the same stuff will, in all likelihood, add very little more. It's not like they won't stay busy. That sort of work can soak up all the time and money that you can throw at it. And it will.

Comments (36) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | Drug Development | Drug Industry History

January 31, 2011

What's the Most Worthwhile New Drug Since 1990?

Email This Entry

Posted by Derek

A query from a reader prompts me to ask this question, in preparation for a rather long post in the near future. What do you think is the most worthwhile new pharmaceutical brought to market since 1990? That's an arbitrary cutoff, but twenty years is a reasonable sample size. And I'll let everyone define "worthwhile" as they see fit - improvement over existing drugs, opening new therapeutic areas, cost-effectiveness, what have you. Just be sure to make your case, briefly, when you nominate a candidate. Let's see, first off, if it's a topic that can be agreed on at all.

Comments (61) + TrackBacks (0) | Category: Drug Development | Drug Industry History

January 5, 2011

How to Fund a Nonprofit Drug Company - And Others?

Email This Entry

Posted by Derek

Here's a business idea for a nonprofit drug company, sent along by reader and entrepreneur Matt Grosso. I don't necessarily think that it would work (see below), but it's worth talking about, since some of its features are worthwhile. Others, though, illustrate what may be some common misperceptions of how drug development works. Here's the key feature:

The idea here is to create a non-profit which would accept contributions for testing and bringing to market specific drugs. . .Members would vote with their contribution dollars for specific drugs. Paid staff would curate a wiki that supported periodic comparisons between various candidates approaching readiness for a specific market, which would ensure that member votes had the benefit of the best available information and expert opinion.

This could create an alternate route for drug startups focused on particular compounds to get their product to market.

I think that the ability to specifically take in contributions is a good one - people and organizations are more likely to fund defined aims that they agree with. One big problem, though, is that there's a limit to which we can define such things in this business. And that might make the whole idea break down.

To be honest, if a nonprofit really took in contributions for the development of specific drugs, they'd run a great risk of disappointing and enraging their donation base. That's because the honking huge majority of specific drugs in development never make it. The success rates in the clinic are pretty well known: roughly 90% of everything that goes into clinical trials never makes it to market. That's a hard sell for contributors! And if you moved the point at which you asked for donations back into the preclinical stage, the situation would get much, much worse. At the "Hey, we just thought of a neat new target" step, you'd be offering your contributors worse odds and payoffs than they could get in the state lottery.
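To see just how hard a sell that is, here's a rough back-of-the-envelope sketch. It uses only the round ~10% clinical success rate quoted above, treats each candidate as an independent coin flip (a simplifying assumption; real programs are correlated), and ignores preclinical attrition entirely, which would make the numbers far worse:

```python
# Odds for donors funding clinical-stage drug candidates, assuming a flat
# ~10% success rate (the round figure above) and independent outcomes.

p_success = 0.10  # fraction of clinical candidates that reach the market

def chance_of_a_winner(n: int, p: float = p_success) -> float:
    """Probability that at least one of n funded candidates is approved."""
    return 1 - (1 - p) ** n

for n in (1, 5, 10, 20):
    print(f"{n:2d} candidates -> {chance_of_a_winner(n):.0%} chance of an approval")
```

Funding one specific compound is close to buying a lottery ticket; only a portfolio of programs pulls the odds into tolerable territory, which is exactly the progression the risk-level list below walks through.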

For new compounds and new modes of action, the risks decrease in roughly the following order. At the same time, the time it takes to get an answer increases in roughly the same way:

1. Specific single compound with a defined mechanism. Hold your breath, and good luck!
2. Defined chemical class of compounds targeting the same mechanism. Now you've got some fallback, although it might not be enough to help in case of trouble.
3. Specific mechanism, with several chemical series. This gives you several shots, although if your mechanism of action is off, all will still be in vain.
4. Phenotypic readout with a range of compounds (that is, they seem to do the right thing, but you're not sure how). Risk varies according to how realistic your assays are, and how many different compounds you've picked up.
5. Targeting a broad class of related mechanisms - for example, "reduce LDL", "disrupt bacterial membranes", "interrupt inflammatory cascade". Note that we're now getting farther and farther away from individual compounds.
6. Targeting one specific therapeutic area: antivirals, Alzheimer's, osteoporosis, etc.
7. Trying to balance things out with several therapeutic areas, with projects in each one at varying levels of risk.

Note that we've also illustrated the progression from "wing and a prayer startup" to "fully integrated drug company". That follows exactly from the levels of risk involved, which correlate with the amount of money on the table as well, in exactly the way the ranking of poker hands correlates with how likely they are to occur. Note also that even in that final stage, we apparently still have not mitigated the risks enough, given our cost structure. (Look at the state of the industry).

To get back to the nonprofit idea, another thing that might work out less well in practice than in principle is that wiki for the potential investors/donors. This is what companies try to do internally: comparing their programs by the same criteria, head to head, then determining how to resource them. 'Taint easy. I don't know of any organization that truly thinks that they do as well at this as they should. Even a bunch of perfectly clear-headed and honest assessments (which, by the way, cannot be universally assumed) are still complicated by unquantifiable risks. I think that people might be alarmed by the number of times you just have to push things ahead to see what's going to happen.

Even after all these qualifications, though, I think that there's merit in the idea of breaking out individual drug development programs. I've long kicked around the idea of whether a company could fund programs by essentially selling shares in its various clinical candidates, with a cut of the profits coming if things work out. It would be an accounting mess, and everyone would have to keep those failure rates in mind, but there are still people who'd be willing to take a crack at it, for a given level of possible return. Those donors/investors might even be less put out than the charitable/nonprofit ones - everyone's had investments go bad, but no one wants to feel like their charitable donation was wasted. Thoughts?

Comments (20) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

January 3, 2011

And So, 2011

Email This Entry

Posted by Derek

So, let's get things underway around here: 2010 was, as has been the rule, Not A Good Year for the drug industry. But overall, I think it did break the pattern that had been going since about 2006, of each year being worse than the one before. That's just an impression, mind you, but perhaps some sort of bottom has been reached?

We'll find out. My guess is that 2011 will end up looking more like the prelude to 2012. We have a number of patent expirations coming up (with Lipitor, late this year, as the marquee event), but they'll probably affect next year's earnings more than this year's. (Note that if you're a research-driven drug company, these things are bad news, but if you're a generic company (or a drug store chain), the picture is much rosier.)

Predictions for this year can be entered in the comments section. Which company looks to have the best time of it, and which the worst?

Comments (9) + TrackBacks (0) | Category: Drug Industry History

December 13, 2010

Big Pharma's Lost Stock Market Decade

Email This Entry

Posted by Derek

Talking about Pfizer's stock price the other day led several people to note in the comments that it's not just PFE stock that's had a bad ten years: a lot of other big drug companies have, too, including some (like Lilly) that have very much declined to grow by merging. And it's true, as this chart will show.
drug%20company%20stock%20chart.jpg
This is a sampling of some big US-based pharma companies that have been around during the whole ten-year span. Note that J&J is actually ahead of the index (in red), and it and Abbott are the only two that can claim that distinction. (They're also the only two on the list with a significant medical devices/diagnostics presence - coincidence?)

The pure drug plays have all been pretty rough. Merck, Bristol-Myers Squibb, and Lilly are right down there with Pfizer. What I was trying to get across the other day, though, was not that Pfizer had been awful relative to its peers, but that it's been just as bad. All that merger activity, all that turmoil, has come down to this: same lousy performance as the other big companies. What, from an investing standpoint, has it done for anyone?

Now (as was also pointed out in the comments last week), these charts neglect reinvested dividends, but an S&P index fund's performance would show some of that effect as well (although not as large as for some individual stocks, for sure). Another big point: we'll never be able to run the control experiment of dialing back the time machine and letting Pharmacia/Upjohn, Warner-Lambert, and Wyeth all stay un-Pfizered. (Not to mention what Pfizer might be were it to have remained un-super-sized.) There are too many variables. All we can say is that there's no evidence that any of the big boardroom-level strategies have been superior to any other.

But given the way drug discovery has been going the last ten or fifteen years, it's hard to see anything making such charts look good, mergers or no mergers. That brings up a causality problem, too - it's important to remember that while mergers don't seem to have been doing any favors for drug research, the existing problems of drug research are what have led to many mergers. What was it that David Foster Wallace once said - that the definition of a harmful addiction is something that presents itself as the cure for the problems it's causing?

Update: in case you're wondering if this is just an effect of starting ten years ago (when the market was much livelier), you can use that Google Finance link to move the starting point back. From what I can see, you have to go back to 1994 or 1995 to find a point at which most of the drug stocks would have outperformed the S&P 500 (and as that last-ten-year chart shows, all of that happens early). Merck lags for a long time, and Bristol-Myers Squibb and Lilly still aren't above the line even if you start in the mid-1980s.

Comments (32) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

December 10, 2010

Have Pfizer's Investors Had Enough?

Posted by Derek

It's taken a while, but have Pfizer's long-suffering investors finally had enough? FiercePharma has a roundup of stories that suggest that some of the institutions are upset over the abrupt departure of Jeffrey Kindler this past weekend. The quote that leaps out is one from an unnamed hedge fund manager who calls the current board "value destroyers".

Who'd disagree? But who would think that it would take this long for such people to realize the value that's been shredded over the years by Pfizer's acquire-acquire-acquire strategy? Here's ten years of Pfizer versus the S&P500. Up until 2004, with a couple of brief excursions, Pfizer stock basically tracks the index. After that, it lags badly. Over a decade of hard work on Wall Street, analyzing Pfizer's prospects, peering into their books, assessing their portfolio, weighing the chances for each drug, the ramifications of each acquisition: in vain. All in vain, because you'd have done far, far better with the money by parking it in an index fund and walking away to do something more meaningful with your time. Not that you wouldn't have lost money doing that; the S&P 500, damn it all, is down over a ten-year span. But you'd have lost a lot more if you'd listened to Pfizer's press releases or anyone who recommended that you buy their stock.

I've been complaining here about Pfizer's strategy since at least 2003, but it's not like I'm happy about being right. So many people have had their lives disrupted by Pfizer's acquisitions, and there's been so little return on all of it that it's hard to feel good about anything associated with the company's recent history.

And now that all these gigantic deals have been done, the employees have been jerked around, and the facilities closed, what are these big investors proposing to do about it? An angry committee has been formed to discuss strategic barn-door-closing initiatives, but the horses are over the horizon.

Comments (54) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

November 30, 2010

More Advice From Andrew Witty

Posted by Derek

Andrew Witty of GSK has a one-page essay in The Economist on the problems of the drug industry. None of the background he gives will be news to anyone who reads this site, as you'd imagine - lower rates of success in discovery, higher costs, patent expirations, etc.

Here's his take on research and development:

. . .it is clear that the size of the industry will continue to contract in the drive for efficiency. For some players, more mergers and acquisitions are likely, but others will plan to shrink, and all parts of the value chain from R&D through to production and sales and marketing will be affected. . .

. . .In the past the problem of R&D in big pharmaceutical companies has been “fixed” by spending more and by using scale to “industrialise” the research process. These are no longer solutions: shareholders are not prepared to see more money invested in R&D without tangible success. If anything, based on a rational allocation of capital, R&D should now be consuming less resource.

Yikes. I'm not sure where that last sentence comes from, to be honest with you. Does Witty think that we now know so much about what we're doing that it shouldn't cost so much for us to do it? Or that it shouldn't cost so much to comply with the regulatory authorities, for some reason? I'm a bit baffled, and if someone can explain that "rational allocation" that he speaks of, I'd be grateful.

And I'd like to say that the rest of the piece advances some useful ideas, but I can't do that with a straight face. (To be fair, if Andrew Witty has some great ideas for making GSK more productive, he's most certainly not going to lay them out for everyone in The Economist). So it's all innovative business models, dynamic partnerships, recapturing creative talent in the drug labs, and so on. That last line will no doubt inspire a lot of bitter comment, considering what things have been like at GSK in the last few years.

His main pitch seems to be that drug companies need a "fair reward for innovation", and that's one of those things that's hard to disagree with on the surface. But unpacking it, that's the tough part, because everyone involved will start disagreeing on what's innovative, what might constitute a reward, and (especially) what's fair. Witty has been giving speeches on this for a while now, and I'd say that this latest article is just the condensed version.

Comments (55) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

November 23, 2010

Of Deck Chairs, Six Sigma, And What Really Ails Us

Posted by Derek

We talked a little while back here about "Lean Six Sigma" as applied to drug discovery organizations, and I notice that the AstraZeneca team is back with another paper on the subject. This one, also from Drug Discovery Today, at least doesn't have eleventeen co-authors. It also addresses the possibility that not everyone in the research labs might welcome the prospect of a business-theory-led revolution in the way that they work, and discusses potential pitfalls.

But I'm not going to discuss them here, at least not today. Because this reminds me of the post last week about the Novartis "Lab of the Future" project, and of plenty of other initiatives, proposals, alliances, projects, and ideas that are floating around this industry. Here's what they have in common: they're all distractions.

Look, no one can deny that this industry has some real problems. We're still making money, to be sure, but the future of our business model is very much in doubt. And those doubts come from both ends of the business - we're not sure that we're going to be able to get the prices that we've been counting on once we have something to sell, and we're not sure that we're going to have enough things to sell in the first place. (There, that summarized about two hundred op-ed pieces, some of them mine, in one sentence. Good thing that I'm not paid by the word for this blog.) These problems are quite real - we're not hallucinating here - and we're going to have to deal with them one way or another. Or they're going to deal with us, but good.

I just don't think that tweaking the way that we do things will be enough. We're not going to do it by laying out the labs differently, or putting different slogans up on the walls, or trying schemes that promise to make the chemists 7.03% more productive or reduce downtime in the screening group by 0.65 assays/month. This is usually where people trot out that line about rearranging deck chairs on the Titanic, but the difference is, we don't have to sink. The longer things go on, though, the more I worry that incremental improvements aren't going to bail us out.

This is a bit of a reversal for me. I've said for several years that the low success rates in the industry mean that we don't necessarily have to make some huge advance. After all, if we made it up to just 80% failure in the clinic (from the roughly 90% we run now), that would double the number of drugs reaching the market. That's still true - but the problem is, I don't see any signs of that happening. If success rates are improving anywhere, up and down the whole process from target selection to Phase III, it's sure not obvious from the data we have.
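The arithmetic behind that doubling claim is worth a glance. Here's a back-of-the-envelope sketch; the 1,000-candidate cohort and the ~90% baseline failure rate are my assumptions for illustration, not figures from the post:

```python
# Back-of-the-envelope: how clinical failure rate maps to drugs reaching market.
# Assumes a hypothetical cohort of 1,000 candidates entering the clinic and the
# oft-quoted ~90% overall clinical failure rate as the starting point.
candidates = 1000

for failure_rate in (0.90, 0.80):
    approvals = candidates * (1 - failure_rate)
    print(f"{failure_rate:.0%} failure -> {approvals:.0f} drugs approved")

# Cutting failure from 90% to 80% doubles the output (100 -> 200 approvals):
# a seemingly modest improvement with an outsized effect.
```

The point of the sketch is just that survival, not failure, is the quantity that matters: shaving ten points off a 90% failure rate doubles the survivors.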

What worries me is that the time spent on less disruptive (but more bearable) solutions may be taking away from the time that needs to be spent on the bigger changes. I mean, honestly, raise your hands: who out there thinks that "Lean Six Sigma" is the answer to the drug industry's woes? Right. Not even all the consultants selling this stuff could get that one out with a straight face. "But it'll help!" comes the cry, "and it's better than doing nothing!". Well, in the short term, that may be true, although I'm not sure if there is a "short term" with some of these things. If it gives managers and investors the illusion that things are really being fixed, though, and if it takes mental and physical resources away from fixing them, then it's actually harmful.

What would it take to really fix things? Everyone knows - really, everyone does. Some combination of progress on the following questions would do just fine:

1. A clear-eyed look at target-based drug design, by which I mean, whether we should be doing it at all. More and more, I worry that it's been a terrible detour for the whole project of pharmaceutical research. There have been successes, of course, but man, look at the failures. And the number of tractable targets (never high) is lower than ever, as far as I can tell. If we're going to do it, though, we need. . .

2. The ability to work on harder target classes. The good ol' GPCRs and the easy-to-inhibit enzyme classes are still out there, and still have life in them, but the good ideas are getting thinner. But there are plenty of tougher mechanisms (chief among them protein-protein interactions) that have a lot of ideas running around looking for believable chemical matter. Making some across-the-board progress in those areas would be a huge help, but it would avail us not without. . .

3. Better selection of targets. Too many compounds fail in the clinic because of efficacy, which means that we didn't know enough about the biology going in. Most of our models of disease have severe limitations, and in many cases, we don't even know what some of those limitations are until we step into them. Maybe we can't know enough in many cases, so we need. . .

4. More meaningful clinical trials. And by that I mean, "for a given cost", because these multi-thousand-people multi-year things, which you need for areas like cardiovascular, Alzheimer's, osteoporosis, and so on, are killing us. We've got a terrible combination of huge potential markets in areas where we hardly know what we're doing. And that leads to gigantic, expensive failures. Could they somehow be less expensive? One way would be. . .

5. A better - and that means earlier - handle on human tox. I don't know how to do this one, either, but there are billions of dollars waiting for you if you can. Efficacy is the big killer in the late clinic these days, but that and toxicity put together account for a solid majority of the failures all the way through. (The rest are things like "Oops, maybe we should sell this program off" kinds of decisions).

There are plenty of others, but I think that improvements in those would fix things up just fine. Don't you? And maybe I'm just slow-witted, but I can't see how changing the way the desks face, or swapping out all the business cards for new titles, or realigning the therapeutic area teams - again - are going to accomplish any of it. At best, these things will make the current process run a bit better, which might buy us some more time before we have to confront the big stuff anyway. At worst, they'll accomplish nothing at all, but just give the illusion that something's being done.

To be fair, there are some initiatives around the industry that address these (and the other) huge problems. As I said, it's not like no one knows what they are. And to be sure, these really are difficult things to fix. Saying that you want to get a better early read on human tox in the clinic, the way I just did so blithely, is easy - actually doing something about it, or even finding a good place to start doing something about it, is brutally hard. But it's not going to be as brutal as what's been happening to us the last few years, or what we're headed for if we don't get cracking.

Comments (53) + TrackBacks (0) | Category: Business and Markets | Clinical Trials | Drug Development | Drug Industry History

November 18, 2010

Halaven: Holder of the Record

Posted by Derek

The FDA has approved Eisai's Halaven (eribulin) for late-stage breast cancer. As far as I can tell, this is now the most synthetically complex non-peptide drug ever marketed. Some news stories on it are saying that it's from a marine sponge, but that was just the beginning. This structure has to be made from the ground up; there's no way you're going to get enough material from marine sponges to market a drug.
[Image: structure of eribulin]
If anyone has another candidate, please note it in the comments - but I'll be surprised if there's anything that can surpass this one. There have been long syntheses in the industry before, of course, although we do everything we can to avoid them. Back when cortisone was first marketed by Merck, it had a brutal synthetic path for its time. (That's where a famous story about Max Tishler came from - one of the intermediates was a brightly colored dinitrophenylhydrazone. Tishler, it's said, came into the labs one day, saw some of the red solution spilled on the floor, and growled "That better be blood".) And Roche's Fuzeon is a very complicated synthesis indeed, but much of that is repetitive (and automated) peptide coupling. It took a lot of work to get right, but I'd still give the nod to eribulin. Can anyone beat it?

Comments (44) + TrackBacks (0) | Category: Cancer | Chemical News | Drug Industry History

November 11, 2010

Comment of the Day: Outsourcing and Architecture

Posted by Derek

From reader Jose, in the comments thread to the most recent post:

"Published I find it ironic that so many pharma sites who hired hotshot architects to design labspaces that foster as much personal interaction as possible, are now pumping the virtues of collaborations across 10 time zones."

Comments (1) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Life in the Drug Labs

November 9, 2010

Where Drugs Come From: By Country

Posted by Derek

The same paper I was summarizing the other day has some interesting data on the 1998-2007 drug approvals, broken down by country and region of origin. The first thing to note is that the distribution by country tracks, quite closely, the corresponding share of the worldwide drug market. The US discovered nearly half the drugs approved during that period, and accounts for roughly that amount of the market, for example. But there are two big exceptions: the UK and Switzerland, which both outperform for their size.

In case you're wondering, the league tables look like this: the US leads in the discovery of approved drugs, by a wide margin (118 out of the 252 drugs). Then Japan, the UK, and Germany are about equal, in the low 20s each. Switzerland is next at 13, France at 12, and then the rest of Europe put together adds up to 29. Canada and Australia together add up to nearly 7, and the entire rest of the world (including China and India) is about 6.5, with most of that being Israel.

But while the US may be producing the number of drugs you'd expect, a closer look shows that it's still a real outlier in several respects. The biggest one, to my mind, comes when you use that criterion for innovative structures or mechanisms versus extensions of what's already been worked on, as mentioned in the last post. Looking at it that way, almost all the major drug-discovering countries in the world were tilted towards less innovative medicines. The only exceptions are Switzerland, Canada and Australia, and (very much so) the US. The UK comes close, running nearly 50/50. Germany and Japan, though, especially stand out as the kings of follow-ons and me-toos, and the combined rest-of-Europe category is nearly as unbalanced.

What about that unmet-medical-need categorization? Looking at which drugs were submitted here in the US for priority review by the FDA (the proxy used across this whole analysis), the US-based drugs are again outliers, with more priority reviews than not. The only other places you see that are the smaller contributions from Australia and Canada, although Switzerland is nearly even. But in both these breakdowns (structure/mechanism and medical need) it's the biotech companies that appear to have taken the lead.

And here's the last outlier that appears to tie all these together: in almost every country that discovered new drugs during that ten-year period, the great majority came from pharma companies. The only exception is the US: 60% of our drugs have the fingerprints of biotech companies on them, either alone or from university-derived drug candidates. In very few other countries do biotech-derived drugs make much of a showing at all.

These trends show up in sales as well. Only in the US, UK, Switzerland, and Australia did the per-year-sales of novel therapies exceed the sales of the follow-ons. Germany and Japan tend to discover drugs with higher sales than average, but (as mentioned above) these are almost entirely followers of some sort.

Taken together, it appears that the US biotech industry has been the main driver of innovative drugs over the past ten years. I don't want to belittle the follow-on compounds, because they are useful. (As pointed out here before, it's hard for one of those compounds to be successful unless it really represents some sort of improvement over what's already available). At the same time, though, we can't run the whole industry by making better and better versions of what we already know.

And the contributions of universities - especially those in the US - have been strong, too. While university-derived drugs are a minority, they tend to be more innovative, probably because of their origins in basic research. There's no academic magic involved: very few, if any, universities try deliberately to run a profitable drug-discovery business - and if any start to, I confidently predict that we'll see more follow-on drugs from them as well.

Discussing the reasons for all this is another post in itself. But whatever you might think about the idea of American exceptionalism, it's alive in drug discovery.

Comments (33) + TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

November 4, 2010

Where Drugs Come From: The Numbers

Posted by Derek

We can now answer the question: "Where do new drugs come from?". Well, we can answer it for the period from 1998 on, at any rate. A new paper in Nature Reviews Drug Discovery takes on all 252 drugs approved by the FDA from then through 2007, and traces each of them back to its origins. What's more, each drug is evaluated by how much unmet medical need it addressed and how scientifically innovative it was. Clearly, there's going to be room for some argument in any study of this sort, but I'm very glad to have it, nonetheless. Credit where credit's due: who's been discovering the most drugs, and who's been discovering the best ones?

First, the raw numbers. In the 1998-2007 period, the 252 drugs break down as follows. Note that some drugs have been split up, with partial credit being assigned to more than one category. Overall, we have:

58% from pharmaceutical companies.
18% from biotech companies.
16% from universities, transferred to biotech.
8% from universities, transferred to pharma.

That sounds about right to me. And finally, I have some hard numbers to point to when I next run into someone who tries to tell me that all drugs are found with NIH grants, and that drug companies hardly do any research. (I know that this sounds like the most ridiculous strawman, but believe me, there are people - who regard themselves as intelligent and informed - who believe this passionately, in nearly those exact words). But fear not, this isn't going to be a relentless pharma-is-great post, because it's certainly not a pharma-is-great paper. Read on. . .

Now to the qualitative rankings. The author used FDA priority reviews as a proxy for unmet medical need, but the scientific innovation rating was done basically by hand, evaluating both a drug's mechanism of action and how much its structure differed from what had come before. Just under half (123) of the drugs during this period were in for priority review, and of those, we have:

46% from pharmaceutical companies.
30% from biotech companies.
23% from universities (transferred to either biotech or pharma).

That shows the biotech- and university-derived drugs outperforming when you look at things this way, which again seems about right to me. Note that this means that the majority of biotech submissions are priority reviews, and the majority of pharma drugs aren't. And now to innovation - 118 of the drugs during this period were considered to have scientific novelty (46%), and of those:

44% were from pharmaceutical companies.
25% were from biotech companies, and
31% were from universities (transferred to either biotech or pharma).

The university-derived drugs clearly outperform in this category. What this also means is that 65% of the pharma-derived drugs get classed as "not innovative", and that's worth another post all its own. Now, not all the university-derived drugs showed up as novel, either - but when you look closer, it turns out that the majority of the novel stuff from universities gets taken up by biotech companies rather than by pharma.
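That 65% figure can be recovered from the shares already quoted; a quick sketch (using the rounded percentages from above, so it lands at about 64%, with the gap down to rounding):

```python
# Recovering the "not innovative" share of pharma-derived drugs from the
# rounded figures quoted above (so the result is approximate).
total_approvals = 252
novel_drugs = 118                  # the ~46% judged scientifically novel

pharma_share_all = 0.58            # pharma share of all approvals
pharma_share_novel = 0.44          # pharma share of the novel ones

pharma_all = pharma_share_all * total_approvals    # ~146 drugs
pharma_novel = pharma_share_novel * novel_drugs    # ~52 drugs

not_innovative = 1 - pharma_novel / pharma_all
print(f"Pharma drugs classed as not innovative: ~{not_innovative:.0%}")
# -> roughly 64-65%, consistent with the figure in the text
```

Nothing deep here: it's just the ratio of pharma's slice of the novel drugs to pharma's slice of all approvals, subtracted from one.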

So why does this happen? This paper doesn't put it in one word, but I will: money. It turns out that the novel therapies are disproportionately orphan drugs (which makes sense), and although there are a few orphan-drug blockbusters, most of them have lower sales. And indeed, the university-to-pharma drugs tend to have much higher sales than the university-to-biotech ones. The bigger drug companies are (as you'd expect) evaluating compounds on the basis of their commercial potential, which means what they can add to their existing portfolio. On the other hand, if you have no portfolio (or have only a small one) then any commercial prospect is worth a look. One hundred million dollars a year in revenue would be welcome news for a small company's first drug to market, whereas Pfizer wouldn't even notice it.

So (in my opinion) it's not that the big companies are averse to novel therapies. You can see them taking whacks at new mechanisms and unmet needs, but they tend to do it in the large-market indications - which I think may well be more likely to fail. That's due to two effects: if there are existing therapies in a therapeutic area, they probably represent the low-hanging fruit, biologically speaking, making later approaches harder (and giving them a higher bar to clear). And if there's no decent therapy at all in some big field, that probably means that none of the obvious approaches have worked at all, and that it's just a flat-out hard place to make progress. In the first category, I'm thinking of HDL-raising ideas in cardiovascular and PPAR alpha-gamma ligands for diabetes. In the second, there are CB1 antagonists for obesity and gamma-secretase inhibitors in Alzheimer's (and there are plenty more examples in each class). These would all have done new things in big markets, and they've all gone down in expensive flames. Small companies have certainly taken their cuts at these things, too, but they're disproportionately represented in smaller indications.

There's more interesting stuff in this paper, particularly on what regions of the world produce drugs and why. I'll blog about it again, but this is plenty to discuss for now. The take-home so far? The great majority of drugs come from industry, but the industry is not homogeneous. Different companies are looking for different things, and the smaller ones are, other things being equal, more likely to push the envelope. More to come. . .

Comments (34) + TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

October 13, 2010

Well, Okay: The Ugliest Biopharma Sites?

Posted by Derek

In response to a reader query in the comments to yesterday's post on scenic research sites, I guess we should explore the other end of the scale. Nominations for the ugliest/most depressing research site are now open. This is physical surroundings, folks, not mental atmosphere, not that that can't get oppressive at times. We're looking for things that can be captured by a camera. There can be a connection, though - as Kingsley Amis put it ("Aberdarcy, Main Square"):

The journal of some bunch of architects
Named this the worst town center they could find
But how disparage what so well reflects
Permanent tendencies of heart and mind?

Looking back, Schering-Plough's old Bloomfield site was not exactly a sweeping vista of loveliness, but (to be fair) it did look better than some of the rest of the neighborhood, and the Home Depot and parking lot that replaced it during the 1990s have probably never made anyone's heart leap, either. Sticking with the N. New Jersey sites, some of which are going to be strong contenders in this category, it's unlikely that either Merck's buildings in Rahway or Roche's in Nutley have inspired much lyric poetry. Other nominations?

Note: in the spirit of that Amis reference, those who find themselves affected by nasty industrial landscapes might want to cheer along with John Betjeman's "Slough".

Comments (35) + TrackBacks (0) | Category: Drug Industry History

October 12, 2010

Most Picturesque Biopharma Location?

Posted by Derek

I'm sitting in my conference, listening to a guy from Emerald Biostructures, the former deCODE. They're in a site out on Bainbridge Island near Seattle - I've talked with several people from out there, and they all talk about riding the ferry out in the morning, etc. Now, Cambridge is OK, but it ain't Bainbridge Island as far as scenery goes. (However, as someone who used to live and work in northern NJ, I have to be happy with what I have!)

So here's my question: what's the most scenic, envy-inducing location for a biopharma research site? For these purposes, we'll rank by natural beauty - if there's some biotech that's leasing the top floors of the Chrysler Building, and I sure don't think that there is, we'll take them up as a separate category. Nominations?

Comments (53) + TrackBacks (0) | Category: Drug Industry History

Drug Discovery History

Posted by Derek

One of the speakers here yesterday recommended Walter Sneader's Drug Discovery: A History, which I haven't read. It looks good, though, for a look back on how we got here. He also showed some drug structure "family trees" from Sneader's earlier book, Drug Prototypes and Their Exploitation. I haven't seen a copy of that one in quite a while, and no wonder: the only copy shown on Amazon is used, for $500. Sheesh.

Comments (16) + TrackBacks (0) | Category: Book Recommendations | Drug Industry History

October 11, 2010

Princeton's New Chemistry Building

Posted by Derek

So I believe that they're moving into the new chemistry building at Princeton, which is a mighty glass whopper. In light of some of the past discussions we've had around here about lab design, I'd be interested in hearing from anyone with personal experience of the building. I can't really get a good sense of the layout from the pictures I've seen, just that there sure seem to be a lot of glass walls. And those aren't necessarily bad; it's the way the labs are put together and their relationship to the desks and offices.

Interestingly, much of the money for its construction seems to have come from the university's royalties on Alimta (pemetrexed), a folate anticancer drug discovered by Ted Taylor's group there in the early 1990s and developed by Lilly. (Taylor, a heterocyclic chemistry legend, worked on antifolates for many, many years, and contributed a huge amount to the field).

Here's more on the building, and here are some photos, and here are some architectural renderings, for what those are worth. Any comments from folks on the ground?

Comments (29) + TrackBacks (0) | Category: Cancer | Chemical News | Drug Industry History

September 24, 2010

Serendipity in Medicine

Posted by Derek

I came across this book the other day, and bought it on sight: Happy Accidents: Serendipity in Modern Medical Breakthroughs. From what I've read of it so far, it's a fine one-stop-reference for all sorts of medical discoveries where fortune favored the prepared mind (as Pasteur put it). There are drug discovery tales, surgical procedures, medical devices, and more.

Even the stories I thought I knew well turn out to have more details. Albert Hofmann's famous discovery of LSD, for example - what I hadn't known was that some of his colleagues didn't believe him when he said he'd taken only 0.25 mg of a compound and hallucinated violently for hours. (From what we now know, that was actually a heck of a dose!) So Ernst Rothlin, Sandoz's head of pharmacology, and two others tried it themselves. "Rothlin believed it then", Hofmann noted. Those days will never come again!

Comments (9) + TrackBacks (0) | Category: Book Recommendations | Drug Industry History

Avandia Goes Down: A Research Rant

Posted by Derek

So now Avandia (rosiglitazone) looks to be withdrawn from the market in Europe, and heavily restricted here in the US. This isn't much of a surprise, given all the cardiovascular worries about it in recent years, but hindsight. Oh, hindsight: all that time and effort put into PPAR ligands, back when rosi- and pioglitazone were still in development or in their first few years on the market. Everyone who worked on metabolic diseases took a swing at this area, it seems - I spent a few years on it myself.

And to what end? Only a few drugs in this class have ever made it to market, and all of them were developed before we even knew that they hit the PPAR receptors at all. The only two that are left are Actos (pioglitazone) and fenofibrate, which is a PPAR-alpha compound for lack of any other place to put it. Everything else: a sunk cost.

Allow me to rant for a bit, because I saw yet another argument the other day that the big drug companies don't do any research, no, it's all done at universities with public funds, at which point Big Pharma just swoops in and makes off with the swag. You know the stuff. Well, I would absolutely love to have the people who hold that view explain the PPAR story to me. I really would. The drug industry poured a huge amount of time and money into both basic and applied research in that area, and they did it for years. No one has to take my word for it - ask any of the academic leaders in the field if GSK or Merck, to name just two companies, managed to make any contributions.

We did it, naturally, because we expected to make a profit out of it in the end. The whole PPAR story looked like a great way to affect metabolic disorders and plenty of other diseases as well: cancer, inflammation, cardiovascular. That is, if we could just manage to understand what was going on. But we didn't. Once we all figured out that nuclear receptors were involved and got busy on drug discovery on that basis, we didn't help anyone with any diseases, and we didn't make any profits. Big piles of money actually disappeared during the process, never to be seen again. You could ask Merck about that, or GSK (post-rosiglitazone), or Lilly, or BMS, or Bayer, and plenty of other players large and small.

No one hears about these things. We're understandably reluctant to go on about our failures in this industry, but the side effect is that people who aren't paying attention end up thinking that we don't have any. Nothing could be more mistaken. And they aren't failures to come up with a catchy slogan or to find a good color scheme for the packaging - they're failures back at the actual science, where reality meets our ideas about it, and likely as not beats them down to the floor.

Honestly, I don't understand where these they-don't-do-any-research folks get off. Look at the patent filings. Look at the open literature. Where on earth do you think all those molecules come from, all those research programs to fill up all those servers? There are whole scientific journals that wouldn't exist if it weren't for a steady stream of failed research projects. Where's it all coming from?

Note: previous posts about PPAR drug discovery can be found here, here, and here. Previous posts (and rants) about research in the drug industry (and academia, and the price of it all) can be found here, here, here, here, here, here, here, here, and here.

Comments (49) + TrackBacks (0) | Category: Diabetes and Obesity | Drug Industry History | Regulatory Affairs | Why Everyone Loves Us

September 7, 2010

Columns Outside The Doors

Email This Entry

Posted by Derek

Nature Reviews Drug Discovery has an article on behavior in large drug organizations, which they put together after interviewing a long list of current and former R&D heads. Many of the recommendations are non-startling (find ways to reward people who are willing to take calculated risks, encourage independent thinking, all those things that are easy to write down and hard to implement). One part near the end caught my eye, though:

Companies should examine what we term the 'columns outside the doors' phenomenon and the subtle impact that this form of recognition might have on entrepreneurial behaviour. Smith described this phenomenon, which occurs across the world: as start-up companies become successful, they are relocated from humble laboratories to grander buildings with columns outside their doors. Interestingly, such edifices often violate the observed inverse square relationship between communication among scientists in laboratories and the distance between these laboratories. We offer this insight more as a provocative thought than as a firm recommendation.

And what that reminded me of was a very similar observation by C. Northcote Parkinson, of Parkinson's Law fame:

The outer door, in bronze and glass, is placed centrally in a symmetrical facade. Polished shoes glide quietly over shining rubber to the glittering and silent elevator. The overpoweringly cultured receptionist will murmur with carmine lips into an ice-blue receiver. She will wave you into a chromium armchair, consoling you with a dazzling smile for any slight but inevitable delay. Looking up from a glossy magazine, you will observe how the wide corridors radiate toward departments A, B, and C. From behind closed doors will come the subdued noise of an ordered activity. A minute later and you are ankle deep in the director’s carpet, plodding sturdily toward his distant, tidy desk. Hypnotized by the chief’s unwavering stare, cowed by the Matisse hung upon his wall, you will feel that you have found real efficiency at last.

In point of fact you will have discovered nothing of the kind. It is now known that a perfection of planned layout is achieved only by institutions on the point of collapse. . .

It is by no means certain that an influential reader of this chapter could prolong the life of a dying institution merely by depriving it of its streamlined headquarters. What he can do, however, with more confidence, is to prevent any organization strangling itself at birth. Examples abound of new institutions coming into existence with a full establishment of deputy directors, consultants and executives; all these coming together in a building specially designed for their purpose. And experience proves that such an institution will die. . .

Readers may have a few examples in mind from the drug industry. (The freshly constructed labs at Sterling, for example, completed around the time that Kodak was wiping the place out, are well spoken of). So, those of you in temporary quarters, jammed into buildings that don't quite work, may not be as bad off as you might think.

Comments (25) + TrackBacks (0) | Category: Drug Industry History | Who Discovers and Why

August 20, 2010

Going Hollywood

Email This Entry

Posted by Derek

A reader at one of the big pharma companies sends along this note:

. . .Over my 10 years or so of experience, I have seen a severe decline in risk tolerance at my company, and other large companies as well. When we put a project forward, we are told that either: (a) There are too many unknowns, the target is not well established, and therefore the risk in putting forward the large sums of money required for development are too high; or (b) There are too many other players in the market already and we would never be able to capture enough market share to justify the investment required to go forward. The band considered acceptable in the risk/benefit spectrum has become so narrow that it is like threading a needle with your feet.

I believe that this risk aversion is due to the escalating cost of developing new drugs. Big Pharma has invested such a tremendous amount of money into the infrastructure they deemed necessary to improve project turnaround time that any drug that goes forward has to be seen as a guaranteed blockbuster or it is considered a failure.

Film buff that I am, I use a Big Studio Production vs. Independent Film analogy when I discuss this with people outside the profession. For example, the film Avatar cost about 300 million to make. That means that if it brings in a mere 50 million in ticket sales, it is a catastrophic failure for the studio. Paranormal Activity, on the other hand, cost a few tens of thousands of dollars to make. Bringing in 50 million dollars in ticket sales would exceed the filmmakers' wildest dreams of avarice.

The end result is that the Big Studio has to KNOW that Avatar will bring in greater than 300 million dollars in ticket sales or it cannot take the risk. Therefore only tried and true box office magic directors like James Cameron are given the opportunity to work at that level. On the other end of the spectrum, an independent film distribution company is willing to take on a high-risk project like Paranormal Activity because even a failure will not destroy the company, and the rewards of success (even if moderate by Big Studio standards) are very high.

So, has Big Pharma doomed itself by massively inflating its drug discovery infrastructure in a misguided attempt to strengthen its pipeline (which was clearly a failure)? Or is it the regulatory agencies that require such vast and expensive trials that are the cause of this risk aversion? Is there a solution?

Well, the Hollywood analogy has been made before, but that's because it's a pretty good one. There are a few places where it breaks down, though. Some of these are unfavorable to the drug business:

1. Copyright. It lasts a lot longer than patent rights. I think that copyright has been extended to ridiculous levels in the US, but it's always been significantly longer than patent terms. So a studio has a much longer time to make its money back.

2. Regulatory affairs. There's no FDA approval process for a new film. You think it up, you get it shot and produced, you release it, and good luck to you. The drug industry hasn't worked that way since the 1930s.

3. Cycle time. It takes a lot longer to get a drug project through than it takes to get a movie done. And since time is most definitely money, this hurts.

4. Toxicity and liability. While it's true that a bad film might make you feel sick, it's not going to lead to anything actionable in court. Bad news on a new drug's side effects or performance most definitely will, though. And how.

5. Costs and benefits. A movie, from the consumer's standpoint, is a momentary purchase, made with a small amount of discretionary income. If it delivers, great - if not, no harm done, other than some wasted time and a bit of cash. Drugs, of course, are a much more high-stakes business, both in their pricing and in their utility. And they affect a person's health, which is about as fundamental a thing as you can mess with, and moves any transaction up into a whole new spotlight.

On the other hand, there are some problems that the studios face that we don't:

1. Limits of copyright. While copyright goes on next to forever, it's still easy to move a new film or book right up next to an existing work. Movies get ripped off much more quickly than drugs can be, and often more blatantly. That shorter cycle time cuts both ways.

2. Easier copying. You can find pirated versions of first-run movies pretty quickly - they're not always great, but there's a market. Lots of free stuff gets tossed around in digital formats, too. Drugs are much harder to truly copy, and an inferior version is much, much less attractive.

3. Fashion. An antihypertensive drug from thirty years ago doesn't wear funny-looking retro clothes or pick up a mobile phone the size of a loaf of bread. It lowers your blood pressure, same as always. There may be better ones around now, but it'll still work exactly as it did when it came on the market.

All that said, I think that the key point here is that there's no equivalent in the drug industry to indie filmmaking, which is too bad. Our fixed costs are much, much higher due to the field we operate in - human health and the regulations around it. My question is - is there any way to bring these down? Of course, that's what everyone in the business has been asking for some time now.

Because if we can't, we're going to see even more of the behavior that my correspondent noted. Risk aversion, I might add, can be fatal to research-driven companies. Our whole business is founded on taking risks, and if the costs are pushing us to deny that, we have a huge conflict right at the center of the whole enterprise. . .

And yeah, I realize that this doesn't help too much with the "less depressing" promise I made for this week!

Comments (36) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Drug Prices | Regulatory Affairs

August 19, 2010

Not The End. Not At All

Email This Entry

Posted by Derek

All right, given the way things have been going the last few years, it's easy to wonder if there's a place for medicinal chemistry at all - even if there's a place for drug discovery. There is. People are continuing to get sick, with diseases that no one can do much about, and the world would be a much better place if that weren't so. I also believe that such treatments are worth money, and that the people who devote their careers to finding them can earn a good living by doing so.

So why are fewer of us doing so? Because - and it needs no ghost come from the grave to tell us this - we're not finding as many of them as we need to, and it's costing us too much when we do. That's not sustainable, but drug discovery itself has to continue. We can't go on, we'll go on. But what we have to do is find new ways of going on.

I refuse to believe that those ways aren't out there somewhere. We do what we do so poorly, because we still understand so little - I can't accept that this is the best we're capable of. It won't take miracles, either. Think of the clinical failure rates, hovering around 90% in most therapeutic areas. If we only landed flat on our faces eight out of ten times in the clinic, we'd double the number of compounds that get through.
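The arithmetic behind that last point is worth making explicit. A back-of-the-envelope sketch (illustrative numbers only, not real pipeline data):

```python
# Cutting the clinical failure rate from 90% to 80% doubles the number
# of compounds that make it through - a modest-sounding improvement
# with a large effect on output.
def survivors(candidates, failure_pct):
    """Candidates that make it through the clinic at a given failure rate."""
    return candidates * (100 - failure_pct) // 100

today = survivors(100, 90)    # ~90% clinical failure rate now
better = survivors(100, 80)   # landing on our faces "eight out of ten times"

print(today, better)  # 10 20 -- twice as many drugs from the same pipeline
```

That leverage is why even partial improvements in predicting clinical failure would be so valuable.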

I think that we're in the worst stage of knowledge about disease and the human body. We have enough tools to get partway into the details, but not enough to see our way through to real understanding. Earlier ages were ignorant, and (for the most part) they knew it. (Lewis Thomas's The Youngest Science has a good section on medicine as his own father practiced it - he was completely honest about how little he could do for most of his patients and how much he depended on placebos, time, and hope). Now, thanks to advances in molecular and cell biology, we've begun to open a lot of locked boxes, only to find inside them. . .more locked boxes. (Sorry about all these links. For some reason literature is running away with me this morning). We get excited (justifiably!) at learning things that we never knew, uncovering systems that we never suspected, but we've been guilty (everyone) of sometimes thinking that the real, final answers must be in view. They aren't, not yet.

Pick any therapeutic area you want, and you can see this going on. Cancer: it starts out as dozens of dread diseases, unrelated. Then someone realizes that in each case, it's unregulated cell growth that's going on. The key! Well, no - because we have no idea of how unregulated cell growth occurs, nor how to shut it off. Closer inspection, years and years of closer inspection, yields an astonishing array of details. Growth factor signaling, bypassed cell-death switches and checkpoints, changes in mitotic pathways, on and on. Along the way, many of these look like The Answer, or at least one of The Answers. Think about how angiogenesis came on as a therapeutic idea - Judah Folkman really helped get across the idea that some tumors cause blood vessels to grow to them, which really was a startling thought at the time. The key! Well. . .it hasn't worked out that way, or not yet. Not all tumors do this, and not all of them totally depend on it even when they do, and the ones that do turn out to have a whole list of ways that they can do it, and then they can mutate, and then. . .

There, that's where we are right now. Right in the middle of the forest. We know enough to know that we're surrounded by trees, we know the names of many of them, we've learned a lot - but we haven't learned enough yet to come out the other side. But here's the part that gives me hope: we keep on being surprised. Huge, important things keep on being found, which to me means that there are more of them out there that we haven't found yet. RNA! There's one that's happened well in the middle of my own professional career. When I started in this business, no one had any clue about RNA interference, double-stranded RNAs, microRNAs, none of it. All of it was going on without anyone being aware, intricate and important stuff, and we never knew. How many more things like that are waiting to be uncovered?

Plenty, is my guess. We keep pulling back veils, but the number of veils is finite. We're still ignorant, but we're not going to remain ignorant. We will eventually know the truth, and it'll do what the truth has long been promised to do: make us free.

But we don't have to wait until we know everything. As I said above, just knowing a bit more than we do now has to help. A little more ability to understand toxicology, a better plan to attack protein-protein targets, more confidence in what nuclear receptors can do, another insight into bacterial virulence, viral entry, cell-cycle signaling, glucose transport, lipid handling, serotonin second messengers, bone remodeling, protein phosphorylation, immune response, GPCR mechanisms, transcription factors, cellular senescence, ion channels. . .I could go on. So could you. The list is long, really long, and any good news anywhere on it gives us something else to work on, and something new to try.

So this is a rough time in the drug industry. It really is. But these aren't death throes. They're growing pains. We just have to survive them, either way.

Comments (70) + TrackBacks (0) | Category: Drug Industry History | Who Discovers and Why

July 29, 2010

Open-Source Pharmaceutical Babble

Email This Entry

Posted by Derek

The topic of "open-source" drug discovery is an interesting (and potentially important) one. It just keeps coming up, but one of the problems with it is that it presents a terrible opportunity for vagueness. Too much of what I've read on the subject is hand-waving.

I'm afraid that the key parts of this column fall into the same category. It's by Jackie Hunter, formerly of GlaxoSmithKline. The lead-up parts of the piece are fine, where she lays out some of the problems facing the industry. But then we get this vision:

In the future, the most effective pharmaceutical companies will be hubs at the center of a network of collaborators and suppliers, focusing internally on their core competencies, which might include medicinal chemistry, execution of clinical trials, or sales and marketing. They will facilitate interactions across their network to stimulate the development of innovation ecosystems.

The resulting opportunities to expand beyond traditional products and markets will enable pharmaceutical companies to evolve into companies that offer a range of health-care solutions. These will include not only prescription medicines, but also diagnostics, branded generics, and technologies that support personalized medicine, as well as so-called “neutraceuticals” and other “wellness options.”

And that's it; that's the payoff. We'll all just hop to it, enabling and facilitating, expanding and evolving, stimulating and focusing. None of those are concrete verbs suggesting real courses of action. Whenever you see someone slip into that sort of talk, you can be sure that (at the very least) they have difficulty communicating whatever specific ideas they have. Or (more likely) that they don't have any specific ideas to tell you about at all.

Not that I can blame Jackie Hunter. I don't have a lot of good suggestions at the moment, either. But if you read that column closely, it says (on the one hand) that the problems of the industry are so large that single drug companies probably can't deal with them. Fine. Then it goes on to say that dealing with them will probably reduce the size of drug company R&D organizations. The connection between those two ideas is presumably hidden in that ball of fuzz I quoted above.

Comments (36) + TrackBacks (0) | Category: Drug Development | Drug Industry History

July 12, 2010

Natural Products: Not the Best Fit for Drugs?

Email This Entry

Posted by Derek

Stuart Schreiber and Paul Clemons of the Broad Institute have a provocative paper out in JACS on natural products and their use in drug discovery. As many know, a good part of the current pharmacopeia is derived from natural product lead structures, and in many other cases a natural product was essential for identifying a target or pathway for a completely synthetic compound.

But are there as many of these cases as we think - or as there should be? This latest paper takes a large set of interaction data and tries to map natural product activities on to it. It's already known that there are genes all up and down the "interactome" spectrum, as you'd expect, with some that seem to be at the crossroads of dozens (or hundreds) of pathways, and others that are way out on the edges. And it's been found that disease targets tend to fall in the middle of this range, and not so much in the too-isolated or too-essential zones on either side.

That seems reasonable. But then comes the natural product activity overlay, and there the arguing can start. Natural products, the paper claims, tend to target the high-interaction essential targets at the expense of more specific disease targets. They're under-represented in the few-interaction group, and very much over-represented in the higher ones. That actually seems reasonable, too - most natural products are produced by organisms as essentially chemical warfare, and the harder they can hit, the better. Looking at subsets of the natural product list (only the most potent compounds, for example) did not make this effect vanish. Meanwhile, if you look at the list of approved drugs (minus the natural products on it), that group fits the middle-range interactivity group much more closely.

But what does that mean for natural products as drug leads? There would appear to be a mismatch here, with a higher likelihood of off-target effects and toxicity among a pure natural-product set. (The mismatch, to be more accurate, is between what we want exogenous chemicals to do versus what evolution has selected them to do). The paper ends up pointing out that additional sources of small molecules look to be needed outside of natural products themselves.

I'll agree with that. But I suspect that I don't agree with the implications. Schreiber has long been a proponent of "diversity-oriented synthesis" (DOS), and would seem to be making a case for it here without ever mentioning it by name. DOS is the idea of making large collections of very structurally diverse molecules, with an eye to covering as much chemical space as possible. My worries (expressed in that link above) are that the space it covers doesn't necessarily overlap very well with the space occupied by potential drugs, and that chemical space is too humungously roomy in any event to be attacked very well by brute force.

Schreiber made a pitch a few years ago for the technique, that time at the expense of small-molecule compound collections. He said that these were too simple to hit many useful targets, and now he's taking care of the natural product end of the spectrum by pointing out that they hit too many. DOS libraries, then, must be just in the right range? I wish he'd included data on some of them in this latest paper; it would be worthwhile to see where they fell in the interaction list.

Comments (58) + TrackBacks (0) | Category: Drug Assays | Drug Industry History | Toxicology

July 1, 2010

GSK's Biotechy World

Email This Entry

Posted by Derek

The Wall Street Journal is out today with a big story on GlaxoSmithKline's current research structure. The diagnosis seems pretty accurate:

Glaxo's experiment is a response to one of the industry's most pressing problems: the failure of gigantic research staffs, formed through a series of mega mergers, to discover new drugs. The mergers helped companies amass potent sales-and-marketing arms, but saddled their R&D with innovation-stifling bureaucracy. . .

The company's current strategy is to break things down into even smaller teams (often with their own names and logos) and to try to apply small-company incentives to them. That goes for both the positive and negative incentives:

The scientists in Glaxo's new biotech-esque groups know the clock is ticking. Called discovery performance units, or DPUs, the groups are about halfway through the three-year budgets they were given in 2008. Glaxo has made it clear that if the team members don't produce, they could get laid off. . .(the company also) says it's trying to get closer to the financial rewards of biotech. In some cases, it is setting aside "a pool of money" for scientists involved in a certain project. . .each time their experimental drug clears a certain hurdle, they get part of the money. . .

Of course, as the article also makes clear, the company has been through supposed newer-and-better re-orgs before. And that included schemes to break the company's research into more independent units. Those "Centers of Excellence in Drug Discovery" were supposed to be the last word eight or ten years ago, but apparently that didn't quite work out. The current philosophy seems to be that the idea didn't go far enough.

True or not? History doesn't give a person much reason for optimism when a large company says that it's going to get more nimble and less bureaucratic. You can make a very good living printing up the posters and running the training seminars about that stuff, but actually getting it to work has been. . .well, has anyone gotten it to work? Andrew Witty, the company's CEO, says in the article that he doesn't see any contradiction in having "hugely successful entrepreneurial innovation" inside a big company, but real examples of that are thin on the ground - especially compared to the number of examples of such innovation being fought to the ground when it attempts to spring up.

That's not to say that this approach can't improve things at GSK. I think it's bound to be a good thing to turn people loose to make more of their own decisions, without feeling as if there's someone hovering over their shoulder all the time. But I don't know if it's going to be the revolution that they're hoping for (or the one that they might need).

Comments (62) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Who Discovers and Why

June 18, 2010

The Economic Impact of the Genomic Revolution's Failure

Email This Entry

Posted by Derek

Here's something that oddly ties together the last couple of days of posting around here: the failure of the Human Genome Project to jump-start drug discovery as the "most significant economic event of the past decade". (Thanks to Jonathan Gitlin for the tip).

I have to say, I hadn't thought of it in those terms. My first thought is that this is a negative event, something that didn't happen, so it's pointless to speculate about what might have been. But the author, Mike Mandel, is also talking about the opportunity cost of all the genomics frenzy, which is a real consideration. That time and money could have been spent somewhere else, doing something more useful. Where would we be then?

I've wondered about that myself, having seen first-hand what happened. Many companies really did cut a deep notch in their development pipelines during that era, abandoning (to one degree or another) their traditional approaches while piling resources into the genomics gold rush. (The current economic environment is cutting a similar gouge into the list of start-up companies - many of the ones that "normally" should have formed during the last couple of years just haven't happened).

Mandel's larger point, though, is something I'm not so sure about. He's talking about all the manufacturing jobs that haven't been created by the basic research, holding that these are the ones with real economic effect. But even if the genomics era had been wildly successful, we wouldn't have seen manufacturing jobs picking up from it for some years - 2008, maybe? His charts, which tend to cover from the early 1990s to date, are reflecting other issues entirely.

Then the talk turns to balance of trade:

Now let’s turn to trade. China, India, and the rest of the developing countries sell the U.S. an increasingly diverse array of goods and services. What does the U.S. provide in return? There’s the usual list of suspects, such as commercial aircraft (which is increasingly drawing on parts made outside of the country). But they are not enough to avoid a huge trade deficit, even now.

The logical candidate for the next wave of U.S. exports should have been biotech products and knowledge. The U.S. is the acknowledged world leader; the research is expensive and lengthy; the production processes are complicated, delicate, require skilled technicians, and cannot be easily offshored. And the category–treatments to deal with major medical problems–is something that everyone wants.

But what happened? Without compelling new biotech products, the big pharma companies were “me-tooed” to death. In fact, pharma trade went from roughly balanced to a big deficit.

That's illustrated by another chart from 1994 on. But what it's showing isn't what he thinks it's showing. It illustrates the move to less costly manufacturing sites, which would have taken place whether or not genomics delivered. The only mitigating factor is that any big protein-based biologics would have had a better chance of being produced domestically, but production of all the small-molecule drugs that might have come out of the genomics frenzy would have migrated offshore just like everything else.

And what if the genomics revolution had delivered? We'd have a lot more drugs on the market, none of which would be selling cheaply, you can be sure - and there would be even more anxiety over the amount of our GDP going to health care. (Never mind that some of these drugs would, one hopes, be keeping people from going into even more expensive therapies later - people don't seem to pay attention to that, either). So overall, I take the point about opportunity cost. But his broader economic implications, at least as regards the US economy alone, don't seem to me to hold up.

Comments (26) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

June 14, 2010

Looking Back at the Genome

Email This Entry

Posted by Derek

The New York Times reminded its readers the other day about something that people in medical research have known for quite some time: the human genome has not exactly turned out to be an open book full of readily usable data about human diseases.

It does make a person cringe to go back and read the press releases and speeches that were made back when the genome was first announced. How about Bill Clinton's statement that the genome sequence would "revolutionize the diagnosis, prevention and treatment of most, if not all, human diseases"? Or Francis Collins, predicting "a complete transformation in therapeutic medicine"? He's got about five more years on that one, but I'm not holding my breath.

As I've written here before, though, there was already a deep sense of nervousness among the people searching the sequences for disease clues - not to mention the nervousness among the people who had given them huge piles of money to do so. When the total estimated number of genes came out far lower than most people expected, there was a collective "Hmmm. . ." across the field. That number meant that the simpler possibilities for gene sequence-protein-disease linkage could already be ruled out - complicated things were clearly going on in transcription, translation, and further downstream.

That certainly doesn't mean that genomic sequencing has been a waste of time. It's been a tremendous boon, actually, because this complexity was out there waiting to be uncovered and understood. It's no one's fault that it hasn't led to speedy drug discovery; biology isn't set up for our convenience. And the further improvements that we've seen in sequencing speed and accuracy are going to be crucial if we're to have any chance of figuring out what's going on.

Comments (25) + TrackBacks (0) | Category: Drug Industry History

June 11, 2010

Alzheimer's: Extracting Data From Failed Trials

Email This Entry

Posted by Derek

It's no secret that Alzheimer's disease has been a disastrous area in which to do drug discovery. Every large drug company has had failures in the area, and many smaller ones have gone out of business trying their hands. (I had several years in the field myself earlier in my career, trying three different approaches, none of which panned out in the end).

Now the Coalition Against Major Diseases has announced an open-access database of clinical trial results from failed drug candidates in the area. J&J, GlaxoSmithKline, Abbott, SanofiAventis, and AstraZeneca have contributed data from 11 failed drug candidates, and more look to be on the way from other companies. I hope that Eli Lilly, Merck (their own compounds and those from Schering-Plough), and Pfizer all join in on this - right off the top of my head, I can think of failed drugs from all of them, and I know that there are plenty more out there. (Pfizer seems to have dodged a question about whether or not they're participating, to judge from that Wall Street Journal article linked to above).

It'll be difficult to comb through all this to extract something useful, of course. But without sharing the data on these compounds, it would be utterly impossible for anything to come out of their failures. I think this is an excellent idea, and well worth extending to other therapeutic areas.

Comments (12) + TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials | Drug Industry History

June 8, 2010

The Atlantic Monthly on Drug Pipelines

Email This Entry

Posted by Derek

Here's a good piece from Megan McArdle on the pipeline problem in the drug industry. It'll be familiar ground to many readers of this blog (and not just because I was a source for the piece), but it's good to get the word out on these things to as wide an audience as possible.

Comments (15) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Press Coverage

May 12, 2010

Insulin Degrading Enzyme's Turn in the Spotlight

Email This Entry

Posted by Derek

Well, you have to go back to the early days of this blog to find it, but I wrote here about insulin degrading enzyme. The name tells you some of what you need to know about it, for sure - it degrades insulin, so if you could stop that, insulin would probably hang around longer in the bloodstream. There's more to it - it's also been thought to be a way that insulin might be broken up inside cells as well, for one thing - but that's the elevator pitch for it.

And it has indeed been a diabetes target through the years. No one's come up with any really good inhibitors of it, although in vitro studies have been done with things like bacitracin and thioesters. Now a large multicenter academic team, led by the Mayo people from Florida, reports some compounds that seem quite potent. (It's worth noting that these inhibitors are somewhat old news if you follow the patent literature).

The structures are not lovely, but there are a lot worse compounds in the protease inhibitor world. One thing that every experienced medicinal chemist will quickly notice about these is that they're hydroxamic acids. Those are compounds with a very spotty past in the business (although there is vorinostat (SAHA) out there on the market). Hydroxamates can be very potent inhibitors of metalloenzymes, and every time you target one they're always out there as a temptation, but the ugly clinical failures in that structural class tend to give people pause. Or was it just the targets (chiefly matrix metalloproteases) that the hydroxamates were aimed at? Have they been unfairly maligned? The arguments continue, and these compounds are unlikely to settle them.

Unless, of course, they go to the clinic and make a big success. I wonder if that's going to happen, though - the "go to the clinic" part, that is. This new paper is an interesting piece of work, and has a lot to say about the strange workings of IDE (which go a ways to explaining why there hasn't been much success targeting it - I was once involved briefly in the area myself). But it has nothing to say about whether these compounds have any exposure in any sort of animal, and that's the beginning of the really tricky part. These new compounds, in addition to being hydroxamic acids, are retro-inverso peptides. That's an old trick in the protease inhibitor world where you flip a natural sequence around and use the unnatural (D) amino acids to build it as well. Off the top of my head, I don't know of any retro-inverso compounds that have actually made it to market, although I'd be glad to be corrected on this.

The other complication will be IDE itself. One reason that no company has made a massive push on the target is that the enzyme is known to be multifunctional, as in "doing totally unrelated things all over the darn place", which makes one nervous about an inhibitor. Foremost among the off-target effects would be the beta-amyloid story (which is what led me to write about the enzyme back in 2003). IDE looks as if it could be one clearance mechanism for beta-amyloid (and perhaps for other easily-aggregating peptides), which has prompted people to think of actually trying to enhance its activity as an Alzheimer's therapy. One group that's tried this is, in fact, the same team that's now reporting the inhibitors (see this paper from 2009).

So I think these compounds will prove useful to figure out what IDE is doing, and that's a worthwhile goal. But I don't see them as drugs, no matter what the press release might say.

Comments (8) + TrackBacks (0) | Category: Diabetes and Obesity | Drug Industry History

May 10, 2010

Malcolm Gladwell on Synta and Oncology

Email This Entry

Posted by Derek

The folks at the New Yorker sent along this link to a new article by Malcolm Gladwell about Synta and their attempts to get elesclomol (STA-4783) to work as a melanoma therapy. (If you don't know how this one turns out, you might want to read the article before clicking on that second link).

Update: didn't realize that the full article was subscriber-only at the New Yorker site. Not sure if there's anything to be done about that, but I've dropped them a line. . .

Gladwell (an occasional reader of this blog) often takes some hits from experts in the fields he writes about, but after reading the article this morning, I think he's done a fine job of showing what drug discovery is like. His division between screening and rational drug design is a bit too sharply defined, to my eyes, but he gets all the important stuff right - namely, just how hard a business this is, how much luck is involved, and how much we don't know. Those are messages that a lot of people need to hear, and I hope that this piece helps get them out to a wide audience.

Comments (8) + TrackBacks (0) | Category: Cancer | Drug Development | Drug Industry History | Press Coverage

May 3, 2010

The Collapse of Complexity

Email This Entry

Posted by Derek

Here's something a bit out of our field, but it might be disturbingly relevant to the drug industry's current situation: Clay Shirky on the collapse of complex societies. He's drawing on Joseph Tainter's archaeological study of that name:

The answer he arrived at was that (these societies) hadn’t collapsed despite their cultural sophistication, they’d collapsed because of it. Subject to violent compression, Tainter’s story goes like this: a group of people, through a combination of social organization and environmental luck, finds itself with a surplus of resources. Managing this surplus makes society more complex—agriculture rewards mathematical skill, granaries require new forms of construction, and so on.

Early on, the marginal value of this complexity is positive—each additional bit of complexity more than pays for itself in improved output—but over time, the law of diminishing returns reduces the marginal value, until it disappears completely. At this point, any additional complexity is pure cost.

Tainter’s thesis is that when society’s elite members add one layer of bureaucracy or demand one tribute too many, they end up extracting all the value from their environment it is possible to extract and then some.

Readers who work in the industry - particularly those at the larger companies - will probably have just shivered a bit. To my mind, that's an eerily precise summation of what's gone wrong in some R&D organizations. Shirky talks about internet hosting companies and the current dilemmas of the large media organizations, but there's plenty of room to include the drug industry in there, too. Look at the way research has been conducted over the past thirty years or so: we keep adding layers of complexity, basically because we have to - more and more assays and screens. It used to be (so I hear) all about dosing animals. Then you had cell cultures, then cloned receptors and enzymes came along (we're heading out of the 1970s and well into the 1980s now, if you're keeping score at home). Outside of target assays, the Ames test came along in the 1970s, and there were liver microsomes and isolated P450 enzymes for stability, Caco-2 cells for permeability, hERG assays to look out for cardiac tox, et cetera. You can do the same thing for the development of animal models - normal rodents, then natural inbred mutations, then knockouts, humanized transgenics. . .you get the picture.

As I say, we have very little choice but to get more complicated, because our knowledge of biology keeps expanding. But while this is going on, everyone keeps thinking that all this new knowledge is (at some point) going to start making things easier - a future era known, informally, as "when we really start figuring all this stuff out". It hasn't happened yet. If you're someone like Ray Kurzweil, you expect this pretty soon. I don't, although I hold out eventual long-term hope.

Shirky's message for the media companies is that their high-value-added lifestyles are being fatally undermined. We're not facing the same situation in this industry - there's no equivalent of free YouTube stuff eating our lunch, and I'm not expecting anything in that line for a long time, if ever. But the complexity-piling-on-complexity problem is real for us, nonetheless. If the burden gets too heavy, we could be in trouble even without someone coming along to push us over.

Comments (35) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

April 21, 2010

Two Bad Ideas

Email This Entry

Posted by Derek

I note that one of the biggest topics in the "What To Tell the C&E News People" comment thread is chemical employment. And it should be - there are far fewer med-chem jobs out there today than there were five years ago, and it's getting harder and harder to imagine things coming back to the way that they once were.

In fact, I don't see any way that they can, at least if by "the way they once were", you mean the number of well-paid US-based positions at large pharma companies. I hate to sound like this, but I think there's been too much of a shift in recent years for anything to undo it. Costs have gone up, drug-development success rates have (at best) not increased, and there are now cheaper ways to get a good amount of the work done that used to cost more. Which of these things is going to change back, and how?

We can argue about how effective some outsourcing is, but it's definitely not worthless. And we can certainly argue about whether companies have cut too far back in the current downturn. But (and I've said this before around here), what I really have trouble with are two solutions that get proposed every time this topic comes up.

The first of these is "Cut back on work visas". Well, that's the milder form of it - this point of view has a way of slipping down to "Ship 'em all back" sometimes. Either way, what people who advocate this seem to believe is that companies will gladly hire American-based scientists if they're just, you know, forced to. I can't see it. And as I've said here before, I'm not particularly focused on bettering the lives of American scientists as opposed to those coming in from other countries. Many of them become Americans themselves, and I'm glad to have them. We can use all the intelligent, resourceful, hard-working people here that we can get.

The second solution that gets aired out is "Form a Union!" And I have to say that I have even less patience for this one. I'm not a big union fan in general, actually, and I think that in this case it's an even worse idea than usual. What leverage do employees have? Here's the problem that sinks many such ideas: the US is not an island nation, in any sense of the word. If you force the cost of doing business here up even higher, the jobs will leave even faster. There are now places for them to go, which is the biggest change of the last ten or twenty years. Those places are often not quite as good in some ways (for now), but they're a lot less expensive, and that's where the money will flow if the deal looks reasonable. The only thing that will slow this down is if things get cheaper here (which isn't too likely), or if they get more expensive over there (which is quite likely indeed, actually - a topic for another day).

So to me, both of these proposals boil down to forcing companies to pay more for what they can get elsewhere. In my opinion, they're both unworkable and likely to make the situation deteriorate even faster than it is already.

Update: fixed typos, I think. Views remain the same! As to the "scientist shortage" talk that keeps popping up, I agree with the people who are ticked off about that one. We clearly have no great shortage of scientists at the moment in the fields that I have personal experience of. But this is (or ideally should be) something of a separate topic from immigration, and will be the topic of a future post. . .

Comments (116) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

April 14, 2010

Colchicine's Price Goes Through the Roof

Email This Entry

Posted by Derek

We all hear about the new drugs that have just been approved, and we all keep track of the drugs that are coming off patent. But what about the really old ones, the drugs that made it to the market long before today's regulatory framework? There have long been medicines that are generally recognized as reasonably safe and effective, but have never been through much of the modern process.

The FDA has, for the last few years, been trying to catch up on these things, and has offered exclusivity to any manufacturers who are willing to run clinical trials on older medicines. But this hasn't always worked out the way that it was intended - witness the case of colchicine, a well-known natural product drug that's used for some inflammatory diseases (and used to be a chemotherapy agent, too). The Wall Street Journal has a good story on this.

URL Pharma, a generic manufacturer, took the time and trouble to get fresh data on colchicine for gout attacks, and was granted a three-year marketing exclusivity period. So far, so good - but they then turned around and ran the price up by a factor of fifteen. They also filed suit against other small companies that were selling colchicine in the generic market, with the result that other domestic sources of the drug might dry up (four of the other companies are fighting back in court).

So is this the advent of evidence-based medicine, coming to an area that had little of it before, and therefore a good thing? Is it an abuse of the system by a company that saw an opportunity to suddenly acquire pricing power? Is it just what the FDA should have expected, given that three years of marketing rights have to make up for the cost of the clinical work, with the profits likely to disappear immediately afterwards? I think it's going to be hard to have it both ways. If you expect companies to go back and fill in the clinical profile of older drugs, you do need to give them some incentive to do it. But then what's to keep them from pounding that incentive in good and hard, as seems to be happening here?

I'm not sure how to split that difference, especially not with any general rule, because each case will probably be different. The new clinical trials might, in fact, uncover something really useful that was previously unknown - or they might just confirm that the way the drug was being dosed was, in fact, just the way it should be dosed. One of those seems more deserving of compensation than the other, but there's no way of knowing which result you're going to get a priori. I have an aversion to telling a company how much it can charge for a drug, but it's not like URL Pharma discovered colchicine, or had to do any of the risky early-stage work on it. I can justify some pricing moves (although not all of them) by companies that are doing discovery research, because so much of that doesn't lead to anything marketable. (Take, for example, virtually everything I've worked on my whole career). But a generic company that's coming in to dot the Is and cross the Ts on the FDA paperwork is something else again.

Perhaps if the FDA really feels that backfilling the regulatory work on drugs that no one owns in particular is important enough, they should fund the work themselves. But that opens up issues of its own, too.

Comments (51) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Drug Prices | Why Everyone Loves Us

April 13, 2010

Too Many Consulting Jobs Work This Way

Email This Entry

Posted by Derek

The Tech, the MIT newspaper, has a very interesting account from one of its recent graduates about a stint he did with the Boston Consulting Group in Dubai. It's partly a look at how different the real world is from taking a walloping course load at MIT (answer: quite different indeed). But it's also a look at how all too many consulting firms end up doing their work. This is only partly the fault of the consultants:

Despite having no work or research experience outside of MIT, I was regularly advertised to clients as an expert with seemingly years of topical experience relevant to the case. We were so good at rephrasing our credentials that even I was surprised to find in each of my cases, even my very first case, that I was the most senior consultant on the team.

I quickly found out why so little had been invested in developing my Excel-craft. Analytical skills were overrated, for the simple reason that clients usually didn’t know why they had hired us. They sent us vague requests for proposal, we returned vague case proposals, and by the time we were hired, no one was the wiser as to why exactly we were there.

I got the feeling that our clients were simply trying to mimic successful businesses, and that as consultants, our earnings came from having the luck of being included in an elaborate cargo-cult ritual. In any case it fell to us to decide for ourselves what question we had been hired to answer, and as a matter of convenience, we elected to answer questions that we had already answered in the course of previous cases. . .

I can't imagine that the BCG people are very pleased about this series of articles, but I don't think there's much they can do about it. As the author details, he walked away from an end-of-employment payment by refusing to sign a nondisclosure agreement. And not to pick on BCG particularly - because there are plenty of other people in this game - I note that they do advise the pharmaceutical industry from time to time. We are, fortunately, not quite Dubai. But here's a description (their own) of some of their work, and I'll leave it up to the reader to decide if it's an inspiring story of teamwork or an example of cargo-cult self-delusion.

At the onset, the BCG team helped provide structure and facilitation for our client's deep content knowledge, then helped them focus on the most important issues. We worked together to develop critical insights about the current and potential marketplace and the roadmap to success, then created and launched an execution plan that rallied the organization around that roadmap and started them down that path.

You might also want to speculate about how many times those phrases have been cut and pasted before.

Update: fixed with link to the story!

Comments (49) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

March 26, 2010

Diminishing Returns

Email This Entry

Posted by Derek

As we slowly attack the major causes of disease, and necessarily pick the low-lying fruit in doing so, it can get harder and harder to see the effects of the latest advances. Nowhere, I'd say, is that more true than for cardiovascular disease, which is now arguably the most well-served therapeutic area of them all. It's not that there aren't things to do (or do better) - it's that showing the benefit of them is no easy task.

Robert Fortner has a good overview of the problem here. The size of the trials needed in this area is daunting, but they have to be that size to show the incremental improvements that we're down to now. He also talks about oncology, but that one's a bit of a different situation, to my mind. There's plenty of room to show a dramatic effect in a lot of oncology trials, it's just that we don't know how to cause one. In cardiovascular, on the other hand, the space in which to show something amazing has flat-out decreased. This is a feature, by the way, not a bug. . .

Comments (40) + TrackBacks (0) | Category: Cancer | Cardiovascular Disease | Clinical Trials | Drug Industry History

Privileged Scaffolds? How About Unprivileged Ones?

Email This Entry

Posted by Derek

The discussion of "privileged scaffolds" in drugs here the other day got me to thinking. A colleague of mine mentioned that there may well be structures that don't hit nearly as often as you'd think. The example that came to his mind was homopiperazine, and he might have a point; I've never had much luck with those myself. That's not much of a data set, though, so I wanted to throw the question out for discussion.

We'll have to be careful to account for Commercial Availability Bias (which at least for homopiperazines has decreased over the years) and Synthetic Tractability Bias. Some structures don't show up much because they just don't get made much. And we'll also have to be sure that we're talking about the same things: benzo-fused homopiperazines (and other fused seven-membered rings) hit like crazy, as opposed to the monocyclic ones, which seem to be lower down the scale, somehow.

It's not implausible that there should be underprivileged scaffolds. The variety of binding sites is large, but not infinite, and I'm sure that it follows a power-law distribution like so many other things. The usual tricks (donor-acceptor pairs spaced just so far apart, pi-stacking sandwiches, salt bridges) surely account for much more than their random share of the total amount of binding stabilization out there in the biosphere. And some structures are going to match up with those motifs better than others.

So, any nominations? Have any of you had structural types that seem as if they should be good, but always underperform?

Comments (9) + TrackBacks (0) | Category: Drug Assays | Drug Industry History | Life in the Drug Labs

March 24, 2010

Privileged Scaffolds

Email This Entry

Posted by Derek

Here's a new article on the concept of "privileged scaffolds", the longstanding idea that there seem to be more biologically active compounds built around some structures than others. This doesn't look like it tells me anything I didn't know, but it's a useful compendium of such structures if you're looking for one. Overall, though, I'm unsure of how far to push this idea.

On the one hand, it's certainly true that some structural motifs seem to match up with binding sites more than others (often, I'd say, because of some sort of donor-acceptor pair motif that tends to find a home inside protein binding sites). But in other cases, I think that the appearance of what looks like a hot scaffold is just an artifact of everyone ripping off something that worked - others might have served just as well, but people ran with what had been shown to work. And then there are other cases, where I think that the so-called privileged structure should be avoided for everyone's good: our old friend rhodanine makes an appearance in this latest paper, for example. Recall that this one has been referred to as "polluting the literature", with which judgment I agree.

Comments (11) + TrackBacks (0) | Category: Drug Assays | Drug Industry History

February 10, 2010

Where Would You Start a Company?

Email This Entry

Posted by Derek

We've been talking a lot around here about small companies versus large ones, the merits of different therapeutic areas, and so on. So here's a question: if you were starting a small drug company today, where would you concentrate its efforts?

Oncology? Ten years ago, you could make that case, I think. But now everyone's piled into the area, so you'd have to have a real edge to make a go of it. For one thing, finding patients for clinical trials is a major problem. Your best shot here would be really obscure varieties of cancer, I'd think, unless you've got something really major. And how do you ever know if you've got something really major or not in this area until you get to the clinic, anyway?

Anti-infectives? There's certainly room for some new niche products here, but that's what they're going to be. And this is a surprisingly difficult area to make headway in, if you haven't worked in it before. Nothing's going to be an almighty blockbuster here (because nothing new is going to be a frontline therapy), but there is money to be made.

Cardiovascular and metabolics? I don't see how, and I barely see why, unless you've got the miracle HDL-raising pill up your sleeve. Diabetes, for its part, has been a fine area over the last ten or twenty years, but the safety criteria for a new therapy are now very stiff (and the market is pretty well covered, from several different angles). Not recommended, I'd say.

Alzheimer's? Good luck! Man, is there ever an unserved market here, but it's unserved for a lot of damned good reasons. The same goes for a number of other CNS indications. This whole area is a tightrope of risk and reward. Both are huge.

Or would you go the Genzyme route, making huge amounts by helping out people (a few people) that no one else can help at all? Again, this presupposes that you have some really good idea about how to approach these orphan diseases, and it's going to be tough to make a whole company out of them (since they're spread over such disparate therapeutic specialties). But this would seem to be feasible, with some luck.

Suggestions are welcome in the comments. I'm definitely not planning on starting a company myself, but I think that we need as many as possible, and perhaps some ideas will trigger something for someone in a position to act. . .

Comments (72) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

February 9, 2010

More On Pharma's Ugly Finances

Email This Entry

Posted by Derek

Friday's post has brought in a lot of comments, and they're still piling up. I wanted to address a few of the more frequent ones, though, out here on the front page.

First off, the idea that a bunch of stock analysts could have a useful opinion on a pharma company's return on investment doesn't seem to strike many people as plausible. Variations on "What do they know about this business?" and "Aren't these the same geniuses that wiped out the mortgage bond market?" have come up numerous times. My answer to the latter is no, they aren't. The stock and industry analysts are a different bunch entirely. That's not to say that they can't be stupid, or make mistakes (they do!) But these aren't the people who thought that they had all the risks figured for interest-rate swaps and collateralized debt obligations. If you have disagreements with industry analysts, then you should fight in their territory.

There's more substance to the "What do they know" objection, but still (in my view) not enough. What they know is what's been made public, of course, and as we in the industry know, that's not everything. But that doesn't make Wall Street's case any weaker this time, as far as I can tell. Morgan Stanley and their ilk are not missing any of the successful projects from inside big pharma - those all get aired out thoroughly. If they're short on data, it's on how many projects fail, and how much they cost, and those numbers aren't going to make the ROI look any better. Meanwhile, most all the in-licensed compounds actually get announced, since they're material transactions for someone, so far fewer of those escape notice. I don't like the Morgan Stanley point of view, not at all, but dislike is not a refutation.

Another thing to remember is that the people with the best figures on ROI are the upper management of the companies involved, and these are the people who are slashing head count and outsourcing wherever they can. And we have to make a distinction here, between diagnosis and treatment. We can disagree on whether this is the proper response (although I'm kind of stuck for alternatives), but is it still possible to argue that these CEOs and the like are reacting to something that isn't there? Something is precipitating a lot of large, painful, and nasty decisions, and I think that it's probably the very concerns about cost that we've been talking about. We need to separate the argument about whether those figures are real from the argument about what's been done in response.

Comments (75) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

February 8, 2010

Polluting the Literature with PAINs

Email This Entry

Posted by Derek

There's an article out from a group in Australia on the long-standing problem of "frequent hitter" compounds. Everyone who's had to work with high-throughput screening data has had to think about this issue, because it's clear that some compounds are nothing but trouble. They show up again and again as hits in all sorts of assays, and eventually someone gets frustrated enough to flag them or physically remove them from the screening deck (although that last option is often a lot harder than you'd think, and compound flags can proliferate to the point that they get ignored).

The larger problem is whether there are whole classes of compounds that should be avoided. It's not an easy one to deal with, because the question turns on how you're running your assays. Some things are going to interfere with fluorescent readouts, by absorbing or emitting light of their own, but that can depend on the wavelengths you're using. Others will mung up a particular coupled assay readout, but leave a different technology untouched.

And then there's the aggregation problem, which we've only really become aware of in the past few years. Some compounds just like to stick together into huge clumps, often taking the assay's protein target (or some other key component) with them. At first, everyone thought "Ah-hah! Now we can really scrub the screening plates of all the nasties!", but it turns out that aggregation itself is an assay-dependent phenomenon. Change the concentrations or added proteins, and whoomph: compounds that were horrible before suddenly behave reasonably, while a new set of well-behaved structures has suddenly gone over to the dark side.

This new paper is another attempt to find "pan-assay interference compounds", or PAINS, as they name them. (This follows a weird-acronym tradition in screening that goes back at least to Vertex's program to get undesirable structures out of screening collections, REOS, for "Rapid Elimination of, uh, Swill"). It will definitely be of interest to people using the AlphaScreen technology, since it's the result of some 40 HTS campaigns using it, but the lessons are worth reading about in general.

What they found was (as you'd figure) that while it's really hard to blackball compounds permanently with any degree of confidence, the effort needs to be made. Still, even using their best set of filters, 5% of marketed drugs get flagged as problematic screening hits - in fact, hardly any database gives you a warning rate below that, with the exception of a collection of CNS drugs, whose properties are naturally a bit more constrained. Interestingly, they also report the problematic-structure rate for the collections of nine commercial compound vendors, although (frustratingly) without giving their names. Several of them sit around that 5% figure, but a couple of them stand out with 11 or 12% of their compounds setting off alarms. This, the authors surmise, is linked to some of the facile combinatorial-type reactions used to prepare them, particularly ones that leave enones or exo-alkenes in the final structures.

So what kinds of compounds are the most worrisome? If you're going to winnow out anything, you should probably start with these: Rhodanines are bad, which doesn't surprise me. (Abbott and Bristol-Myers Squibb have also reported them as troublesome). Phenol Mannich compounds and phenolic hydrazones are poor bets. And all sorts of keto-heterocycles with conjugated exo alkenes make the list. There are several other classes, but those are the worst of the bunch, and I have to say, I'd gladly cross any of them off a list of screening hits.
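For readers who haven't seen how these alert filters actually get applied: in practice they're just substructure searches run over a compound list. Here's a minimal sketch using the open-source RDKit toolkit - note that the two SMARTS patterns below (the rhodanine core and a generic conjugated enone) are my own illustrative stand-ins, not the paper's filter set, which runs to hundreds of patterns.

```python
from rdkit import Chem

# Illustrative structural alerts as SMARTS patterns. These two are
# stand-ins for demonstration; the published PAINS filters are far
# more extensive and more carefully specified.
ALERTS = {
    "rhodanine": Chem.MolFromSmarts("O=C1CSC(=S)N1"),  # 2-thioxothiazolidin-4-one core
    "enone": Chem.MolFromSmarts("C=CC=O"),             # conjugated enone (Michael acceptor)
}

def flag_alerts(smiles):
    """Return the names of any alert substructures found in the molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return ["unparseable SMILES"]
    return [name for name, patt in ALERTS.items() if mol.HasSubstructMatch(patt)]

# Rhodanine itself trips the rhodanine alert; ethanol is clean.
print(flag_alerts("O=C1CSC(=S)N1"))
print(flag_alerts("CCO"))
```

A screening deck can then be run through this in one pass, with anything that comes back non-empty flagged for a closer look (rather than silently discarded - as the post notes, a few marketed drugs will trip these filters too).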

But not everyone does. As the authors show, there are nearly 800 literature references to rhodanine compounds showing biological effects. A conspicuous example is here, from the good folks at Harvard, which was shown to be rather nonspecifically ugly here. What does all this do for you? Not much:

"Rather than being privileged structures, we suggest that rhodanines are polluting the scientific literature. . .these results reflect the extent of wasted resources that these nuisance compounds are generally causing. We suggest that a significant proportion of screening-based publications and patents may contain assay interference hits and that extensive docking computations and graphics that are frequently produced may often be meaningless. In the case of rhodanines, the answer set represents some 60 patents and we have found patents to be conspicuously prevalent for other classes of PAINS. This collectively represents an enormous cost in protecting intellectual property, much of which may be of little value. . ."

Comments (11) + TrackBacks (0) | Category: Drug Assays | Drug Industry History | The Scientific Literature

February 5, 2010

Sheer Economics: How We Got in This Fix


Posted by Derek

I hate to do another post on this subject, after a good part of the week has been devoted to layoff news and the like, but this one is too much to ignore. A reader sent along this link, which quotes a Morgan Stanley appraisal of the pharma industry as an investment. Here's what they're telling their clients:

". . .Still significant value in Pharma - we see material upside to ROIC [return on invested capital], earnings and multiples as Pharma withdraws from most internal small-molecule research and reallocates capital to in-licensing and other non-pharma assets. Worsening generic pressure and R&D management changes lead us to expect material cuts to internal small-molecule research spend (~40% total R&D) in 2010/11, after a decade of dismal internal R&D returns. We expect AstraZeneca and Sanofi-Aventis to be among the leaders in externalizing research, and this is a key driver of our upgrade of AstraZeneca today to Overweight.

Reinvestment of internal research savings into in-licensing will yield three times the likely return, we calculate. Under in-licensing deals, downside risk for pharma companies is currently materially lower than for internally developed drugs. Although upside is also capped by pay-aways and milestone obligations, the net present value of these payments is more than offset by the lower risk-adjusted invested capital. Over one-third of pharma R&D spend is in pre-phase II, where the probability of reaching the market is <10%. Our proprietary analysis indicates that, unless the probability of an in-house molecule reaching the market is 30% or more, the risk-adjusted economic value added, or EVA, is three times higher under the external research model, with a greater predictability."

It could be said in fewer words, but it's all there. If you're looking for the reason the big companies are doing what they're doing, look no further. Agree with it or not, there's a case to be made - and there's Morgan Stanley, making it - that the cost of running new drug projects in big pharma is just too high relative to the risks of failure. Those returns, in fact, are calculated to be off by a factor of three.

You may not believe that factor, and I have to say, I found it hard to believe myself. But let's say the Morgan Stanley folks have their numbers off. Perhaps it's only twice as profitable to bring in outside drugs as it is to develop them internally. Don't believe that one, either? Maybe it's only 25% more profitable - can you imagine making a move that would increase your company's return on investment by 25%? Industries get remade by such changes at the margin, and this one is remaking ours. Why do we have any internal R&D left at all, if those figures are anywhere near right?
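A back-of-the-envelope version of that risk-adjusted comparison can be sketched as follows. Every number below is an illustrative assumption I've made up to show the shape of the argument; only the sub-10% pre-phase-II success probability and the ~30% break-even threshold come from the quoted note, and this is emphatically not Morgan Stanley's actual model.

```python
# Hypothetical risk-adjusted comparison of internal R&D vs. in-licensing.
# All payoffs, probabilities, and capital figures are made-up inputs.

def risk_adjusted_return(payoff, probability, invested_capital):
    """Expected value delivered per unit of capital put at risk."""
    return payoff * probability / invested_capital

# Internal: early-stage molecule, <10% chance of reaching the market.
internal = risk_adjusted_return(payoff=1000, probability=0.10,
                                invested_capital=500)

# In-licensed: upside capped by milestones and royalties, but the
# candidate is further along, so the success probability is higher.
licensed = risk_adjusted_return(payoff=600, probability=0.50,
                                invested_capital=500)

print(round(licensed / internal, 2))  # 3.0 with these made-up inputs
```

The point of the toy model is just that a lower success probability hurts the internal route far more than a capped payoff hurts the external one, as long as the capital at risk is comparable.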

Well, no one's tried to run a large company entirely by in-licensing, and I think that there are a lot of reasons why that wouldn't work. (For one thing, I don't think that there are enough things to in-license, and if one or more large companies announced that they were doing that exclusively, the price of each deal would go right up). And there needs to be some internal expertise left, if only to evaluate those external drug candidates to make sure you're not being taken. But still. All this means is that internal R&D will stay around, but it has to get cheaper and will very likely get smaller.

We can argue about the assumptions behind all this, but there's no doubt that a compelling business case can be made for this world view. Anyone who wants to argue differently - and a lot of us do - will have to come up with solid numbers and reasoning for why it just ain't so. I'm not sure such numbers exist.

There are many corollaries to this line of thought. One of them - and I hate to bring this up, considering all the horrible layoff news recently - is that one of the most psychologically comforting theories that we in R&D have for our present fix is likely wrong. I refer to the "Evil Clueless MBA CEO" theory, which has its satisfactions, but is a hazardous way to think. It is always dangerous to assume that people who do things you disagree with are doing them because they're just idiots or because they're innately malicious. In general, I'd say that the first explanation to jettison is malice, followed by stupidity (Hanlon's Razor). What that leaves you with is that these actions, stupid and malicious though they may appear, are probably being done for reasons that appear valid to the people doing them. I know, I know - some of these reasons are things like "So I can keep my high-paying CEO job", and we can't ignore that one. But a good way to lose a high-paying CEO job is to try to tell your board of directors (and your shareholders) why you're going to pass up an opportunity to get three times your ROIC.

Another thing to think about is, if these cost estimates are right, how did we get here? The best reason I can think of for such a disparity is that small companies (the source of these in-licensed drugs and projects) are often betting their entire existence on these ideas. They are very strongly motivated to do whatever they can do to get them to work (sometimes a bit too motivated, but that risk is already factored in), and if things don't pan out, they usually disappear. Basically, the in-licensing world unloads the risk from the large pharma company (and its shareholders) onto the investors in the smaller ones. The cost disparity will exist for as long as people are willing to back smaller companies. Now, this isn't to say that the big companies are always going to do a great job picking what to bring in. We've been talking a lot, for example, about the GSK-Sirtris deal, and that one may or may not work out. But the idea of doing big in-licensing deals in general - that's a different story, no matter how any individual company manages to execute it.

What that also means is that more of us are going to end up working for those smaller companies (which is something that I, and several commenters around here, have been saying for a while). If the large pharma outfits are going to devote more money to in-licensing, there will then be more opportunities for people developing things for them to in-license. The rough part is that all these structural changes in the drug industry are taking place (largely by coincidence, I think) during economic conditions which make funding such companies difficult.

And then there's the internal cost-cutting, for the R&D that's actually staying at the big companies. That, of course, generally means sending a lot of it to China, or wherever else it can be done more cheaply. And that's going to continue as long as it can indeed be done more cheaply, which means "not forever". Costs are already rising in China and India, although they have a good ways to go before they catch up to the US and Europe. I know that we can argue about how well that whole idea is going to work - there are clearly inefficiencies to doing a lot of your work through outsourcing, but as long as those don't eat up all the cost savings, it's still going to keep happening.

This, as a side note, is why I think that one of the suggestions that gets floated here in the comments from time to time, the idea of forming a "medicinal chemists' union", is completely useless. Unions form when workers have the leverage to preserve a higher-cost business model. In the end, the big industrial concerns of the early 20th century had to have workers, and they had to have them in certain locations, so the unions always had the threat of going on strike. An attempt to lower the boom under these conditions would result in everything going to China, and damned quickly.

So. . .what's happening to us, and to our industry, is not really mysterious. Our cost structure does not look to be supportable, and since there are cheaper alternatives that appear to be feasible, those will get tried. The disruption and destruction that all this is causing is real, of course. But the best I can offer is to try to understand what's driving all this upheaval, because that might help people to figure out how to protect their own jobs or where to jump next. Everyone has to give this some serious thought, because I don't see any reason why all this won't keep going on for some time to come.

Comments (109) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

January 4, 2010

Remember Apo-A1 Milano? Pfizer Does.


Posted by Derek

The folks over at the In Vivo Blog will soon be announcing their "Deal of the Year" in the biotech/pharma sector (you can scroll back over there to see the various nominees). But they could just as well run the competition in reverse, and award some retroactive Bad Deal statues based on what's been happening recently.

One of those might well go to the 2003 deal in which Pfizer paid over a billion dollars to acquire Esperion and their Apo-A1 Milano lipoprotein. If you've been following the cardiovascular field for a few years, you'll remember the big press that this got. The Milan variant of the protein seemed to be quite effective at reverse cholesterol transport - just typing that phrase takes me back a few years, to be honest. The hope was that periodic treatments might flush the arteries out and avert atherosclerosis.

And there things seemed to stay, hung up in that "promising therapy" zone. At the time, Pfizer was going to be the biggest thing ever in cardiovascular, what with Lipitor, with their CETP inhibitor torcetrapib, and with Apo-A1 Milano coming along at the same time. That dream is a pile of wreckage now, of course - Pfizer has de-emphasized the whole area. Esperion itself was spun back out in 2008 as a much smaller operation, minus the lipoprotein it came in with, and now Apo-A1 Milano itself has been sold off to The Medicines Company. For $10 million up front.

Yep, Pfizer gets $0.01 billion back from its $1.25 billion investment - well, more if things work out, but you'd have to think that most of that money is just gone. But I can't really say that this is just Pfizer's own problem, or just their own folly. This sort of thing can happen to any organization, and the larger it is, the more likely it is to make some sort of Big Move which then sends it falling down the stairs. After all, if you're trying to affect the future of a huge company, you have to do huge things, right? And these huge things take on a momentum of their own - witness another Pfizer disaster, Exubera. That inhaled insulin was going to be a billion-dollar drug, no question about it, and no one could tell the company any different. Well, except their customers.

But again, I don't see these things as coming from some particularly Pfizery mindset. Any other drug company of that size would probably have done things equally catastrophic, and as they get larger, the others surely will find their own open manholes to step confidently into. Since this is the first post of the new year, here's a resolution I wish the industry would consider: no big mergers in 2010. No gigantic sense-of-urgency do-this-deal-now productions, please. Let's try to do what we do better, rather than just do more of it.

Comments (24) + TrackBacks (0) | Category: Cardiovascular Disease | Drug Industry History

December 14, 2009

The Cost of New Drugs


Posted by Derek

I'm continuing my look at Bernard Munos' paper on the drug industry, which definitely repays further study (previous posts here, here, and here). Now for some talk about money - specifically, how much of it you'll need to find a new drug. The Munos paper has some interesting figures on this question, and the most striking is that the cost of getting a drug all the way to the market has been increasing at an annual rate of 13.4% since the 1950s. That's a notoriously tough figure to pin down, but it is striking that the various best estimates of the cost make an almost perfectly linear log plot over the years. We may usefully contrast that with the figures from PhRMA that indicate that large-company R&D spending has been growing at just over 12% per year since 1970. Looking at things from that standpoint, we've apparently gotten somewhat more efficient at what we do, since NME output has been pretty much linear over that time.
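As a quick sanity check, constant-percentage growth is exactly what produces a straight line on a log plot. The 13.4% rate is from the paper; the starting year and unit cost below are arbitrary figures of my own, just for illustration:

```python
# Sketch: constant-percentage growth gives a straight line on a log
# plot. The 13.4% annual rate is from the Munos paper; the starting
# year and starting cost are made-up illustrative values.
import math

start_year, rate = 1955, 0.134
log_costs = []
for year in range(start_year, 2010, 10):
    cost = (1 + rate) ** (year - start_year)  # cost in arbitrary units
    log_costs.append(math.log10(cost))
    print(year, round(math.log10(cost), 2))

# Equal steps in log space -- i.e., a straight line:
steps = [b - a for a, b in zip(log_costs, log_costs[1:])]
print(round(steps[0], 3))  # every decade adds the same ~0.546

# Doubling time at 13.4% per year:
print(round(math.log(2) / math.log(1.134), 1))  # about 5.5 years
```

A 5.5-year doubling time is a useful way to feel the weight of that 13.4% figure: costs per drug go up roughly tenfold every couple of decades.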

But that linear rate of production allows Munos to take a crack at a $/NME figure for each company on his list, and he finds that less than one-third of the industry has a cost per NME of under $1 billion, and some of them are substantially more. Of course, not every NME is created equal, but you'd have to think that there is large potential for mismatches in development cost versus revenues when you're up at these levels. Munos also calculates that the chance of a new drug achieving blockbuster status is about 20%, and that these odds have also remained steady over the years - this despite the way that many companies try to skew their drug portfolios toward drugs that could sell at this level.

How much of this cost is due to regulatory burden? A lot, but for all the complaining that we in the industry do about the FDA, they may, in the long run, be doing us a favor. Citing these three studies, Munos says that:

. . .countries with a more demanding regulatory apparatus, such as the United States and the UK, have fostered a more innovative and competitive pharmaceutical industry. This is because exacting regulatory requirements force companies to be more selective in the compounds that they aim to bring to market. Conversely, countries with more permissive systems tend to produce drugs that may be successful in their home market, but are generally not sufficiently innovative to gain widespread approval and market acceptance elsewhere. This is consistent with studies indicating that, by making research more risky, stringent regulatory requirements actually stimulate R&D investment and promote the emergence of an industry that is research intensive, innovative, dominated by few companies and profitable.

But this still leaves us with a number of important variables that we don't seem to be able to push much further - success rates in the clinic and in the marketplace, money spent per new drug, and so on. And that brings up the last part of the paper, which we'll go into next time: what is to be done about all this?

Comments (17) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

December 11, 2009

Another Take on the Munos Paper


Posted by Derek

Eric Milgram over at PharmaConduct has an excellent post up on the same paper I've been discussing this morning. As another guy who's been around the block a few times in this industry, he's struck by many of the same points I am (to the point of also linking to Wikipedia's page on Poisson distributions!)

And he has some interesting data of his own to present, too - well worth checking out.

Comments (10) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

Munos On Big Companies and Small Ones


Posted by Derek

So that roughly linear production of new drugs by Pfizer, as shown in yesterday's chart, is not an anomaly. As the Bernard Munos article I've been talking about says:

Surprisingly, nothing that companies have done in the past 60 years has affected their rates of new-drug production: whether large or small, focused on small molecules or biologics, operating in the twenty-first century or in the 1950s, companies have produced NMEs at steady rates, usually well below one per year. This characteristic raises questions about the sustainability of the industry's R&D model, as costs per NME have soared into billions of dollars.

What he's found, actually, is that NME generation at drug companies seems to follow a Poisson distribution, which makes sense. This behavior is found for systems (like nuclear decay in a radioactive sample) where there are a large number of possible events, but where individual ones are rare (and not dependent on the others). A Poisson process also implies that there's some sort of underlying average rate, and that the process is stochastic - that is, not deterministic, but rather with a lot of underlying randomness. And that fits drug development pretty damned well, in my experience.
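For the curious, here's a minimal sketch of what Poisson-distributed NME output looks like. The 0.5-per-year underlying rate is an illustrative assumption of mine, not a figure from the Munos paper:

```python
# Sketch of Poisson-distributed output at a low underlying rate.
# The rate of 0.5 NMEs per company per year is a made-up illustration.
import math
import random

random.seed(0)

def poisson_draw(lam):
    """One Poisson sample via Knuth's algorithm (fine for small lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

rate = 0.5
one_company = [poisson_draw(rate) for _ in range(20)]  # 20 "years"
print(one_company)  # typically lumpy: mostly 0s and 1s, the odd 2

# Across many such companies the average 20-year total converges on
# 20 * 0.5 = 10, but any single firm's record looks streaky.
totals = [sum(poisson_draw(rate) for _ in range(20)) for _ in range(1000)]
print(sum(totals) / len(totals))  # close to 10
```

That streakiness is the key point: a company can go years without a drug and then produce two in a row without anything about its "productivity" having changed at all.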

But that's just the sort of thing, as I've pointed out, that the business-trained side of the industry doesn't necessarily want to hear about. Modern management techniques are supposed to quantify and tame all that risky stuff, and give you a clear, rational path forward. Yeah, boy. The underlying business model of the drug industry, though, as with any fundamentally research-based industry, is much more like writing screenplays on spec or prospecting for gold. You can increase your chances of success, mostly by avoiding things that have been shown to actively decrease them, and you have to continually keep an eye out for new information that might help you out. But you most definitely need all the help you can get.

As that Pfizer chart helps make clear, Munos is particularly not a fan of the merge-your-way-to-success idea:

Another surprising finding is that companies that do essentially the same thing can have rates of NME output that differ widely. This suggests there are substantial differences in the ability of different companies to foster innovation. In this respect, the fact that the companies that have relied heavily on M&A tend to lag behind those that have not suggests that M&A are not an effective way to promote an innovation culture or remedy a deficit of innovation.

In fact, since the industry as a whole isn't producing noticeably more in the way of new drugs, he suggests that one possibility is that nothing we've done over the last 50 years has helped much. There's another explanation, though, that I'd like to throw out, and whether you think it's a more cheerful one is up to you: perhaps the rate of drug discovery would actually have declined otherwise, and we've managed to keep it steady? I can argue this one semi-plausibly both ways: you could say, very believably, that the progress in finding and understanding disease targets and mechanisms has been an underlying driver that should have kept drug discovery moving along. On the other hand, our understanding of toxicology and our increased emphasis on drug safety have kept a lot of things from coming to the market that certainly would have been approved thirty years ago. Is it just that these two tendencies have fought each other to a draw, leaving us with the straight lines Munos is seeing?

Another important point the paper brings up is that the output of new drugs correlates with the number of companies, better than with pretty much anything else. This fits my own opinions well (therefore I think highly of it): I've long held that the pharmaceutical business benefits from as many different approaches to problems as can be brought to bear. Since we most certainly haven't optimized our research and development processes, there are a lot of different ways to do things, and a lot of different ideas that might work. Twenty different competing companies are much more likely to explore this space than one company that's twenty times the size. Much of my loathing for the bigger-bigger-bigger business model comes from this conviction.

In fact, the Munos paper notes that the share of NMEs from smaller companies has been growing, partly because the ratio of big companies to smaller ones has changed (what with all the mergers on the big end and all the startups on the small end). He advances several other possible reasons for this:

It is too early to tell whether the trends of the past 10 years are artefacts or evidence of a more fundamental transformation of the drug innovation dynamics that have prevailed since 1950. Hypotheses to explain these trends, which could be tested in the future, include: first, that the NME output of small companies has increased as they have become more enmeshed in innovation networks; second, that large companies are making more detailed investigations into fundamental science, which stretch research and regulatory timelines; and third, that the heightened safety concerns of regulators affect large and small companies differently, perhaps because a substantial number of small firms are developing orphan drugs and/or drugs that are likely to gain priority review from the FDA owing to unmet medical needs.

He makes the point that each individual small company has a lower chance of delivering a drug, but as a group, they do a better job for the money than the equivalent large ones. In other words, economies of scale really don't seem to apply to the R&D part of the industry very well, despite what you might hear from people engaged in buying out other research organizations.

In other posts, I'll look at his detailed analysis of what mergers do, his take on the (escalating) costs of research, and other topics. This paper manages to hit a great number of topics that I cover here; I highly recommend it.

Comments (41) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Who Discovers and Why

December 10, 2009

Pfizer's R&D Productivity


Posted by Derek

Courtesy of Bernard Munos, author of the Nature Reviews article that I began blogging about yesterday, comes this note about Pfizer's track record with new molecules. His list of Pfizer NMEs since 2000 is Geodon (ziprasidone, 2001), Vfend (voriconazole, 2002, from Vicuron - whoops, not so, this one's Pfizer's), Relpax (eletriptan, 2002), Somavert (pegvisomant, 2003, from Pharmacia & Upjohn), Lyrica (pregabalin, 2004, from Warner Lambert), Sutent (sunitinib, 2006, from Sugen/Pharmacia), Chantix (varenicline, 2006), Selzentry (maraviroc, 2007), and Toviaz (fesoterodine, 2008, from Schwarz Pharma). There are some good drugs on that list, but considering that even just five years ago, the company was claiming that it had 101 NMEs in development, and was going to file 20 NDAs by now, it might seem a bit thin.
[Figure: Pfizer R&D spending vs. cumulative NMEs since 2000]

It might especially seem that way when you look over this graph, also provided by Munos (but not used in his recent article). You can see that Pfizer's R&D spending has nearly tripled since the year 2000, but that cumulative NME line doesn't seem to be bending much. And, as Munos points out, two (and now three) productive research organizations have been taken out along the way to produce these results. It is not, as they say, a pretty picture.

Comments (49) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

December 9, 2009

Drug Companies Since 1950


Posted by Derek

There's a data-rich paper out in Nature Reviews Drug Discovery on the history of drug innovation in the industry. I'll get to its real conclusions in another upcoming post, but some of the underlying data are worth a post of their own.

The author (Bernard Munos of Lilly) looks at new drug approvals (NMEs) since 1950, and finds:

At present, there are more than 4,300 companies that are engaged in drug innovation, yet only 261 organizations (6%) have registered at least one NME since 1950. Of these, only 32 (12%) have been in existence for the entire 59-year period. The remaining 229 (88%) organizations have failed, merged, been acquired, or were created by such M&A deals, resulting in substantial turnover in the industry. . .Of the 261 organizations, only 105 exist today, whereas 137 have disappeared through M&A and 19 were liquidated.

At the high end of the innovation scale, 21 companies have produced half of all the NMEs that have been approved since 1950, but half of these companies no longer exist. . .Merck has been the most productive firm, with 56 approvals, closely followed by Lilly and Roche, with 51 and 50 approvals, respectively. Given that many large pharmaceutical companies estimate they need to produce an average of 2–3 NMEs per year to meet their growth objectives, the fact that none of them has ever approached this level of output is concerning.

Indeed it is - either those growth targets are unrealistic, or the number of new drugs thought to be needed to support them has been overestimated, or we're all in some trouble. Speculation welcomed - I lean toward the growth targets being hyped up to please investors, but I'm willing to be persuaded.

And the fact that most of the new drugs come from a much smaller list of companies should be no surprise - that looks like a perfect example of a power law (aka "long tail") effect. Given the way research works, I'd actually be surprised if it were any other way.

Now about that figure of 4,300 companies, though: what could possibly be on it? All sorts of startups that I've never heard of, of course - but how can that account for such a large number?

It appears to come from this PDF, where it appears on slide 114 (whew). There's no listing, just a breakdown of 1450 companies in the US, 450 in Canada, 1600 in Europe, and 740 in the Asia/Pacific region. Anyone want to hazard any guesses about how real those numbers are?
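The power-law point a couple of paragraphs up can be sketched numerically. Assuming a classic Zipf distribution of output across the 261 NME-producing firms (the exponent of 1.0 is a pure assumption on my part, not a fit to the real data; only the 261 count comes from the paper), a small top slice accounts for half the total:

```python
# Sketch: under a Zipf-like (power-law) output distribution, a handful
# of top-ranked companies produce half of everything. The exponent is
# an illustrative assumption; only the 261 count is from the paper.
n_companies = 261
exponent = 1.0

output = [1 / rank ** exponent for rank in range(1, n_companies + 1)]
total = sum(output)

running, top = 0.0, 0
for share in output:
    running += share
    top += 1
    if running >= total / 2:
        break

print(top)  # 12 with these inputs -- same ballpark as the paper's 21
```

The exact count depends entirely on the assumed exponent, but the qualitative result - a top-twenty-ish group producing half of everything - falls out of almost any heavy-tailed distribution you pick.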

Comments (18) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

December 7, 2009

Why Don't We Have More Protein-Protein Drug Molecules?


Posted by Derek

Almost all of the drugs on the market target one or more small-molecule binding sites on proteins. But there's a lot more to the world than small-molecule binding sites. Proteins spend a vast amount of time interacting with other proteins, in vital ways that we'd like to be able to affect. But those binding events tend to be across broader surfaces, rather than in well-defined binding pockets, and we medicinal chemists haven't had great success in targeting them.

There are some successful examples, with a trend towards more of them in the recent literature. Inhibitors of interactions of the oncology target Bcl-2 are probably the best known, with Abbott's ABT-737 being the poster child of the whole group.

But even though things seem to be picking up in this area, there's still a very long way to go, considering the number of possible useful interactions we could be targeting. And for every successful molecule that gets published, there is surely an iceberg of failed attempts that never make the literature. What's holding us back?

A new article in Drug Discovery Today suggests, as others have, that our compound libraries aren't optimized for finding hits in such assays. Given that the molecular weights of the compounds that are known to work tend toward the high side, that may well be true - but, of course, since the amount of chemical diversity up in those weight ranges is ridiculously huge, we're not going to be able to fix the situation through brute-force expansion of our screening libraries. (We'll table, for now, the topic of the later success rate of such whopper molecules).

Some recent work has suggested that there might be overall molecular shapes that are found more often in protein-protein inhibitors, but I'm not sure if everyone buys into this theory or not. This latest paper does a similar analysis, using 66 structurally diverse protein-protein inhibitors (PPIs) from the literature compared to a larger set (557 compounds) of traditional drug molecules. The PPIs tend to be larger and greasier, as feared. They tried some decision-tree analysis to see what discriminated the two data sets, and found a shape descriptor and another that correlated more with aromatic ring/multiple-bond count. Overall, the decision tree stuff didn't shake things down as well as it does with data sets for more traditional target classes, which doesn't come as a surprise, either.

So the big questions are still out there: can we go after protein-protein targets with reasonably-sized molecules, or are they going to have to be big and ugly? And in either case, are there structures that have a better chance of giving us a lead series? If that's true, is part of the problem that we don't tend to have such things around already? If I knew the answers to these questions, I'd be out there making the drugs, to be honest. . .

Comments (14) + TrackBacks (0) | Category: Drug Assays | Drug Industry History | In Silico

November 11, 2009

Against Panic


Posted by Derek

With the waves of layoffs going on, and all the nasty structural changes we're seeing in this business, it's easy to start feeling a toxic combination of fear and despair. And while I understand that, I'm going to try to briefly argue against it.

(1) I think that, in the years to come, people are most definitely going to need medicines. And by that, I mean new ones, because there are a lot of conditions out there that we can't treat very well. As the world gets (on the average) older and wealthier, this need will do nothing but increase. In many cases, pharmaceutical treatment is cheaper than waiting and having surgery or the like, so there's a large scale cost-saving aspect to this, too.

(2) I also think that many of these medicines are still going to be small molecules. Now, biological products can be very powerful, and can do things that we can't (as yet) do with small molecules - mind you, the reverse is true, too. And I think that biologics will gradually increase their share of the pharma world as we find out more about how to make and administer them. But it is very hard to beat an orally administered small molecule for convenience, cost, and patient compliance, and those are three very big factors.

(3) What we're witnessing now is a huge argument about how we're going to make those small molecule drugs, where we're going to make them, and who will do all those things. And it's driven by money, naturally. We don't have enough new products on the market, which means that we have to sell the ones we have like crazy (which leads to all sorts of other problems, legal and otherwise). At the same time, we're having to spend more and more money to try to get what drugs we can through the whole process. These trends appear unsustainable, especially when running at the same time.

(4) But as Herbert Stein used to say, if something can't go on, then it won't. Right now, the only way out that companies can see is to cut costs as hard as possible (and market as hard as possible). Those both bring in short-term results that you can point at. Long-term, well. . .probably not so good. But in that same long term, we're going to have to find better ways of discovering and developing drugs. If we can improve that process, the fix can come from that direction rather than from the budget-cutting one.

(5) And those improvements don't have to be incredible to make a big difference. We have a 90% failure rate in the clinic as it stands. If we could just work it to where we only lose 8 out of 10 drug candidates, that would double the number of new drugs coming to the market, which would cheer everyone up immensely.
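The arithmetic behind that claim is worth making explicit. A quick back-of-the-envelope sketch (the candidate count is purely illustrative, not a real industry figure):

```python
# Back-of-the-envelope look at the failure-rate arithmetic above.
# The number of candidates entering the clinic is hypothetical.
candidates = 100          # drug candidates entering the clinic per year (illustrative)

success_now = 10          # ~90% clinical failure rate -> 10% succeed
success_better = 20       # "only lose 8 out of 10" -> 20% succeed

approvals_now = candidates * success_now // 100        # 10 new drugs
approvals_better = candidates * success_better // 100  # 20 new drugs

print(approvals_better / approvals_now)  # 2.0 -- output doubles
```

In other words, shaving the failure rate from 90% to 80% doubles output without a single extra candidate entering the clinic, which is why even a modest improvement in attrition matters so much.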

(6) The questions are: can we improve R&D in time? Can we improve it with the resources we have? I think that the demand (and thus the potential rewards) is too great for a solution not to be found, if there's one out there. And we still know so little about what we do that I can't imagine that answers aren't out there somewhere. Who's going to find them? How long will it take? Where are they? I've no clue. But that looks like the way out to me.

Comments (32) + TrackBacks (0) | Category: Business and Markets | Current Events | Drug Industry History

November 6, 2009

Thoughts on What Used to Be Schering-Plough

Email This Entry

Posted by Derek

So what are we up to now, Day Three of Greater Merck? The merger with Schering-Plough went through earlier this week, and you won't get any more numbers by searching the stock tickers for SGP.

I find that weird, since I started my career there in the late 1980s/early 1990s. But while I was there, it seemed like there were mergers and rumors of mergers every few weeks. That's no doubt a hindsight-enhanced picture I have, but it's safe to say that I heard about S-P merging with (or being purchased by) every single major player in the business during my years there. And it didn't happen (not then, at any rate).

My favorite moment came in about 1992 when a colleague came to my office one afternoon saying "It's us and Upjohn. Announced after the close of business on Friday. All of CNS is going to Kalamazoo". I hardly even looked up, uttering a one-word reply that compared this news flash to bovine waste.

"Why do you say that?", he replied. "You don't think it could happen?" "Of course I think it could happen", I said. "But I'll bet against any specific prediction of when and who. Got any money on you?" "Why don't you think this is the real thing?" he asked again, to which I replied "Because I don't think that any deal this size, set to be announced on Friday, could be so screwed up that you and I would know about it on Tuesday afternoon".

"Well, I kind of see your point there. . .", he began. And of course that particular deal never happened. But I'm sure that there were others that nearly did. That's one of the things that goes on in the background of this industry - there are a lot of tentative discussions and what-if ideas that get looked at briefly (or sometimes not so briefly) which people outside of upper management never hear about. This stuff generally starts to leak (if it does) once it gets closer to really happening, and for every one that happens, there are several that get thought about but never quite work.

Of course, I'm using "work" in the sense of "get completed", not in the sense of "works out in the long run to the benefit of everyone involved". I'm not convinced that many drug company mergers fall into that latter category at all, and that goes for the Merck/Schering-Plough one, too. There don't seem to be any dramatic announcements coming out of the deal so far, and that probably means that the changes (which are, and have to be, coming) will just be delayed while the company takes stock of what it now has, and what it now is.

But, as someone from another company was saying to me last night, the bigger you are, the harder it is to do that. It takes longer before you feel that there's enough information to make a good decision, which is probably why Pfizer's current rearrangements are taking so agonizingly long to make themselves clear. That same decision-making extends, I think, to drug discovery and development issues, which is one reason I don't like the whole mega-company idea to start with.

There's also the groupthink problem. Pfizer, for example, was able to convince itself that inhaled insulin was going to be a big winner, even as people outside the company wondered if that could be quite right. (And not only was it not a big seller, it was an unprecedented disaster). I don't believe that people get any smarter in large groups. Quite the contrary. All that "wisdom of crowds" stuff, as I understand it, is about consulting large numbers of individual thinkers, not getting them all into one room and having them agree on something. Especially if some of the people in the room can decide the salaries and promotions of the rest of the crowd.

I wish both the Merck people and the Schering-Plough people well, and the combined company good fortune, and that's not just because I find myself a stockholder of it. But I wish it hadn't come to this, and I wish it wouldn't keep coming to this, either.

Comments (20) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

November 2, 2009

In Which You Get to Hear the Phrase "Hatch-Waxman" Again

Email This Entry

Posted by Derek

There's a constant running battle in the drug industry between the two kinds of pharmaceutical companies: the ones who discover the drugs first, and the ones who sell the drugs cheaply after the patents have expired. It surprises me still how many people I run into (outside my work) who don't make that distinction, or who don't even realize that there is one.

But the generic industry is a very different place. Their research budgets are far smaller than the ones at the discovery companies, since they're only dealing with drugs that everyone knows to already work. Their own research is directed toward satisfying the regulatory requirements that they're making the equivalent substance, and to finding ways to make it as cheaply as possible. And some of them are very good at it - some ingenious syntheses of marketed drugs have come out of the better generic shops. Of course, some real head-shaking hack work has, too, but that you can find everywhere.

The tension between the two types of company is particularly acute when a big-selling drug is nearing its patent expiration. It's very much in the interest of the generic companies to hurry that process along, so often they challenge the existing patents on whatever grounds they can come up with, figuring that the chances of success justify the legal expenses. Since the 1984 Hatch-Waxman act, there's been an even greater incentive, the so-called "Paragraph IV" challenge. A recent piece in Science now makes the case that this process has gotten out of control.

After four years of a drug's patent life, a generic company can file an Abbreviated New Drug Application (ANDA) and challenge existing patents on the grounds that they're either invalid or that the ANDA doesn't infringe them. (This, for example, is what happened when Teva broke into Merck's Fosamax patent, taking the drug generic about four years early). If the challenge is successful, which can take two or three years to be resolved, the generic company gets an extra bonus of 180 days of exclusivity. The authors of the Science piece say that this process is tipped too far toward the generic side, and it's cutting too deeply into the research-based companies. (As noted here, that's rather ironic, considering the current debate about such provisions for biologic drugs, where some parties have been citing the Hatch-Waxman regime as a wonderful success story in small molecules).

This all took a while to get rolling, but the big successes (such as the Fosamax example) have bred plenty of new activity. There are now five times as many Paragraph IV challenges as there were at the beginning of the decade. Teva, for example, which is one of the big hitters in the generic world, had 160 pending ANDAs in 2007, of which 92 were running under Paragraph IV. Here's a look at some recent litigation in the area, which has certainly enriched various attorneys, no matter what else it's done.

Under Hatch-Waxman, a new drug starts off with five years of "data exclusivity" during which a generic version can't be marketed. The Science authors argue that the losses from Paragraph IV now well outweigh the gains from this provision, and that the term should be extended (which would put it closer to those found in Europe, Canada, and Japan). They also bring up the possibility of selectively extending data exclusivity case-by-case or for certain therapeutic areas, but I have to say, this makes me nervous. There are too many opportunities for gamesmanship in that sort of system, and I think that one goal of a regulatory regime should be to make it resistant to that sort of thing.

But I do support the article's main point, which is that the whole generic industry depends on someone doing the work to discover new drugs in the first place, and we want to make sure that this engine continues to run. Politically, though, anything like this will be a very hard sell, since it'll be easy to paint it as a Cynical Giveaway to the Rapacious and Hugely Profitable Drug Companies. But speaking as someone working for the RHPDCs, I can tell you that we are indeed having a tougher time coming up with the new products with which to exploit the helpless masses. . .

Comments (23) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Drug Prices | Patents and IP | Regulatory Affairs

October 30, 2009

Fifty Years of Scientific History For You

Email This Entry

Posted by Derek

Here's a most interesting graph from the latest issue of Nature Reviews Drug Discovery. It's from an article on trying to discern trends from broad-scale literature analysis, and it's worth a separate blog post of its own (coming shortly). But after yesterday's discussion of whether there are too many graduates in science and engineering, this looked useful.
[Figure: big%20graph.jpg - fifty-year chart of NIH funding, doctorates granted, and new drug approvals]
Note, for example, the ramp up in NIH funding in the late 1950s/ early 1960s (a very large change in percentage terms), which was followed by a similar surge in doctorates granted. The late-1990s funding increases seem to be having a similar effect near the end of the chart.

Note also the well-publicized drug drought - but the historical perspective is interesting. We've clearly fallen off the 1970-2000 trend line of increasing drug approvals, but we seem to be stabilizing at roughly a 1980s level. The argument is whether that's where we should be or not. We have all these new tools, but all these new worries. Lots of new targets, but fewer good ones like the old days. Many new tools, but plenty of difficult-to-interpret data generated from them. And so on. But 1985 is apparently about where the balance of all these things is putting us.

Comments (34) + TrackBacks (0) | Category: Business and Markets | Drug Industry History | Who Discovers and Why

October 28, 2009

You Mean You Don't Have to Buy Them?

Email This Entry

Posted by Derek

Johnson & Johnson's CEO has given an interview to the Financial Times explaining his company's strategy with acquisitions. And right now, that strategy is. . .not to make acquisitions. They see partnerships as making a lot more sense:

“The cost of developing compounds has become so high and become so risky that we are looking to share the risks and opportunities and find more and more partnerships.”

J&J has been putting this into practice recently, taking equity stakes in several different companies. In the case of Elan and Crucell, interestingly, the company has agreed to standstill provisions, in order to make it clear that they're not just on the first step to an outright acquisition any time soon. It's interesting that this would be coming from Johnson & Johnson, since in many cases they've been one of the less destructive acquirers in the business already. (Well, with some exceptions, like when they took over Scios).

The temptation to compare this policy with Pfizer's is almost overwhelming, but the two companies are in very different positions. For one thing, J&J has their medical devices and diagnostics businesses, which are both profitable and run on different rhythms than their pharma side. Even more importantly, they also aren't locked into a grow-or-die situation, needing larger and larger infusions of revenue to meet the expenses which get larger every time they go out and buy those revenue streams, which means that they need to go buy some more and then. . .

The article says that J&J has no deals under consideration right now, but that this style of deal-making is definitely how the company plans to operate. There's definitely enough risk to be spread around - I just hope that there's enough reward for everyone, too.

Comments (19) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

October 21, 2009

O Brave New World! That Has Such Companies In't!

Email This Entry

Posted by Derek

Steve Usdin at BioCentury sent along a reprint of the newsletter's annual "Back to School" issue from last month (available for open access here) in response to my note about "micropharma" the other day. And it's clear that he's been thinking along the same lines. Whether or not this model is going to work is another question, but that looks like something that we're going to be finding out.

As the issue notes, in a pithy quote from Mike Powell of Sofinnova, the key problem is "how to restructure an industry where it costs $100 million to answer a question but people are only willing to pay you $50 million for the answer." Since the amount of money being handed out is probably not going to increase any time soon, the only way out of that dilemma is to find some way for that first figure to go down.

One of the groups that won't be happy about that process is academic centers that are used to seeing their intellectual property as a potentially lucrative source of funds. The strike-it-rich days do not look to be coming back any time soon. Instead, BioCentury advises universities to get ready to adopt a "non-ROI" approach to developing their ideas, by use of grants, public-private consortia, and help from foundations and other nonprofits. (Perhaps a name like "delayed ROI" or, if you're being especially weaselly about it, "enhanced ROI", might help that concept go down a bit smoother).

CRO firms are almost certainly going to have to be part of that process, since there are plenty of skills needed to push a drug target or molecule along that are not found in most universities. That, to me, would indicate a real market for a low-cost CRO outfit targeting academia. I'm not sure if anyone is serving that market, or trying to, but it would seem to have some potential in it. Anyone who can help to run should-we-kill-this experiments, without spending too much money getting the answer, will have something that looks to be in demand.

In general, this landscape would mean that ideas will go longer before companies are formed around them, with the idea that they can be tested out a bit without having to build new corporations to do it. (As another quote from the article had it, "The unmet need in the industry is drugs, not companies".) Payoffs will be slower, and they won't be as large when they come, either. Venture capital investors will be asked to have more patience under this model, and that's not something that they're necessarily noted for. And someone's going to have to have the money (and nerve) to form mid-sized organizations that will pick up the best of the things coming out of academia, since many of them still won't be quite ready to go right into a big organization. The non-humungous companies that have survived to this point might step up and fill this role, and BioCentury also suggests that Japanese and Indian companies might fill this space as well.

The big question is: will people be able to put up with this, or not? After all, no one's envisioning failure rates going down, they're just hoping that the failures will happen sooner and cost less money. Will they? It's not like "fail quickly" hasn't been a goal of companies in the business for years now. But sometimes it's hard to fail any other way than slowly (and expensively).

Well, the common theme to all this (and to most of the other crystal-ball reading going on these days) is that the industry isn't going to be able to go on in the way it's been accustomed to. If you ask a hundred people in this business what it's going to look like ten or fifteen years from now, the only thing you could probably get them to agree on is "Not like it does today". We'll just have to wait to see if they're all playing "Cheat the Prophet" or not. . .

Comments (14) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

October 15, 2009

Fall From Grace

Email This Entry

Posted by Derek

A couple of articles have come together and gotten me to thinking. Back during the summer, long-time medicinal chemist Mark Murcko published a short editorial in Drug Discovery Today commemorating the Apollo 11 moon landing's 40th anniversary:

"People like me, who are old enough to actually remember the events of July 1969, are instantly assailed with powerful and reflexive emotions when we think back to the effect Apollo had on us: the excitement, awe and wonder. My family, like so many others, was obsessed with space exploration. The walls of our den were covered with NASA photos, diagrams and technical bulletins – anything we could get them to send us. Models of rockets hung from the ceiling by fishing line. . .We soaked it all in, and the events of that day remain a seminal memory of my childhood. It was glorious; nothing could possibly be more exhilarating.

And yet...there are some interesting parallels to what all of us, engaged in the roiling tumult of biomedical research, do here and now. Our mission – to invent new therapies that transform human health and alleviate suffering – captures the imagination as profoundly as did Apollo. Our efforts once were regarded with the same admiration as the NASA breakthroughs (and while public perceptions may be different today, our mission has not wavered). We are attempting, one could argue, even more complex technical achievements. . . ."

And just the other day I came across this piece in The New Atlantis entitled "The Lost Prestige of Nuclear Physics". (Via Arts and Letters Daily). Its thesis, which I think is accurate:

"The story of nuclear physics is one of the most remarkable marketing disasters in intellectual history. In the space of a few decades, the public perception of the atom’s promise to serve humanity, and the international admiration that surrounded the many brilliant people who unraveled the mysteries of matter, had collapsed. So pronounced was the erosion of attitudes toward nuclear physics that, by the late 1990s, several European physicists felt it necessary to establish an organization called Public Awareness of Nuclear Science for the explicit purpose of improving the public image of their discipline."

Of course, in that case, there was that little matter of the atomic bomb (and the subsequent arms race) to contrast against the excitement of the scientific discoveries and their peaceful uses. One might argue that for the general public, it was all very admirable to be able to figure out the forces that kept atoms together, but when these forces turned out to have such alarming and immediate real-world consequences, the backlash was profound. And while I sympathize with the nuclear physicists, I have to only wish them luck in their attempts to regain a good public image. That's because those consequences are still very much with us, as a glance at the news will show.

But the fall from grace of drug research has been almost as profound, and we've never developed an equivalent of nuclear weapons, have we? In our case, I think the problem has been that we're a business. We bill people for our discoveries when they work. And as I've argued here, people will always have a much more emotional response to any issue that affects their physical health, and can quickly come to resent anyone that charges them money to maintain it. (Doctors, though, benefit from the one-on-one patient relationship. People hate hospitals, hate health insurance companies, and hate drug companies, but still respect their own physicians). This, as manifested by complaints about drug prices, uneasiness about hard-sell advertising, and suspicion about our motivations and our methods, seems to be what's sent public opinion of us into the dumper.

But in the end, Murcko has a point. We really are doing something good for humanity by working on understanding diseases and trying to find treatments for them. Not everything about the process is optimal, for sure, but can anyone argue that the broad effort of pharmaceutical research has been a bad thing? The problem is, it's easy to look around, and slide from there into self-pity. But moaning about how no one appreciates us is a waste of time. The best cure is, as far as I can see, to give people reasons to realize what we're worth.

People who've been pulled back from the brink of death from infectious disease or cancer already have those reasons. But there are so many terrible unmet medical needs still out there, which means that there's plenty of room for us both to do good and to show that we can do good. Yes, it will cost a lot of money to do that, which means that what cures will come will also cost money. But with the partial exception of air to breathe, most of the necessities of life tend to involve money changing hands. That's not a disqualification.

So to the readers out there in the industry - go do some good work today. Don't spend too much time in your more useless meetings. Stand up in front of your fume hood or sit down in front of your keyboard and do something worthwhile. It's a worthwhile job, even if some people don't realize that yet.

Comments (45) + TrackBacks (0) | Category: Drug Industry History | Drug Prices | Why Everyone Loves Us

September 17, 2009

The Drug Business: A Turbulent Future?

Email This Entry

Posted by Derek

One of this blog's regular correspondents has just been attending a chemistry outsourcing conference (program here), and heard a very interesting talk from Stefan Loren of a Baltimore investment advisory firm, Westwicke Partners. Loren's a product of the Sharpless lab, who went on to Abbott, then Wall Street (Legg Mason and into the hedge fund business), and had some very provocative things to say about our industry:

His talk, "The Pharma Titanic: It's Time to Root for the Iceberg" presented a sobering view of the challenges that big pharma will have to deal with if it wants to survive.

Loren opened with an overview of the US national health care debate. Regardless of the ultimate form that a national system takes, he believes we'll see mandatory insurance; this will be good for big pharma. He also believes that there will be strong pressure for mandatory comparative effectiveness testing...probably not good for big pharma. Who will pay for this and what resources this would require is another matter. Wearing his investment advisor glasses, he sees global pharma sales declining, led by North America, with future growth coming in Asia and Latin America. He also sees evidence of healthcare avoidance in the US: unfilled prescriptions, unfinished courses of prescriptions, and people just not visiting medical and dental practitioners - not a good trend.

The coming wave of patent expirations of the top 10 drugs will hit big pharma hard. Generics will grow: In 5 to 10 years, he predicts that 80 percent of ALL prescriptions will be generic. When coupled with the meager investments in bow wave research over the past 15+ years, as measured by IPOs, there's trouble ahead. Global biotech IPOs are in the toilet and the US is no longer viewed by the investment community as the global leader in biotech. There have been an unprecedented number of bankruptcies in biotech. There is going to be a huge oversupply of production capacity for small molecule manufacturing. ROIs for pharma and biotech are largely negative...it gets worse. He calls this the "death spiral."

Pharma pipelines are seen as very poorly run and wasteful. Poor projects linger far longer than they should. Too much emphasis is placed on me-too drugs and line extensions. Too much emphasis is placed on acquisitions and licensing rather than innovation. Here it comes: he says "I have NEVER seen a merger that worked." We were then entertained by a chart showing Pfizer's stock market performance over the period of time from pre-WLA, through Pharmacia-Upjohn, and now Wyeth...you would not be a happy camper if you had put your retirement account in Pfizer management's hands and their merger mania. Wall Street has a saying: "Two dogs don't make a kennel." Of course, what we hear is "this time it's different" along with the usual happy talk about synergies. Loren does believe that mergers can work and can be synergistic if the two companies merging are small...large mergers just don't work and large companies get paralyzed by bureaucratic inertia.

His solution? Break up large pharma into therapeutic areas and build shared networks between distinct entities. Small organizations can operate far more efficiently in decision making about research directions - use the network to maintain manufacturing efficiencies. Small focused companies will revitalize the industry and offer opportunities for scientists coming out of academia. In response to a question from the audience regarding Merck's ambitions to adopt this networked architecture, he doesn't believe they can make it work.

He does see light at the end of the tunnel with respect to supply chain assurance driving a return to sanity. The heparin, glycerin, and melamine disasters have awakened people, and the cost of securing global supply chains is going to make US industry much more competitive. It also will focus serious scrutiny on big pharma. The "next heparin" case will have serious personal consequences for big pharma managers. . .

Well, a good amount of this I agree with, but some of it I'm not sure about. Taking things in order, I don't know about a decline in US sales, but Asia is most definitely where a lot of companies are expecting growth. (And for "Asia", you could substitute "China" and be within margin of error). And his generic prescription figures may not be right on target, but the trend surely is. We've discovered a lot of useful drugs over the years, and anything new we find has to compete against them. The only way to break out of that situation is to find drugs in new categories entirely, and we all know how easy that is.

But as for the US not being the global leader in biotech - well, if we aren't, then who is? You could possibly make a case for "no clear leader at all, for now", but I think that's as far as I can go. And that coming oversupply of manufacturing for small molecule drugs, which may well be real, will be bad news for the companies that have already invested in that area, of course, but good news for up-and-comers, who will be able to pick up capacity more cheaply.

But Loren's comments about mergers I can endorse without reservation. I've been saying nasty things about big pharma mergers since this blog began, and nothing in the last seven years has changed my mind. And I certainly hope that his idea of smaller companies coming along to revitalize the industry is on target, because it's sure not going to be revitalized by (for example) Pfizer buying more people. I've made that Pfizer stock-chart point of his here, as well - like the rest of the industry, PFE stock had a wonderful time of it in the 1990s, but this entire decade it's been an awful place to have your money.

I expect these comments to bring in a lot of comments of their own - so, how much of this future are you buying?

Comments (23) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History | Regulatory Affairs

September 4, 2009

Sepracor: A Desirable Property?

Email This Entry

Posted by Derek

Well, I didn't see this one coming. Dainippon Sumitomo has announced that they're buying Sepracor. My first thought on reading this was "Are they sure they want to do that?"

I say that because the ostensible reason that the Japanese company is pulling out their wallet is that they're looking to replace declining revenues at home. In that case, why are they buying declining revenues over here? Their flagship product (Lunesta) is going to be going off patent in the not-too-distant future, and they don't have a gigantic pipeline of stuff behind it.

The answer seems to be a deficiency that many Japanese firms have felt: a lack of boots-on-the-ground sales staff over here. The US is the biggest single profit center for the worldwide drug industry, and it's impossible for a big company to ignore that. But realizing all those potential profits isn't easy, if you're coming in from a standing start. (It's not like Dainippon Sumitomo has a big profile over here). Says the Boston Globe:

In a note to investors on the sale, Credit Suisse analyst Scott Hirsch said the deal made sense for Sepracor. He noted that the company is generating $300 million to $400 million in cash a year but has a limited pipeline of new drugs in development and its existing products will face competition from generic drugs in coming years. Hirsch also doubted another suitor would step forward with a better bid.

“In our view, if a US firm wanted Sepracor, that likely would’ve happened already, as there have been plenty of lookers over the years,’’ said Hirsch, who has a neutral rating on the stock. “We think Dainippon Sumitomo is more interested in the sales platform and operating leverage than the revenue stream.’’

So where does that leave Sepracor's research operations? It's true that Takeda has apparently been very kind to Millennium's research staff, but that was a more research-driven deal than this one seems to be. I'm sure the folks at Sepracor are looking for a little more clarity on that question. The problem is, the company's revenues have come almost entirely from clever (albeit irritating) patent-busting moves (active metabolites, pure enantiomers, and so on), but these strategies ran out of gas some time ago as the rest of the industry tightened up its IP protection. Rightly or not, Sepracor doesn't have a reputation as an outfit with a lot of great in-house research ideas. Outside of a ready-made sales force, what exactly do they have to offer?

Comments (7) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

August 27, 2009

Rings of the Future!

Email This Entry

Posted by Derek

Here's an interesting paper that some of you may have seen in J. Med. Chem.: "Heteroaromatic Rings of the Future". That's an odd title, but an appropriate one.

For the non-chemists in the crowd who made it to this paragraph, heteroaromatic rings are a very wide class of organic compounds. They're flat cyclic structures with one or more nitrogen, oxygen, or sulfur atoms in the ring - I'll leave out explaining the concept of "aromaticity" for now, but suffice it to say that it makes them flat and gives them some other distinct properties. These structures are especially important in medicinal chemistry. If you stripped out all the drugs that contain something from this class, you'd lose a bit under half of the current pharmacopoeia, and that share has lately been increasing.

The authors have sat down and attempted to work out computationally all the possible heteroaromatic systems. If you include a carbonyl group as a component of the ring, you get 23,895 different scaffolds (and only 2986 if you leave the carbonyl out of it). Their methods to define and predict that adjective "possible" are extensive and worth reading if you're curious; they did put a lot of effort into that question, and their assumptions seem realistic to me. (For example, right off, they only considered mono- and bicyclic systems, 5- and 6-membered only, C, H, N, O and S).

At any rate, only 1701 of those 23,895 have ever been reported in the literature. And it looks as if reports of new ring systems reached a peak in the late 1970s, and have either dropped off or (at the very least) never exceeded those heights since then. The authors estimate that perhaps 3,000 of their list are synthetically feasible, with a few hundred of them being notably more likely than the rest. Their paper, in fact, seems to be a brief to alter that publication trend by explicitly pointing out unexplored synthetic territory. It wouldn't surprise me if they go back in a few years to see if they were able to cause an inflection point.

I hope they do. I'm a great believer in the idea that we medicinal chemists need all the help we can get, and if there are reasonable ring systems out there that we're not exploiting, then we should get to them. Adventurous chemists should have a look.

Comments (19) + TrackBacks (0) | Category: Chemical News | Drug Industry History | The Scientific Literature

August 26, 2009

Thalidomide for Myeloma: Whose Idea Was It?

Posted by Derek

So, if you're a patient with a rare disease (or a relative of a patient with one), and you have an idea for repurposing an old drug for treatment. . .and you get a company interested, and it actually works. . .works to the point that the company takes in a billion or two dollars a year. . .what then?

Some readers will have guessed that I'm talking about thalidomide and Celgene, and right they are. Beth Jacobsen is the person involved - her husband died of multiple myeloma, but her medical sleuthing had turned up the idea of using thalidomide as a therapy for the disease, and she kept up the pressure to have the idea tried out. Celgene's mentioned her in annual reports, and she's been thanked by name in a publication on the clinical results.

But now she's suing Celgene, saying that they misappropriated her idea. Complicating the issue is the question of whether the late Judah Folkman was really the source of the inspiration, in a phone conversation with Jacobsen (earlier versions of the story have it that way, but the lawsuit apparently tells it differently). Which way did it happen? Is Jacobsen indeed owed compensation? And whether she is or not, will she be able to convince a court? Matt Herper has the story at Forbes.

I'll defer my own comments until I know a bit more about the case, but this is definitely an interesting one. I can add something that might be of relevance, though: a search in PubMed for "thalidomide myeloma" turns up 64 pages of references, almost all of them post-1999. But there is this one, from Italy in 1963. Has the idea been around for that long? Someone who can track down that journal can tell us. . .

Comments (20) + TrackBacks (0) | Category: Cancer | Drug Development | Drug Industry History | Patents and IP

August 19, 2009

Drug Companies Are Polar Bears? Maybe Not.

Posted by Derek

There's an interesting article up over at InVivoBlog, and I wanted to see what the readership here thought of its main premise. Subtracting out the cute ecological analogies (Big Pharma as polar bears, for example), you get to this:

. . .For example, AstraZeneca, Novartis, and Bristol-Myers, all operate in the fields of neuroscience, oncology, and cardiovascular health. While some pharmas involve themselves in nutritionals, animal health, infectious disease, and other fields, all of these companies also engage with a mixing pot of therapeutic areas.

The relative strategic uniformity isn’t generally the case with the leading companies in other industries. In the high-tech industry, for example, there is a much higher level of specialization. Google is mainly in the advertising business; Microsoft, software; Research in Motion, in wireless solutions. You aren’t likely to see Facebook manufacturing semiconductors any time soon. (Yes we are aware of Microsoft’s Bing search engine and the new Google Chrome OS, but still.)

It is likely that health care businesses will evolve in a similar fashion. The leaders of the future will be those with unique and complex models which sub-speciate into differentiated forms. Companies will focus nearly all of their efforts on a single therapeutic area, becoming “immunology companies” or “cancer companies”. These companies will also become more integrated across sectors. A cardiology company will sell diagnostics, devices, and therapeutics pertaining to cardiovascular health.

I'm not so sure, myself. I can see reasons for this to happen, but I can also see forces that will pull in other directions. For one thing, I'm not sure if there are enough targets in some of these therapeutic areas to keep even a medium-sized company running. The host-of-smaller-companies model, each of them trying to hit it big, seems like a better fit, as long as they can share an ecosystem (there I go, too) with the larger deep-pocketed multi-area players.

Another problem is that I think the barriers to, say, a cardiovascular drug company becoming also a cardiovascular device company are higher than the ones to it becoming a cardiovascular-and-diabetes drug company. Moving into another drug discovery area at least lets you use some of your existing staff and resources, while heading out into diagnostics or devices will probably take you into territory that you don't know so well.

And besides, I think that the analogy with other industries doesn't hold up very well. The authors list off a few software and hardware companies, but don't Google and Microsoft have their hands in a lot of different areas? And have car makers (domestic or foreign) settled down into making only SUVs, only pickup trucks, or only sedans? Not that I've seen. Know of any movie studios making nothing but adventures or romantic comedies? Or any grocery chains that only sell vegetables, but not fruit?

In all those cases, the existing infrastructure lets such companies expand, at relatively lower cost, into related areas that will diversify their customer base. Medical devices and diagnostics may look like a similar situation, but I really don't think it is.

Comments (24) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

August 14, 2009

Spray-Painted For Success

Posted by Derek

I do a lot of talking around here about how the general public doesn't really have a good idea of what goes on inside a drug company. But a conversation with a colleague has put me to thinking that this might be largely our own fault.

Consider the public face that our industry projects. Look at the press releases and the advertisements - what's the impression that you get? That there is a defined process for discovering drugs, for one thing, and what's more, that we are the master of it. Now, I know that we don't always send out that message. There are attempts to tell people about how many compounds have to be made, how many projects end up failing. But for the most part, we don't press-release that stuff.

No, the press releases are for the investors, and for them, we want to project that we're productive, confident, resourceful. . .in short, that we've got things under control. The last thing Wall Street wants to hear about is that you don't always know which drug targets are the right ones to work on, that you're not quite sure of the best way to prosecute them, and that (despite continuing efforts) these conditions look to obtain for quite a while to come.

And this attitude is one of the things that seeps out into the general public consciousness. That, I think, is why you get people who are convinced that we could cure a lot of these diseases, but that we just don't - you know, for all sorts of evil and profitable reasons. They've bought into our hype. If we haven't cured the common cold, that must be because we make a lot more money selling people stuff for it, not because antiviral drug development is flippin' difficult. (Especially for something like the common cold, but that's another story).

Now, to some extent, there is a defined process for discovering drugs - well, several defined processes. It's just that it doesn't work all that well, not on the absolute scale. No one could look at clinical failure rates of around 90% and say that we've got everything covered. Weirdly, that's one of the things that gives me hope for the industry, that even small improvements would make a big difference. What if only 80% of all the compounds we took into the clinic crashed and burned? That would be great! It would double our success rate!
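The arithmetic behind that is simple enough to put in a few lines. If every clinical candidate costs roughly the same to run, the expected spend per approved drug is just the per-candidate cost divided by the success rate — so halving the failure rate halves the bill. The $100 million per-candidate figure below is an assumed placeholder for illustration, not a number from this post:

```python
def cost_per_approval(cost_per_candidate_musd, success_rate):
    """Expected clinical spend per approved drug: on average you need
    1/success_rate candidates to get one approval."""
    return cost_per_candidate_musd / success_rate

# Illustrative only: $100M per candidate is an assumption
print(cost_per_approval(100, 0.10))  # ~$1000M per approval at 90% failure
print(cost_per_approval(100, 0.20))  # ~$500M per approval at 80% failure
```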

But when I mention that 90% problem to people outside the drug industry, they usually have no idea. All they hear about are the successes. Perhaps it would do us some good to mention the failures once in a while?

Comments (29) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Why Everyone Loves Us

August 10, 2009

Pharma's Return on Investment: Yikes

Posted by Derek

There's a recent article in Nature Reviews Drug Discovery that has some alarming figures in it. This is yet another look at the industry from McKinsey, and we'll get to their McKinseyish solutions in a moment. But first, some numbers:

They calculate that the return on investment (ROI) from small-molecule drug research was nearly 12% during the late 1990s, but since 2001 it's been more like 7.5%. If true, that's not a very nice number at all, because their data indicate that most companies assume a capitalization rate of between 8.5 and 11% - in other words, internal industry estimates of what it costs to develop a drug over time now run higher, on average, than the actual returns from developing one.
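To see why a return below the cost of capital stings so much, here's a back-of-the-envelope comparison of what a research dollar grows into versus what it cost once you capitalize it over the development timeline. The 12-year timeline is my assumption for illustration, not a figure from the article:

```python
def value_multiple(roi, cost_of_capital, years):
    """Ratio of what a research dollar returns to what that dollar
    cost, once both are compounded over the development timeline."""
    return (1 + roi) ** years / (1 + cost_of_capital) ** years

# Assumed 12-year timeline; ROI and capitalization rates from the article
print(round(value_multiple(0.075, 0.085, 12), 2))  # ~0.89: below 1, value destroyed
print(round(value_multiple(0.12, 0.085, 12), 2))   # ~1.46: the late-1990s regime
```

At a 7.5% return against an 8.5% cost of capital, every dollar put into research comes back worth about ninety cents. That's the fix the numbers describe.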

Another alarming bit of news is their analysis of Phase III failures. From 1990 to 2007 there were 106 of those nasty, expensive events. But the McKinsey figures are that 45% of those failures were due to insufficient efficacy versus placebo - which, in theory, is the sort of thing you're supposed to be rather more sure about by that point, what with having run Phase II trials for efficacy and all. (I'd like to know how many Phase III trials succeeded over that time period as well - what's the overall percentage of failure at that point?) Another 24% of the failures were due to insufficient efficacy versus the standard of care, which is at least a bit more understandable. But together, nearly 70% of all Phase III failures aren't due to tox, they're because the drugs just didn't work as well as their developers thought.

Back to those ROI figures, though. Either those numbers are wrong, or we're in quite a fix. (Of course, since the authors are consultants, their viewpoint is likely that those numbers are the best available, that all of us are indeed in a fix, and that if we pay them money they'll help us out of it). The paper does have some recommendations, to wit:

1. Cut costs, but not the obvious stuff that companies have been doing. Instead, they suggest broader strategies such as considering whether a company's clinical trials are consistently over-powered, and to not do quite as much "planning for success", since most development programs fail. That is, don't automatically gear up for a full overlapping development workup for every compound in the pipeline, but consider staging things so you won't waste as much effort if (or when) they crash out. And naturally, they also suggest outsourcing whatever "non-core" functions there are available.

2. Work faster. I have to say, though, that if I got paid every time I heard this one, I wouldn't have to work. The authors point out, correctly, that delays in getting a compound to market are indeed hideously costly, but they hedge it by saying that "Of course, gains in speed cannot come from short cuts: the key to capturing value from programme acceleration is choosing the right programmes to accelerate". And that leads into their third category, which is. . .

3. Make better decisions. This isn't quite as much of an eye-roller as it might seem, because this is where they bring in those Phase III numbers above. Such failures suggest some deeper problems:

"In our experience, many organizations still advance compounds for the wrong reasons: because of momentum, 'numbers-focused' incentive systems or through waiting too long to have tough conversations about the required level of product differentiation."

And I have to say, they have a point. People who've been in the industry for some years will have seen all of those mistakes made, for sure. But figuring out how to stop those things from happening is the tough part, and presumably that's one of the things that McKinsey is selling.

Comments (45) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

July 31, 2009

Where Drugs Come From, and How. Once More, With A Roll of the Eyes

Posted by Derek

I linked yesterday to a post by Megan McArdle about health care reform. And while I realize that everyone got into a shouting match in the comments to my own post on the subject - and people sure did in the comments to hers; it's endemic - I wanted to quote a section from her on drug discovery:

Advocates of this policy have a number of rejoinders to this, notably that NIH funding is responsible for a lot of innovation. This is true, but theoretical innovation is not the same thing as product innovation. We tend to think of innovation as a matter of a mad scientist somewhere making a Brilliant Discovery!!! but in fact, innovation is more often a matter of small steps towards perfection. Wal-Mart’s revolution in supply chain management has been one of the most powerful factors influencing American productivity in recent decades. Yes, it was enabled by the computer revolution–but computers, by themselves, did not give Wal-Mart the idea of treating trucks like mobile warehouses, much less the expertise to do it.

In the case of pharma, what an NIH or academic researcher does is very, very different from what a pharma researcher does. They are no more interchangeable than theoretical physicists and civil engineers. An academic identifies targets. A pharma researcher finds out whether those targets can be activated with a molecule. Then he finds out whether that molecule can be made to reach the target. Is it small enough to be orally dosed? (Unless the disease you’re after is fairly fatal, inability to orally dose is pretty much a drug-killer). Can it be made reliably? Can it be made cost-effectively? Can you scale production? It’s not a viable drug if it takes one guy three weeks with a bunsen burner to knock out 3 doses.

I don't think a lot of readers here will have a problem with that description, because it seems pretty accurate. True, we do a lot more inhibiting drug targets than we do activating them, because it's easier to toss a spanner in the works, but that's mostly just a matter of definitions. And this does pass by the people doing some drug discovery work in academia (and the people doing more blue-sky stuff in industry), but overall, it's basically how things are, plus or minus a good ol' Bunsen burner or two.

But not everyone's buying it. Take this response by Ben Domenech over at The New Ledger. We'd better hope that this isn't a representative view, and that the people who are trying to overhaul all of health care as quickly as possible have a better handle on how our end of the system works:

. . .But needless to say, this passage and the ones following it surprised me a great deal. Working at the Department of Health and Human Services provided me the opportunity to learn a good deal about the workings of the NIH, and I happen to have multiple friends who still work there — and their shocked reaction to McArdle’s description was stronger than mine, to say the least.

“McArdle clearly doesn’t understand what she’s writing about,” one former NIH colleague said today. “Where does she think Nobel prize winners in biomedical research originate, academic researchers or in Pharma? Our academic researchers run clinical trials and develop drugs. I’m not trying to talk down Pharma, which I’m a big fan of, but I don’t think anyone in the field could read what she wrote without laughing.”

Well, I certainly could make it through without a chuckle, and I'll have been doing drug discovery for twenty years this fall. So how does the guy from HHS think things go over here?

To understand how research is divided overall, consider it as three tranches: basic, translational, and clinical. Basic is research at the molecular level to understand how things work; translational research takes basic findings and tries to find applications for those findings in a clinical setting; and clinical research takes the translational findings and produces procedures, drugs, and equipment for use by and on patients. . .

. . .The truth, as anyone knowledgeable within the system will tell you, is that private companies just don’t do basic research. They do productization research, and only for well-known medical conditions that have a lot of commercial value to solve. The government funds nearly everything else, whether it’s done by government scientists or by academic scientists whose work is funded overwhelmingly by government grants.

Hmm. Well-known with a lot of commercial value. Now it's true that we tend to go after things with commercial value - it is a business, after all - but how well-known is Gaucher disease? Or Fabry disease? Mucopolysaccharidosis I? People who actually know something about the drug industry will be nodding their heads, though, because they'll have caught on that I'm listing off Genzyme's product portfolio (part of it, anyway), which is largely made up of treatments for such things. There are many other examples. Believe me, if we can make money going after a disease, we'll give it a try, and there are a lot of diseases. (The biggest breakdown occurs not when a disease affects a smaller number of people, but when almost no one who has it can possibly pay for the cost of developing the treatment, as in many tropical diseases).

But even taking Domenech's three research divisions as given - and they're not bad - don't we in industry even get to do a little bit of translational research? Even sometimes some basic stuff? After all, in the great majority of cases when we start attacking some new target, there is no drug for it, you know. We have to express the protein in an active form, work up a reliable assay using it, screen our compound collections looking for a lead structure, then work on it for a few years to make new compounds that are potent, selective, nontoxic, practical to produce, and capable of being dosed in humans. (Oh, and they really should be chemical structures that no one's ever made or even speculated about before). All of that is "productization" research? Even when we're the first people to actually take a given target idea into the clinic at all?

That happens all the time, you know. The first project I ever worked on in this industry was a selective dopamine antagonist targeted for schizophrenia. We were the first company to take this particular subtype into the clinic, and boy, did we bomb big. No activity at all. It was almost as if we'd discovered something basic about schizophrenia, but apparently that can't be the case. Then I worked on Alzheimer's therapies, namely protease inhibitors targeting beta-amyloid production, and if I'm not mistaken, the only real human data on such things has come from industry. I could go on, and I will, given half a chance. But I hope that the point has been made. If it hasn't, then consider this quote, from here:

“. . .translational research requires skills and a culture that universities typically lack, says Victoria Hale, chief executive of the non-profit drug company the Institute for OneWorld Health in San Francisco, California, which is developing drugs for visceral leishmaniasis, malaria and Chagas' disease. Academic institutions are often naive about what it takes to develop a drug, she says, and much basic research is therefore unusable. That's because few universities are willing to support the medicinal chemistry research needed to verify from the outset that a compound will not be a dead end in terms of drug development."

The persistent confusion over what's done in industry and what's done in academia has been one of my biggest lessons from running this blog. The topic just will not die. A few years ago, I ended up writing a long post on what exactly drug companies do in response to the "NIH discovers all the drugs" crowd, with several follow-ups (here, here, and here). But overall, Hercules had an easier time with the Hydra.

Now, there is drug discovery in academia (ask Dennis Liotta!), although not enough of it to run an industry. Lyrica is an example of a compound that came right out of the university labs, although it certainly had an interesting road to the market. And the topic of academic drug research has come up around here many times over the last few years. So I don't want to act as if there's no contribution at all past basic research in academia, because that's not true at all. But neither is it the case that pharma just swoops in, picks up the wonder drugs, and decides what color the package should be.

But what really burns my toast is this part:

So Pharma is interested in making money as their primary goal — that should surprise no one. But they’re also interested in avoiding litigation. Suppose for a moment that Pharma produces a drug to treat one non-life threatening condition, and it’s a monetary success, earning profits measured in billions of dollars. But then one of their researchers discovers it might have other applications, including life-saving ones. Instead of starting on research, Pharma will stand pat. Why? Because it doesn’t make any business sense to go through an entire FDA approval process and a round of clinical trials all over again, and at the end of the day, they could just be needlessly jeopardizing the success of a multi-billion dollar drug. It makes business sense to just stand with what works perfectly fine for the larger population, not try to cure a more focused and more deadly condition.

Ummm. . .isn't this exactly what happened with Vioxx? Merck was trying to see if Cox-2 inhibitors could be useful for colon cancer, which is certainly deadly, and certainly a lot less common than joint and muscle pains. Why didn't Merck "stand pat"? Because they wanted to make even more money, of course. They'd already spent some of the cash that would have to have been spent on developing Vioxx, and cancer trials aren't as long and costly as they are in some other therapeutic areas. So it was actually a reasonable thing to look into. If you're staying in the same dosing range, you're not likely to turn up tox problems that you didn't already see in your earlier trials. (That's where Merck got into real trouble, actually - the accusation was that they'd seen signs of Vioxx's cardiovascular problems before the colon cancer trial, but breezed past them). But you just might come up with a benefit that allows you to sell your drug to a whole new market.

And that might also explain why, in general, drug companies look for new therapeutic opportunities like this all the time with their existing drugs. In fact, sometimes we look for them so aggressively that we get nailed for off-label promotion. No, instead of standing pat, we get in trouble for just the opposite. Your patented drug is a wasting asset, remember, and your job is to make the absolute most of it while it's still yours. Closing your eyes to new opportunities is not the way to do that.

The thing is, Domenech's heart seems to be mostly in the right place. He just doesn't understand the drug industry, and neither do his NIH sources. Talking to someone who works in it would have helped a bit.

Comments (35) + TrackBacks (0) | Category: Academia (vs. Industry) | Business and Markets | Drug Industry History

July 20, 2009

Amyloid in Trouble

Posted by Derek

Here's an interesting look at the current state of the Alzheimer's field from Bloomberg. The current big hope is Wyeth (and Elan)'s bapineuzumab, which I last wrote about here. That was after the companies reported what had to be considered less-than-hoped-for efficacy in the clinic. The current trial is the one sorted out by APOE4 status of the patients. After the earlier trial data, it seems unlikely that there's going to be a robust effect across the board - the people with the APOE4 variant are probably the best hope for seeing real efficacy.

And if bapineuzumab doesn't turn out to work even for them? Well:

“Everyone is waiting with bated breath on bapineuzumab,” said Michael Gold, London-based Glaxo’s vice president of neurosciences, in an interview. “If that one fails, then everyone will say we have to rethink the amyloid hypothesis.”

Now that will be a painful process, but it's one that may well already have begun. beta-Amyloid has been the front-runner for. . .well, for decades now, to be honest. And it's been a target for drug companies since around the late 1980s/early 1990s, as it became clear that it was produced by proteolytic cleavage from a larger precursor protein. A vast amount of time, effort, and money has gone into trying to find something that will interrupt that process, and it's going to be rather hard to take if we find out that we've been chasing a symptom of Alzheimer's rather than a cause.

But there's really no other way to find such things out. Human beings are the only animals that really seem to get Alzheimer's, and that's made it a ferocious therapeutic area to work in. The amyloid hypothesis will die hard if die it does.

Comments (21) + TrackBacks (0) | Category: Alzheimer's Disease | Clinical Trials | Drug Industry History | The Central Nervous System

July 17, 2009

Drug Approvals, Natural And Unnatural

Posted by Derek

I seem to have been putting a lot of graphics up this week, so here's another one. This is borrowed from a recent Science paper on the future of natural-products based drug discovery. It's interesting both from that viewpoint, and because of the general approval numbers:
[Figure: natural-product-derived drugs and overall drug approvals per year]
And there you have it. Outside of anomalies like 2005, we can say, I think, that the 1980s were a comparative Golden Age of Drug Approvals, that the 1990s held their own but did not reach the earlier heights, and that since 2000 the trend has been dire. If you want some numbers to confirm your intuitions, you can just refer back to this.

As far as natural products go, from what I can see, the percentage of drugs derived from them has remained roughly constant: about half. Looking at the current clinical trial environment, though, the authors see this as likely to decline, and wonder if this is justified or not. They blame two broad factors, one of them being the prevailing drug discovery culture:

The double-digit yearly sales growth that drug companies typically enjoyed until about 10 years ago has led to unrealistically high expectations by their shareholders and great pressure to produce "blockbuster drugs" with more than $1 billion in annual sales (3). In the blockbuster model, a few drugs make the bulk of the profit. For example, eight products accounted for 58% of Pfizer’s annual worldwide sales of $44 billion in 2007.

As an aside, I understand the problems with swinging for the fences all the time, but I don't see the Pfizer situation above as anything anomalous. That's a power-law distribution, and sales figures are exactly where you'd expect to see such a thing. A large drug company with its revenues evenly divided out among a group of compounds would be the exception, wouldn't it?
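To put a number on that aside: a Zipf-type power-law portfolio shows the same concentration with no blockbuster-chasing pathology built in at all. The 40-product portfolio size and the exponent below are made up purely for illustration:

```python
def top_k_share(n_products, k, s=1.0):
    """Share of total sales captured by the top k products when sales
    follow a Zipf (power-law) distribution with exponent s."""
    weights = [1 / r**s for r in range(1, n_products + 1)]
    return sum(weights[:k]) / sum(weights)

# A hypothetical 40-product portfolio: the top 8 products dominate,
# much as 8 products made up 58% of Pfizer's 2007 sales
print(f"{top_k_share(40, 8):.0%}")  # about 64%
```

In other words, a handful of products carrying most of the revenue is what a power law predicts, not evidence of anything unusual.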

The other factor that they say has been holding things back is the difficulty of screening and working with many natural products, especially now that we've found many of the obvious candidates. A lot of hits from cultures and extracts are due to compounds that you already know about. The authors suggest that new screening approaches could get around this problem, as well as extending the hunt to organisms that don't respond well to traditional culture techniques.

None of these sound like they're going to fix things in the near term, but I don't think that the industry as a whole has any near-term fixes. But since the same techniques used to isolate and work with tricky natural product structures will be able to help out in other areas, too, I wish the people working on them luck.

Comments (10) + TrackBacks (0) | Category: Business and Markets | Drug Assays | Drug Development | Drug Industry History

July 15, 2009

Why Does Screening Work At All? (Free Business Proposal Included!)

Posted by Derek

I've been meaning to get around to a very interesting paper from the Shoichet group that came out a month or so ago in Nature Chemical Biology. Today's the day! It examines the content of screening libraries and compares them to what natural products generally look like, and they turn up some surprising things along the way. The main question they're trying to answer is: given the huge numbers of possible compounds, and the relatively tiny fraction of those we can screen, why does high-throughput screening even work at all?

The first data set they consider is the Generated Database (GDB), a calculated set of all the reasonable structures with 11 or fewer nonhydrogen atoms, which grew out of this work. Neglecting stereochemistry, that gives you between 26 and 27 million compounds. Once you're past the assumptions of the enumeration (which certainly seem defensible - no multiheteroatom single-bond chains, no gem-diols, no acid chlorides, etc.), then there is no human bias involved: that's the list.

The second list is everything from the Dictionary of Natural Products and all the metabolites and natural products from the Kyoto Encyclopedia of Genes and Genomes. That gives you 140,000+ compounds. And the final list is the ZINC database of over 9 million commercially available compounds, which (as they point out) is a pretty good proxy for a lot of screening collections as well.

One rather disturbing statistic comes out early when you start looking at overlaps between these data sets. For example, how many of the possible GDB structures are commercially available? The answer: 25,810 of them - in other words, you can only buy fewer than 0.1% of the possible compounds with 11 heavy atoms or below, making the "purchasable GDB" a paltry list indeed.

Now, what happens when you compare that list of natural products to these other data sets? Well, for one thing, the purchasable part of the GDB turns out to be much more similar to the natural product list than the full set. Everything in the GDB has at least 20% Tanimoto similarity to at least one compound in the natural products set, not that 20% means much of anything in that scoring system. But only 1% of the GDB has a 40% Tanimoto similarity, and less than 0.005% has an 80% Tanimoto similarity. That's a pretty steep dropoff!
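For the non-chemists still reading: a Tanimoto similarity is just the ratio of the structural-fingerprint bits two molecules share to the total bits either one has set. A minimal sketch, with made-up bit positions standing in for real fingerprint bits (tools like RDKit generate the real thing from structures):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints represented as
    sets of 'on' bit positions: |A intersect B| / |A union B|."""
    inter = len(fp_a & fp_b)
    union = len(fp_a | fp_b)
    return inter / union if union else 0.0

# Hypothetical fingerprints - the bit positions are invented for illustration
mol_a = {3, 17, 42, 88, 101}
mol_b = {3, 17, 42, 90, 101}
print(tanimoto(mol_a, mol_b))  # 4 shared of 6 total bits, about 0.67
```

So an 80% similarity means four out of every five set bits are shared, which is a genuinely close structural match, and why so little of the full GDB clears that bar against the natural-product set.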

But the "purchasable GDB" holds up much better. 10% of that list has 100% Tanimoto similarity (that is, 10% of the purchasable compounds are natural products themselves). The authors also compare individual commercial screening collections. If you're interested, ChemBridge and Asinex are the least natural-product-rich (about 5% of their collections), whereas IBS and Otava are the most (about 10%).

So one answer to "why does HTS ever work for anything" is that compound collections seem to be biased toward natural-product type structures, which we can reasonably assume have generally evolved to have some sort of biological activity. It would be most interesting to see the results of such an analysis run from inside several drug companies against their own compound collections. My guess is that the natural product similarities would be even higher than the "purchasable GDB" set's, because drug company collections have been deliberately stocked with structural series that have shown activity in one project or another.

That's certainly looking at things from a different perspective, because you can also hear a lot of talk about how our compound files are too ugly - too flat, too hydrophobic, not natural-product-like enough. These viewpoints aren't contradictory, though - if Shoichet is right, then improving those similarities would indeed lead to higher hit rates. Compared to everything else, we're already at the top of the similarity list, but in absolute terms there's still a lot of room for improvement.

So how would one go about changing this, assuming that one buys into this set of assumptions? The authors have searched through the various databases for ring structures, taking those as a good proxy for structural scaffolds. As it turns out, 83% of the ring scaffolds among the natural products are unrepresented among the commercially available molecules - a result that I assume Asinex, ChemBridge, Life Chemicals, Otava, Bionet and their ilk are noting with great interest. In fact, the authors go even further in pointing out opportunities, with a table of rings from this group that closely resemble known drug-like ring systems.
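The scaffold-coverage comparison itself is conceptually simple set arithmetic once every compound has been reduced to its ring scaffold. A hedged sketch - the scaffold identifiers below are made-up strings standing in for real frameworks (in practice you'd derive something like Bemis-Murcko scaffolds from the structures first):

```python
# Toy scaffold sets; in a real analysis these would be canonical scaffold
# SMILES derived from each database, not hand-picked names.
natural_product_scaffolds = {
    "indole", "macrolactone", "chromone",
    "spiroketal", "quinolizidine", "pyranone",
}
commercial_scaffolds = {"indole", "chromone", "benzimidazole", "piperazine"}

# Scaffolds found in natural products but absent from the catalog -
# the analogue of the paper's "83% unrepresented" figure.
unrepresented = natural_product_scaffolds - commercial_scaffolds
fraction = len(unrepresented) / len(natural_product_scaffolds)

print(sorted(unrepresented))
print(round(fraction, 2))  # 4 of 6 scaffolds, 0.67 in this toy example
```

The hard part, of course, is not this bookkeeping but generating canonical scaffolds from millions of structures consistently - which is why the result is worth a paper.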

But wait a minute. . .when you look at those scaffolds, a number of them turn out to be rather, well, homely. I'd be worried about elimination to form a Michael acceptor in compound 19, for example. I'm not crazy about the N,S acetal in 21 or the overall stability of the acetals in 15, 17 and 31. The propiolactone in 23 is surely reactive, as is the quinone in 25, and I'd be very surprised if that's not what they owe their biological activities to. And so on.
[Figure: selected ring scaffolds from the Shoichet group's paper]
All that said, there are still some structures in there that I'd be willing to check out, and there must be more of them in that 83%. No doubt a number of the rings that do sneak into the commercial list are not very well elaborated, either. I think that there is a real commercial opportunity here. A company could do quite well for itself by promoting its compound collection as being more natural-product similar than the competition, with tractable molecules, and a huge number of them unrepresented in any other catalog.

Now all you'd have to do is make these things. . .which would require hiring synthetic organic chemists, and plenty of them. These things aren't easy to make, or to work with. And as it so happens, there are quite a few good synthetic chemists available these days. Anyone want to take this business model to heart?

Comments (13) + TrackBacks (0) | Category: Drug Assays | Drug Industry History | In Silico

July 8, 2009

How Much Does the Drug Industry Spend on Marketing?

Email This Entry

Posted by Derek

Anyone who defends the pharmaceutical industry has to be ready to hear, over and over and over, about how much it spends on sales and marketing versus R&D. This is thought to be a telling point about where the priorities really are. I've addressed this one several times, and my best response is to point out that sales and marketing are actually supposed to bring in more money than you spend on them, and do so more reliably than R&D in the short term.

There's now a very useful paper in Nature Reviews Drug Discovery looking at just this issue. The authors (from three universities in the US and Israel) are looking into the general question of which is the better use of money: put it into R&D for the long term, or promote existing products for the short term? I should make clear at the outset that those two options do line up in that way. R&D expenditures take years to pay off, if ever, given the amount of time that drug development takes. And marketing of a current product had better start paying off in a shorter time frame, because every patented drug is a wasting asset, constantly being eaten into by competition and by its time to patent expiration.

So which makes more financial sense? The authors take their numbers from the Wharton databases on publicly traded drug companies, looking at those with more than $50 million in sales. Using the company stock prices as a measure of value (J. Finance LVI(6), 2431–2456 (2001), I'm giving you references here), they found, in general, that R&D investments have a net positive effect, while increased promotion has a negative effect. (See also Rev. Account Stud. 7, 355–382 (2002), another journal I don't reference much). Both effects are larger for smaller companies, as you might expect, but they held up across the industry. The effect also holds up if you factor out the compensation packages of the top five executives of each company (which is a nice control to run, I have to say). And yes, since you ask, there is a negative effect on stock price that correlates to higher executive compensation, and I'm willing to bet that this effect holds for more than just the drug industry.

Since we're talking about stock prices, which are generally forward-looking, the way to interpret these results is probably that investors expect R&D expenditures to pay off in the long term, but actually expect sales and marketing expenditures to reduce long-term value. If that's so, then why spend money on marketing? The reason the authors propose is just what I'd been talking about: short-term reliability. Drug discovery and development is inherently risky, and promotion of existing products is (at least comparatively) more of a sure thing. Companies engage in a mix of the two to try to even the cash flow out. (And as the authors note, if executive compensation is tied more to short-term performance, then there's an incentive to go with the short-term gains).
[Figure: expenditure trends graph from the Nature Reviews Drug Discovery paper]
In general, though, you'd figure that companies should invest more in R&D. And here's the real kicker: that's exactly what's been happening. As this graph from the paper shows, over the last thirty years expenditures in the Sales, General, and Administrative area have risen only slightly as a per cent of sales. The Cost of Goods Sold category (materials, physical plant, manufacturing facilities, etc.) has gone proportionally down, with an interesting excursion in the mid-1990s. (Note also that this used to be the leading category). And R&D expenditures (again, as a per cent of sales) rose in the 1980s, were flat in the 1990s, and have risen since then. Overall, since 1975, the proportion of money spent on R&D has more than tripled, from 5% to 17%.

This, I hardly need point out, does not fit the narrative of some of the e-mails and comments I get. Some perceptions of the drug industry have us, Back In the Old Days, as spending our money on R&D, only to slimily slide into becoming pure marketing businesses as time has passed, with our recent years being especially disgusting and rapacious. According to these figures, this is at the very least not accurate, and comes close to being the opposite of the truth. Comments are welcome - most welcome, indeed.

Comments (58) + TrackBacks (0) | Category: Business and Markets | Drug Development | Drug Industry History

July 2, 2009

Jargon Will Save Us All

Email This Entry

Posted by Derek

Moore's Law: the number of transistors on a chip doubling every 18 months or so, etc. Everyone's heard of it. But can we agree that anyone who uses it as a metaphor or prescription for drug research doesn't know what they're talking about?

I first came across the comparison back during the genomics frenzy. One company that had bought into the craze in a big way press-released (after a rather short interval) that they'd advanced their first compound to the clinic based on this wonderful genomics information. I remember rolling my eyes and thinking "Oh, yeah", but on a hunch I went to the Yahoo! stock message boards (often a teeming heap of crazy, then as now). And there I found people just levitating with delight at this news. "This is Moore's Law as applied to drug discovery!" shouted one enthusiast. "Do you people realize what this means?" What it meant, apparently, was not only that this announcement had come rather quickly. It also meant that this genomics stuff was going to be discovering twice as many drugs real soon. And real soon after that, twice as many more, and so on until the guy posting the comment was as rich as Warren Buffett, because he was a visionary who'd been smart enough to load himself into the catapult and help cut the rope. (For those who don't know how that story ended, the answer is Not Well: the stock that occasioned all this hyperventilation ended up dropping by a factor of nearly a hundred over the next couple of years. The press-released clinical candidate was never, ever, heard of again).

I bring this up because a reader in the industry forwarded me this column from Bio-IT World, entitled, yes, "Only Moore's Law Can Save Big Pharma". I've read it three times now, and I still have only the vaguest idea of what it's talking about. Let's see if any of you can do better.

The author starts off by talking about the pressures that the drug industry is under, and I have no problem with him there. That is, until he gets to the scientific pressures, which he sketches out thusly:

Scientifically, the classic drug discovery paradigm has reached the end of its long road. Penicillin, stumbled on by accident, was a bona fide magic bullet. The industry has since been organized to conduct programs of discovery, not design. The most that can be said for modern pharmaceutical research, with its hundreds of thousands of candidate molecules being shoveled through high-throughput screening, is that it is an organized accident. This approach is perhaps best characterized by the Chief Scientific Officer of a prominent biotech company who recently said, "Drug discovery is all about passion and faith. It has nothing to do with analytics."

The problem with faith-based drug discovery is that the low hanging fruit has already been plucked, driving would-be discoverers further afield. Searching for the next miracle drug in some witch doctor's jungle brew is not science. It's desperation.

The only way to escape this downward spiral is new science. Fortunately, the fuzzy outlines of a revolution are just emerging. For lack of a better word, call it Digital Chemistry.

And when the man says "fuzzy outline", well, you'd better take him at his word. What, I know you're all asking, is this Digital Chemistry stuff? Here, wade into this:

Tomorrow's drug companies will build rationally engineered multi-component molecular machines, not small molecule drugs isolated from tree bark or bread mold. These molecular machines will be assembled from discrete interchangeable modules designed using hierarchical simulation tools that resemble the tool chains used to build complex integrated circuits from simple nanoscale components. Guess-and-check wet chemistry can't scale. Hit or miss discovery lacks cross-product synergy. Digital Chemistry will change that.

Honestly, if I start talking like this, I hope that onlookers will forgo taking notes and catch on quickly enough to call the ambulance. I know that I'm quoting too much, but I have to tell you more about how all this is going to work:

But modeling protein-protein interaction is computationally intractable, you say? True. But the kinetic behavior of the component molecules that will one day constitute the expanding design library for Digital Chemistry will be synthetically constrained. This will allow engineers to deliver ever more complex functional behavior as the drugs and the tools used to design them co-evolve. How will drugs of the future function? Intracellular microtherapeutic action will be triggered if and only if precisely targeted DNA or RNA pathologies are detected within individual sick cells. Normal cells will be unaffected. Corrective action shutting down only malfunctioning cells will have the potential of delivering 99% cure rates. Some therapies will be broad based and others will be personalized, programmed using DNA from the patient's own tumor that has been extracted, sequenced, and used to configure "target codes" that can be custom loaded into the detection module of these molecular machines.

Look, I know where this is coming from. And I freely admit that I hope that, eventually, a really detailed molecular-level knowledge of disease pathology, coupled with a really robust nanotechnology, will allow us to treat disease in ways that we can't even approach now. Speed the day! But the day is not sped by acting as if this is the short-term solution for the ills of the drug industry, or by talking as if we already have any idea at all about how to go about these things. We don't.

And what does that paragraph up there mean? "The kinetic behavior. . .will be synthetically constrained"? Honestly, I should be qualified to make sense of that, but I can't. And how do we go from protein-protein interactions at the beginning of all that to DNA and RNA pathologies at the end, anyway? If all the genomics business has taught us anything, it's that these are two very, very different worlds - both important, but separated by a rather wide zone of very lightly-filled-in knowledge.

Let's take this step by step; there's no other way. In the future, according to this piece, we will detect pathologies by detecting cell-by-cell variations in DNA and/or RNA. How will we do that? At present, you have to rip open cells and kill them to sequence their nucleic acids, and the sensitivities are not good enough to do it one cell at a time. So we're going to find some way to do that in a specific non-lethal way, either from the outside of the cells (by a technology that we cannot even yet envision) or by getting inside them (by a technology that we cannot even envision) and reading off their sequences in situ (by a technology that we cannot even envision). Moreover, we're going to do that not only with the permanent DNA, but with the various transiently expressed RNA species, which are localized to all sorts of different cell compartments, present in minute amounts and often for short periods of time, and handled in ways that we're only beginning to grasp and for purposes that are not at all yet clear. Right.

Then. . .then we're going to take "corrective action". By this I presume that we're either going to selectively kill those cells or alter them through gene therapy. I should note that gene therapy, though as promising as ever, is something that so far we have been unable, in most cases, to get to work. Never mind. We're going to do this cell by cell, selectively picking out just the ones we want out of the trillions of possibilities in the living organism, using technologies that, I cannot emphasize enough, we do not yet have. We do not yet know how to find most individual cell types in a complex living tissue; huge arguments ensue about whether certain rare types (such as stem cells) are present at all. We cannot find and pick out, for example, every precancerous cell in a given volume of tissue, not even by slicing pieces out of it, taking it out into the lab, and using all the modern techniques of instrumental analysis and molecular biology.

What will we use to do any of this inside the living organism? What will such things be made of? How will you dose them, whatever they are? Will they be taken up though the gut? Doesn't seem likely, given the size and complexity we're talking about. So, intravenous then, fine - how will they distribute through the body? Everything spreads out a bit differently, you know. How do you keep them from sticking to all kinds of proteins and surfaces that you're not interested in? How long will they last in vivo? How will you keep them from being cleared out by the liver, or from setting off a potentially deadly immune response? All of these could vary from patient to patient, just to make things more interesting. How will we get any of these things into cells, when we only roughly understand the dozens of different transport mechanisms involved? And how will we keep the cells from pumping them right back out? They do that, you know. And when it's time to kill the cells, how do you make absolutely sure that you're only killing the ones you want? And when it's time to do the gene therapy, what's the energy source for all the chemistry involved, as we cut out some sequences and splice in the others? Are we absolutely sure that we're only doing that in just the right places in just the right cells, or will we (disastrously) be sticking copies into the DNA of a quarter of a per cent of all the others?

And what does all this nucleic acid focus have to do with protein expression and processing? You can't fix a lot of things at the DNA level. Misfolding, misglycosylation, defects in transport and removal - a lot of this stuff is post-genomic. Are we going to be able to sequence proteins in vivo, cell by cell, as well? Detect tertiary structure problems? How? And fix them, how?

Alright, you get the idea. The thing is, and this may be surprising considering those last few paragraphs, that I don't consider all of this to be intrinsically impossible. Many people who beat up on nanotechnology would disagree, but I think that some of these things are, at least in broad hazy theory, possibly doable. But they will require technologies that we are nowhere close to owning. Babbling, as the Bio-IT World piece does, about "detection modules" and "target codes" and "corrective action" is absolutely no help at all. Every one of those phrases unpacks into a gigantic tangle of incredibly complex details and total unknowns. I'm not ready to rule some of this stuff out. But I'm not ready to rule it in just by waving my hands.

Comments (46) + TrackBacks (0) | Category: Drug Industry History | General Scientific News | In Silico | Press Coverage

June 24, 2009

GSK's Getting Better. Just Ask the CEO.

Email This Entry

Posted by Derek

There are some interesting statements from GlaxoSmithKline CEO Andrew Witty here at Reuters. He admits that morale was completely in the scupper around the place a few months ago, which certainly seems to be true, but says that they're turning things around. To that point, remember all that stuff a few years ago about how GSK's research structure exemplified pretty much everything that a drug company needed to have? Well. . .

"We've really thrown into reverse much of the trend of research organisation that had developed over the last 15 years," Witty said.

Over that time, the drugs industry was a big commercial success but it took a "wrong turn" by deciding that drug discovery was an industrial process based on large-scale application of technologies like genomics, proteomics and combinatorial chemistry.

"These were all supposed to transform productivity yet none of them did. It turns out, in my view, that research is much more of an art than a science," Witty said.

Several thoughts come to mind. First off, I take the point about art versus science, but it's hard to do art on an industrial scale. That, to my mind, has been one of the major problems in all of drug R&D. He's right that the industry keeps seizing on things that promise to take some of the craziness out of the process - but it's not like the temptation isn't still around. We just haven't seen the latest brainwave yet.

But still, over time, some of the random element has decreased. We actually do understand a lot of things better than we used to. We know to look for hERG, to pick one example, and there are others. But these things don't (yet) add to enough of a transformation. Adding more and more knowledge to the pile has to help - I'm certainly not enough of a nihilist to deny that - but it's fair to say that it hasn't helped as quickly and as thoroughly as we might have hoped.

You can find opinions all up and down that spectrum: at one end are the nihilists themselves, who hold that the problems we're trying to solve are (at present) too hard, and what's more, they're likely to remain too hard for the foreseeable future, so you'd better get lucky - and design your research structure to improve your chances of doing that. Moving up from there, you have a lot of people in the middle who see incremental progress, but (with Goethe) worry that "Where there is much light, there is much darkness". Every new advance untangles a few things, but also ends up illustrating how much more we need to know. Opinions in this crowd vary, from pessimists who come close to the first nihilist group, all the way up to optimists who hold out hope that things will start making more sense soon. And past them, you come to the super-optimists, the Kurzweilians who are waiting for the Singularity.

But finally, reading the Witty article, I can't help but imagine an interview in around 2020, with whoever's in charge then talking about how they had to get rid of all that musty old research structure that the previous management team had put in. . .

Comments (19) + TrackBacks (0) | Category: Business and Markets | Drug Industry History

June 17, 2009

The View From Pfizer's Corner Offices

Email This Entry

Posted by Derek

There's a good article from Lee Howard up at The Day (the New London/Groton newspaper) on the changes going on at Pfizer. It's the story according to management, though, which is worth having for its compare-and-contrast uses:

Despite the looming uncertainty, according to company spokesmen, the new research structure has added energy and urgency to the drug-discovery process in Groton. . .

. . .The changes in Groton - seen most plainly in displays of logos the new business units are in the process of choosing - have added drug-development staff and even legal experts to the R&D mix, along with biologists and chemists who typically have worked in close proximity. In the middle of it all sits the chief scientific officer of each business unit, as well as other managers.

The idea is to develop a more realistic idea of a drug's likelihood to succeed at an early stage and then bring it to market quicker if it seems to be working.

I hope that the process of choosing new logos doesn't take too long. You could get a reasonable read on the success of any attempt to remake Pfizer's culture by counting the number of meetings the logo process has required so far.

But I can't make fun of the goals that the company is setting - they're perfectly sensible. The only problem is that they're just what everyone else is trying to do, too, and if it were easy, everyone would be finished doing them by now. The problem with try