Note: a follow-up post to this one can be found here.

I’ve had a deluge of emails asking me about this article from Slate on the costs of drug research. It’s based on this recent publication from Donald Light and Rebecca Warburton in the London School of Economics journal Biosocieties, and it’s well worth discussing.

But let’s get a few things out of the way first. The paper is a case for the prosecution, not a dispassionate analysis. The authors have a great deal of contempt for the pharmaceutical industry, and are unwilling (or unable) to keep it from seeping into their prose. I’m tempted to reply in kind, but I’m supposed to be the scientist in this discussion. We’ll see how well I manage.

Another thing to mention immediately is that this paper is, in fact, not at all worthless. In between the editorializing, they make some serious points, and most of these are about the 2003 Tufts (DiMasi) estimate of drug development costs. This is the widely-cited $802 million figure, and the fact that it’s widely cited is what seems to infuriate the authors of this paper the most.

Here are their problems with it: the Tufts study surveyed 24 large drug companies, of which 10 agreed to participate. (In other words, this is neither a random nor a comprehensive sample.) The drugs used to generate the study’s numbers were supposed to be “self-originated”, but since we don’t know which drugs they were, it’s impossible to check this. And since the companies reported their own figures, these would be difficult to verify even if they were made available drug-by-drug (which they aren’t). Nor can anyone be sure that variations in how companies assign costs to R&D haven’t skewed the data as well. We may well be looking at the most expensive drugs of the whole sample; it’s impossible to say.

All of these are legitimate objections – the Tufts numbers are just not transparent. Companies in any industry are not willing to completely spread their books out for outside observers, so any of these estimates are going to be fuzzy. Light and Warburton then move on to some accounting issues, specifically the cost-of-capital adjustment that took the Tufts estimate for a new drug from roughly $400 million to $800 million. That topic has been debated around this blog before, and it’s important to break that argument into two parts.

The first one is whether it’s appropriate to consider opportunity costs at all. I still say that it is, and I don’t have much patience for the “argument from unfamiliarity”. If you commit to some multi-year use of your money, you really are forgoing what you could have earned with it otherwise. You’re giving it up – it’s a cost, whether you’re used to thinking of it that way or not. But the second part of the argument is, just how much could you have earned? The problem here is that the Tufts study assumes 11% returns, which is just not anywhere near realistic. Mind you, it’s on the same order of fantasy as the returns that have been assumed in the past inside many pension plans, but we’re going to be dealing with that problem for years to come, too. No, the Tufts opportunity cost numbers are just too high.
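To see how much that return assumption matters, here’s a rough back-of-the-envelope sketch in Python. The 12-year timeline and the spending profile are my own illustrative assumptions, not the Tufts methodology or either paper’s actual figures – the only point is to show what compounding a stream of out-of-pocket outlays forward to approval does at different cost-of-capital rates.

# Back-of-the-envelope only: compound each year's hypothetical R&D outlay
# forward to the approval date at a given cost of capital.
# The spending profile and 12-year timeline are illustrative assumptions.

def capitalized_cost(annual_outlays_musd, rate):
    """Compound each year's outlay (in $M) forward to the end of the program."""
    years = len(annual_outlays_musd)
    return sum(
        outlay * (1 + rate) ** (years - 1 - i)
        for i, outlay in enumerate(annual_outlays_musd)
    )

# Hypothetical out-of-pocket spend ($M per year) over a 12-year program; totals $400M
outlays = [20, 25, 30, 35, 40, 40, 40, 40, 35, 35, 30, 30]

print(f"Out-of-pocket total: ${sum(outlays):.0f}M")
print(f"Capitalized at 11%:  ${capitalized_cost(outlays, 0.11):.0f}M")  # roughly $730M
print(f"Capitalized at 6%:   ${capitalized_cost(outlays, 0.06):.0f}M")  # roughly $550M

The particular numbers don’t matter; what matters is that the capitalized total is extremely sensitive to the rate you choose, which is exactly why the 11% assumption deserves the scrutiny.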

Then there’s the tax situation. I am, I’m very happy to say, no expert on R&D tax accounting. But it’s enough to say that there’s arguing room about the effects of the various special tax provisions for expenditures in this area, and the picture is complicated greatly by different treatment in different parts of the US and the world. The Tufts study does not reduce the gross costs of R&D by tax savings; Light and Warburton argue that it should. Among other points, they argue that the industry is trying to have it both ways – that cost-of-capital arguments make R&D expenditures look like a long-term investment, while for tax purposes, many of these are deductible each year as more of an ordinary business expense.
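The arithmetic behind that dispute, at least, is simple to lay out. Here’s a minimal sketch with purely illustrative numbers (and with the assumption, itself arguable, that the company has enough taxable profit to use the deduction in the same year):

# Purely illustrative numbers, not figures from either paper.
# If R&D spending is deductible as an ordinary business expense, each dollar
# spent reduces taxable income, so the net after-tax cost per dollar is (1 - rate).

gross_rd_cost_musd = 400.0   # hypothetical out-of-pocket R&D figure, $M
corporate_tax_rate = 0.35    # rough US statutory rate at the time; varies by jurisdiction

net_after_tax_musd = gross_rd_cost_musd * (1 - corporate_tax_rate)

print(f"Gross R&D cost:               ${gross_rd_cost_musd:.0f}M")
print(f"Net cost after tax deduction: ${net_after_tax_musd:.0f}M")  # $260M under these assumptions

Whether it’s legitimate to net that savings out of the reported cost of a new drug – as Light and Warburton do, and the Tufts study doesn’t – is the part that’s actually in dispute.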

Fine, then – I’m in agreement, on general principles, with Light and Warburton when they say that the Tufts study estimates are hard to check and likely too high. But here’s where we part company. Not content to make this point, the authors turn around and attempt to replace one shaky number with another. The latter part of their paper, to me, is one attempt after another to push their own estimate of drug R&D costs into a world of fantasy. Their claim is that the median R&D cost for a new drug is about $43 million. This figure is wrong.

For example, they have total clinical trial and regulatory review time dropping (taken from this reference – note that Light and DiMasi, lead author of the Tufts study, are already fighting it out in the letters section). But if that’s true, why isn’t the total time from discovery to approval going down? I’ve been unable to find any evidence that it is, and my own experience certainly doesn’t make me think that the process is going any faster.

The authors also claim that corporate R&D risks are much lower than reported. Here they indulge in some rhetoric that makes me wonder if they understand the process at all:

Reports by industry routinely claim that companies must test 5000-10000 compounds to discover one drug that eventually comes to market. Marcia Angell (2004) points out that these figures are mythic: they could say 20,000 and it would not matter much, because the initial high-speed computer screenings consume a small per cent of R&D costs. . .

The truth is, even a screen of 20,000 compounds is tiny. And those are real, physical compounds, not “computer screenings”. It’s true, though, that high-throughput screening is a small part of R&D costs. But the authors are mixing up screening and the synthesis of new compounds. We don’t find our drug candidates in the screening deck – at least, not in any project I’ve worked on since 1989. We find leads there, and then people like me make all kinds of new structures – in flasks, dang it, not on computers – and we test those. Here, read this.

The authors go on to say:

Many products that ‘fail’ would be more accurately described as ‘withdrawn’, usually because trial results are mixed; or because a company estimates that the drug will not meet their high sales threshold for sufficient profitability. The difference between ‘failure’ and ‘withdrawal’ is important, because many observers suspect that companies withdraw or abandon therapeutically important drugs for commercial reasons. . .

Bring out some of those observers, then! And bring on the list of therapeutically important drugs that have been dropped out of the clinic just for commercial reasons. Please, give us some examples to work with here, and tell me how the disappointing data that the companies reported at the time (missed endpoints, tox problems) were fudged. Now, I have seen a compound fall out of actual production because of commercial reasons (Pfizer’s Exubera), but that was partly because it didn’t turn out to be as therapeutically important as the company convinced itself that it would be.

And here’s another part I especially like:

Company financial risk is not only much lower than usually conveyed by the ‘1 in 5000’ rhetoric, but companies spread their risks over a number of projects. The larger companies are, and the more they merge with or buy up other companies, the less risk they bear for any one R&D project. The corporate risk of R&D for companies like Pfizer or GlaxoSmithKline is thus lower than for companies like Intel that have only a few innovations on which sales rely.

Well, then. That means that Pfizer, as the biggest and most-merged-up drug company in the world, must have minimized its risk more than anyone in the industry. Right? And they should be doing just fine by that? Not laying people off right and left? Not closing any huge research sites? Not wondering frantically how they’re going to replace the lost revenue from Lipitor? Not telling people that they’re actually ditching several therapeutic areas completely because they don’t think they can compete in them, given the risks? Not announcing a stock buyback program, because they apparently (and rather shamefully) think that’s a better use of their money than putting it back into more R&D? I mean, how can Intel be doing better than that? It’s almost like chip design is a different sort of R&D business entirely.

Well, this post is already too long, and there’s more to discuss in another one, at least. But I wanted to add one more argument from economic reality, an extension of those little questions about Pfizer. If the cost of R&D for a new drug really were $43 million, as Light and Warburton would have it, and the financial and tax advantages so great, why isn’t everyone pouring money into the drug industry? Why aren’t VC firms lining up to get in on this sweet deal? I mean, $43 million for a drug, you should be able to raise that pretty easily, even in this climate – and then you just stand back as the money gushes into the sky. Don’t you?

Why are drug approval rates so flat (or worse)? Why all the layoffs? Why all the doom and gloom? We’re apparently doing great, and we never even knew.
