I've already had some reader mail (see here) about this article in today's New York Times. It starts out looking like a real pharma-bashing exercise. Up to a point, it is - and up to a point, it's deserved, too. But in the end it's a more subtle piece, not that you'd guess that from the opening paragraphs. (I have my own solution to the problem the article raises, and it will bring joy to no one. Read on.)
The issue is comparability of drugs, especially drugs with the same broad mechanism of action. Look at all the statins or antiinflammatories on the market: is there one that's better than the others? Of course, if you listen to the companies that make them and promote them, the answer is clear. Their product is best! But, as in any other industry, that's not the most reliable guide.
The article uses the example of two marketed forms of the protein erythropoietin, one from Amgen, and one from Johnson & Johnson. J&J's product is about one-third the cost of Amgen's. Is there any reason to pay for the more expensive option? Medicare has asked the National Cancer Institute to run a study to answer that question, but (as the Times points out early and often) there is a provision in the latest Medicare legislation that keeps the program from even using such evidence of functional equivalence in its payment decisions. As you'd imagine, Amgen is arguing that this provision makes the planned Medicare/NCI comparison study a moot point. Why compare?
This would seem like an easy call: the drug companies are slamming the door on something that might cut into profits. Hey, I work here, and I'm sure that was the motivation, too. But I should add the standard comparisons to other industries at this point, and note that car makers are not required to prove that their latest models actually work better than the older ones, or better than the competition's. Nikon doesn't have to run head-to-head trials with Canon, nor Gateway with Dell.
I like those examples, but I realize that there are some other considerations. For one thing, we're talking about public funds here, right? Partly, yes, although the managed-care corporations have a big interest in this, too. I'd add that the government spends a lot of money on goods and services that are not required to be comparison tested (but are selected on the basis of lowest bid). We'll get back to that topic in a couple of paragraphs. The other big factor is that my car and computer comparisons involve discretionary purchases. Health care is treated differently. It's an emotional issue, a life-and-death issue, and it's always going to be held to a different standard than other businesses.
So, let's test! But as the article makes clear, it's not as easy to test these things as you'd think:
. . .Rarely are such studies able to answer all the most important questions. The National Cancer Institute has been mulling the appropriate design for the Aranesp-Procrit trial for nearly two years and will probably need another year before starting the test. . . In the end, more than one trial may be needed, Dr. Feigal (of NCI) said.
Dr. Feigal declined to estimate the cost or size of the eventual trial or trials, but similar tests have cost millions of dollars. Indeed, for comparative trials to be the size needed to measure true differences between drugs, they generally need to be large, lengthy and expensive.
Indeed they do. The article goes on to talk about the hypertension drug comparison study that got such play in the media a few months ago - not least from the New York Times itself. It hasn't settled the question, though. There are still real doubts about which therapy is most effective (for one thing, because patients in the study took only one type of drug, although in the real world combination treatment is common). This was a huge study already, and adding arms to assess combination therapies would have bulked it up considerably.
Still, I'm in favor of doing some head-to-head tests, because I think that there are several therapies out there that don't offer much for their price. (I'm looking at you, Nexium!) Here's my proposal - and yes, I'm going to go ahead and treat the drug industry unlike any other. If a company wants to bring out a me-too therapy, it will be required to show evidence of whatever factor differentiates it from the existing agents. The company gets to choose the battlefield: More efficacy? Quicker onset? Fewer follow-up visits to the doctor? Whatever. Pick a reason you're going to promote the drug, and come up with data to back it up. I think we'd end up with fewer me-toos on the market, but we'd lose fewer of them than many critics might think. Many times, drugs that look the same can indeed act differently. Admittedly, it would take some careful clinical work to bring some of those differences out.
This change would require a major shift at the FDA. For existing therapeutic modes, you'd need to switch at some point from placebo-controlled trials to competition-controlled trials. Perhaps you could run an initial test-the-water placebo control (after all, these are drugs that have a high chance of working), and from then on run versus the competition. There are complications - which competitor to test against, for example. But it's possible to do, and it's an idea that has been talked about for a long time.
And who's going to pay for all this? Well, you are (if you're a patient, that is). Believe me, we're going to pass those costs on, and pronto. Raise the regulatory barrier, pay more money: it's a law of nature. And the lost revenue from the me-too drugs, which have higher chances of clinical success (but still aren't sure things!), will be passed on, too. I think that there are still savings to be realized here - but they're not going to be as big as they seem.