Here's a topic that's come up here before: for a new cancer drug, how much benefit is worthwhile? As it stands, we approve things when they show a statistically significant difference versus standard of care (with consideration of toxicology and side effects). But should our standards be higher?
That's what this paper in the Journal of the National Cancer Institute is proposing. The authors look at a number of recent Phase III trials for metastatic solid tumors. It's a tricky business:
When designing a randomized phase III clinical trial, the investigators must specify in the protocol the difference (δ) in the primary endpoint between experimental and control groups that they aim to detect or exclude (24). The number of patients to be recruited and the duration of the study will depend on the value of δ; increasing the sample size will allow the detection or exclusion of smaller values of δ. Ideally, trials should be designed such that δ represents the minimum clinically important difference, taking into account the tolerability and toxicity of the new treatment, that would persuade oncologists to adopt the new treatment in place of the standard treatment. Of course, the opinions of oncologists as to what constitutes a minimal important value of δ will vary, but a reasonable consensus can be reached by seeking the opinions of oncologists who manage a given type of cancer. For example, an increase in median survival by less than 1 month for patients with advanced-stage cancer would not be regarded by most as clinically important, unless the new agent had less toxicity than standard treatment, whereas an improvement of median survival by greater than 3 months for a drug that was reasonably well tolerated would usually be accepted as clinically important.
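To put rough numbers on that sample-size tradeoff, here's a back-of-the-envelope sketch using Schoenfeld's approximation for the number of deaths a log-rank test needs. The hazard ratios below are hypothetical conversions of the quote's 1-month and 3-month survival gains (on an assumed 6-month control median, under an exponential-survival assumption), not figures from the paper:

```python
import math

def required_events(hr, z_alpha=1.96, z_beta=0.84):
    """Schoenfeld's approximation: number of deaths needed to detect a
    hazard ratio `hr` with a two-sided log-rank test at alpha = 0.05 and
    80% power, assuming 1:1 randomization. The z-values are the standard
    normal quantiles for alpha/2 = 0.025 and for the desired power."""
    return math.ceil(4 * (z_alpha + z_beta) ** 2 / math.log(hr) ** 2)

# Under an exponential model, a 3-month bump on a 6-month median is
# HR = 6/9 ~ 0.67; a 1-month bump is HR = 6/7 ~ 0.86.
print(required_events(0.67))  # a couple hundred deaths
print(required_events(0.86))  # well over a thousand
```

The point the quote is making falls out directly: halving the survival benefit you're trying to detect multiplies the required trial size several-fold, which is why δ has to be pinned down in the protocol before anyone is enrolled.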
And the problem is, given the costs of some of these drugs versus their benefits, you run the risk, in the end, of paying too much for too little. I know that people say that you can't put a cost on a human life, but that's probably not true when you're talking about an entire economy. As the article points out, the rough estimate is that the developed world can support expenditures of up to roughly US $100,000 per year of life gained, but past that, we're into arguable territory. (If someone wants to spend more out of their own pocket, that's another matter, naturally, but at these levels we're usually talking about public and private insurance.)
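For a sense of where that threshold bites, a quick bit of arithmetic. The drug cost and survival figures here are made up for illustration; only the $100,000-per-life-year figure comes from the article:

```python
THRESHOLD = 100_000  # rough developed-world figure, $ per year of life gained

def cost_per_life_year(treatment_cost, survival_gain_months):
    """Cost per life-year gained: total cost of a course of treatment
    divided by the median survival benefit expressed in years."""
    return treatment_cost / (survival_gain_months / 12)

# A hypothetical $40,000 course of therapy that buys one extra month:
print(cost_per_life_year(40_000, 1))  # $480,000 per life-year
# The same drug would need roughly a 5-month benefit to hit the threshold:
print(cost_per_life_year(40_000, 5))  # $96,000 per life-year
```

In other words, at the prices modern oncology drugs actually command, a one-month median benefit blows past that threshold several times over.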
The benefits can indeed be marginal, and you have to look at the statistics carefully so as not to be misled:
. . .several trials showed a statistically significant difference in a major outcome measure between the experimental and control groups, but the difference in outcome was of lower magnitude (eg, hazard ratio was closer to one) than that specified in the protocol. For example, the clinical trial that led to approval of erlotinib for treatment of pancreatic cancer was designed to detect a relative risk reduction of 25% (HR ≤ 0.75), but the best estimate of hazard ratio from the trial showed a relative risk reduction of 18% (HR = 0.82, 95% confidence interval = 0.69 to 0.99). The difference was statistically significant (P = .038), but the median survival differed by only 10 days.
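To see how a hazard ratio that marginal can still clear the significance bar, here's a sketch using the usual normal approximation for the log hazard ratio, where the standard error is about 2/√(number of deaths) under 1:1 allocation. The event counts below are illustrative, not the actual trial's:

```python
import math

def approx_p_value(hr, events):
    """Rough two-sided p-value for an observed hazard ratio, using the
    normal approximation se(ln HR) ~ 2/sqrt(events) for 1:1 allocation."""
    z = abs(math.log(hr)) / (2 / math.sqrt(events))
    # Two-sided p-value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# HR = 0.82 is nowhere near significant in a small trial...
print(approx_p_value(0.82, 100))  # p ~ 0.3
# ...but crosses p < .05 once you've accumulated about 400 deaths:
print(approx_p_value(0.82, 400))  # p < .05
```

That's the mechanism behind the quote: run a big enough trial and an 18% relative risk reduction becomes statistically significant, even while the absolute median survival gain is measured in days.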
What happens is that the trials are (understandably enough) designed to detect the minimum difference that regulatory authorities are likely to find convincing enough for approval of the drug. And the FDA has generally set the bar at "anything that's statistically significant for overall survival". These authors (and others) would like to see that bar raised. They're calling for trials to be designed not just to reach a statistically significant P value, but to demonstrate some sort of meaningful clinical benefit - because it's become clear that you can have the first without really achieving the second.
I think that might be a good idea, whether or not you buy into that cost-per-year-of-life figure. At this point, I think it's fair to say that we can come up with drugs that provide some statistical measure of efficacy, given enough effort in the clinic, for many kinds of cancer (although certainly not all of them). But how many add-a-month-maybe therapies do we need? Not everyone's convinced, though:
Wyndham Wilson, a lymphoma researcher at the National Cancer Institute in Bethesda, Maryland, argues that the proposed clinical endpoints are somewhat arbitrary. “What constitutes a clinically meaningful difference? Six months is obvious, but where do you cut the line?” What's more, he adds, simply focusing on median responses often ignores important outlier effects that could merit approval for an experimental drug. “The difference in overall survival may not be great, but it may be driven by a great benefit to a small group,” he says.
Problem is, it's often quite difficult to figure out who that small group might be, and just treat them, instead of treating everyone and hoping for the best. And there's always the argument that these therapies are stepping stones to more significant improvements, but I wonder about that. My impression of oncology research has always been more like "OK, this looks reasonable. Lots of these tumors have UVW upregulated; let's make a UVW inhibitor. (Years later): Hmm, that's disappointing. Our UVW inhibitor doesn't seem to do as much as you'd think it should. But now it's been found that XYZ looks like it's necessary for tumor growth; let's see if we can inhibit it. (Years later): Hmm, that's not as big an effect as you would have thought, either, is it? Seems to help a few people, but it's hard to say who they'll be up front. How's the JKL antagonist coming along? No one's tried that yet; looks like a good cell-division target. . ."
It's just sort of one thing after another - that one didn't work so well, neither did that one, this other one and these three together seem to be a bit better, but not always, and so on. Would we learn as much, or nearly so, just from the earlier clinical work on such compounds as opposed to taking them to market? And although you can't deny that there's been incremental progress, I'm not sure what form it's taking. It's very likely that the answer isn't to keep turning over mechanistic ideas until we find The One That Really Truly Works - cancer is a tough enough (and varied enough) disease that there probably isn't going to be one of those.
My guess is that meaningful cancer success will come from combinations of therapies that we mostly don't even have yet. I think that we'll need to hit several different mechanisms at the same time, but that some of what we'll need to hit hasn't even been discovered. And on top of that, each patient presents a slightly different problem, and ideally would receive a more customized blend of therapies (not that we know how to do that, either, in most cases).
What I'm saying is that we'll probably need combinations of things that work better than most of what we already have, and that these will stand out enough in clinical trials that we'll know they're worth developing. As it stands, though, companies see hints here and there in the clinic, enough to run a Phase III trial, and if it's large enough and tightly controlled enough, they see enough efficacy to get things through the FDA and onto the market. Would we be better off not proceeding with the marginal stuff, and putting the significant amounts of money into things that stand out more? Or would that choke off the market too much, since we mostly end up making marginal things anyway (damn it all), leaving no one able to keep going long enough to find the good stuff? It's a hard business.