And while we're on the subject of clinical trials, and the headaches associated with them, there's a neat little article over at Slate on the subject. Darshak Sanghavi from UMass does a good job of explaining the surrogate-endpoint problem in clinical research, relating it to reality TV:
. . .In the federal Multimodal Treatment Study, hundreds of kids with ADHD, whose families were desperate enough to enroll them in a randomized study, entered a well-funded and highly supervised National Institute of Mental Health program complete with specialized therapy, regular evaluation by developmental experts, and careful drug prescription—a setup that's about as realistic as a date on The Bachelor. Within that very unusual, closely monitored environment, as reported in 1999, stimulant medications caused modest improvement after about a year. In response, use of these products surged nationwide, and Ritalin and its peers became household brands. But in March, the researchers described what happened after the lights went out. In their subsequent years in the real world, the drug-treated kids ultimately ended up no better off than the others.
Epidemiologists call this the problem of "surrogate endpoints," and it's no surprise to fans of reality television. Garnering the greatest number of text-messaging votes after a brief performance doesn't always mean you'll be a successful pop star; winning the final rose after an on-air courtship doesn't mean you'll have a happy marriage; and getting higher scores on a simple rating scale of attention-deficit symptoms doesn't mean you'll later succeed in school. In medicine, this problem happens all the time.
He doesn't shy away from some of the big surrogates in the clinical world, the biggest of which is cholesterol levels. That one, as he says, is at least considered a validated marker (with some relation to real-world mortality and morbidity), but there's plenty of room to argue about that, too. Ask Gary Taubes, who has a lot of provocative things to say about the whole low-fat idea. And if even that marker is still worth arguing over, what about the less validated endpoints?
In the end, I agree with Sanghavi that we really don't have any good alternatives yet. The real endpoints, in most cases, just take too long to measure. No one can finance a twenty-year clinical trial, and no one would put up with one even if it were feasible. We're stuck with what we have, and we just have to make it work the best we can.