May 31, 2006
As Merck Caroms Off Another Tree
You have to figure that Merck is getting tired of restating and re-explaining its Vioxx numbers. I certainly am. The only people who aren't fed up with it, I'm sure, are the hordes of lawyers for the plaintiffs. They're munching popcorn and waving pom-poms as Merck staggers around in circles.
The latest episode is a statistical mixup in the APPROVe trial, which I last wrote about here. In the NEJM publication of the results and in Merck's submission of the data to the FDA, the company specified log(time) as the time variable in its primary statistical method. But the statistical tests actually reported in the paper used a model with plain old linear time instead. Merck says that "The results of diagnostic steps specified in that data analysis plan indicate that the linear test is an appropriate method to assess changes in the relative risk over time", although they'd surely rather not have to backtrack and make that argument.
This issue affects the measurement of the change in relative risk over time, not the magnitude of the risk itself. Merck's taking pains to point out that the overall magnitude of those relative risks was described correctly. That's fine, I guess, as far as it goes. The problem is, Merck has already made a big deal out of that change in risk with time, namely the claim that patients weren't at increased risk unless they'd been taking Vioxx for at least 18 months. So this is, unfortunately for them, a very relevant issue.
What does the difference come to? For the comparison of risk levels before and after the 18-month threshold, Merck reported a p-value of 0.01 using linear time, but if you run the method the way it's actually outlined in the paper (log time), you get p = 0.07, which is certainly worse. In my experience, you start losing your audience at p-values of 0.03 or 0.05, and that's what seems to be happening. When Merck says that this error does not affect the conclusions of the study, they're only partly correct. What it affects is the believability of the conclusions, and once again, the revision makes things look worse for them.
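To see why the choice of time scale matters at all, here's a toy sketch of the general idea (this is an illustration of the principle, not a reconstruction of Merck's actual APPROVe analysis): tests for a change in relative risk over time boil down to checking whether model residuals trend with some function g(t) of time, and picking g(t) = t versus g(t) = log(t) weights the same data differently, so the same dataset can land on different sides of a significance threshold. All the data below are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy setup: one "residual" per event, observed at event time t (in months),
# with an effect built in that only emerges after month 18 -- loosely echoing
# the 18-month threshold in the Vioxx discussion.
n = 500
t = rng.uniform(1, 36, size=n)              # simulated event times, 1-36 months
drift = np.where(t > 18, 0.3, 0.0)          # late-emerging effect
resid = drift + rng.normal(0, 1, size=n)    # residuals = effect + noise

# Time-trend test: correlate residuals with g(t). The choice of g() is the
# whole story -- linear time and log(time) stretch the axis differently, so
# the identical residuals can produce different p-values.
_, p_linear = stats.pearsonr(resid, t)          # g(t) = t
_, p_log = stats.pearsonr(resid, np.log(t))     # g(t) = log(t)

print(f"p-value, linear time: {p_linear:.4f}")
print(f"p-value, log(time):   {p_log:.4f}")
```

The point isn't which transform is "right" here; it's that the analysis plan has to commit to one before the data come in, which is exactly what the mismatch between the paper's stated method and its reported tests calls into question.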
Honestly, guys. What's with you these days?
Category: Cardiovascular Disease