In the Pipeline

February 22, 2006

NEJM vs. Its Contributors, Round Two

Posted by Derek

The original "Expression of Concern" editorial over the VIGOR Vioxx trial in the New England Journal of Medicine was an odd enough document already. But today brought an "Expression of Concern Reaffirmed" in the journal, along with replies from the VIGOR authors.

It's going to take some doing to get these folks together, as you'll see. The NEJM's editors, in their "reaffirmation", add a few details to their December 8th expression. Their position is still that there were three heart attacks in the Vioxx treatment group that were not in the data submitted to the journal. And they're not buying the explanation that these took place after the end of the study, either:

"The authors state that these events did occur during the trial but did not qualify for inclusion in the article because they were reported after a "prespecified cutoff date" for the reporting of cardiovascular events. This date, which the sponsor selected shortly before the trial ended, was one month earlier than the cutoff date for the reporting of adverse gastrointestinal events. This untenable feature of trial design, which inevitably skewed the results, was not disclosed to the editors or the academic authors of the study."

Those academic authors (11 of them from seven different countries, led by Claire Bombardier of Toronto) have a reply to all this in the same issue. Regarding those three MI events, they say:

"The VIGOR study was a double-blind, randomized outcomes study of upper gastrointestinal clinical events. We, as members of the steering committee, approved the study termination date of February 10, 2000, and the cutoff date of March 9, 2000, for reporting of gastrointestinal events to be included in the final analysis. Comparison of cardiovascular events was not a prespecified analysis for the VIGOR study. . .the independent committee charged with overseeing any potential safety concerns recommended to Merck that a data analysis plan be developed for serious cardiovascular events. . .As a result, a cardiovascular data analysis plan was developed by Merck. Merck indicated that they chose the study termination date of February 10, 2000, as the cutoff date. . .to allow sufficient time to adjudicate these events. . . (The three events) were neither in the locked database used in the analysis for the VIGOR paper no known to us during the review process. However, changing the analysis post hoc and after unblinding would not have been appropriate."

The authors go on to say that including the three heart attacks does not, in their view, change the interpretation of the safety data. They also take issue with the journal's contention that the three events were deleted from the manuscript, saying that the table of cardiovascular events in the presubmission draft of the paper never included them in the first place.

The two Merck authors on the paper, in a separate letter, make the same point, and also mention that there was an additional stroke in the naproxen-treated group that didn't make the paper for the same reasons. They reiterate that including the three heart attacks wouldn't have changed anything:

". . .The article clearly disclosed that there was a significant difference in the rates of myocardial infarction in the Vioxx and naproxen arms of the study and reported these rates as 0.4 and 0.1, respectively, with a relative risk reported as 0.2. The inclusion of the post-cutoff-date myocardial infarctions changes the Vioxx rate to 0.5 but does not meaningfully change the relative risk or the conclusion that there was a significant difference between the two arms of the study. Indeed, with such a small number of events (which were not a primary end point of the study) - and with such wide confidence intervals around them - it is difficult to imagine that this small numerical change could affect the interpretation of the data."
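For a back-of-the-envelope feel for why the authors say the relative risk barely moves, here's a quick sketch. The patient-year exposure and event counts below are made-up round numbers chosen only to reproduce the quoted 0.4 and 0.1 rates; they are not the actual VIGOR figures.

```python
import math

def rate_ratio(events_ref, py_ref, events_cmp, py_cmp):
    """Crude incidence-rate ratio (comparator vs. reference) with an
    approximate 95% confidence interval taken on the log scale."""
    rr = (events_cmp / py_cmp) / (events_ref / py_ref)
    se_log = math.sqrt(1 / events_ref + 1 / events_cmp)  # large-sample approximation
    return rr, rr * math.exp(-1.96 * se_log), rr * math.exp(1.96 * se_log)

# Assumed illustrative numbers, NOT the real VIGOR counts or exposure:
patient_years = 3000
vioxx_mi, naproxen_mi = 12, 3   # 0.4 and 0.1 events per 100 patient-years

print(rate_ratio(vioxx_mi, patient_years, naproxen_mi, patient_years))
# Now add the three post-cutoff Vioxx MIs (the Vioxx rate moves from 0.4 to 0.5):
print(rate_ratio(vioxx_mi + 3, patient_years, naproxen_mi, patient_years))
```

Run with these toy numbers, the naproxen-to-Vioxx ratio moves from about 0.25 to 0.20, and in both cases the confidence interval spans several-fold in each direction - which is exactly the authors' point.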

Looking at everything together, I'm still coming down on the side of Merck and their academic collaborators in this part of the fight. The post-launch cardiovascular data on Vioxx and its advertising and promotion are worth debating separately, but as for the VIGOR study, I think the NEJM is overreaching. Still, from Merck's viewpoint, I think the damage has already been done. . .

Update: Y'know, it occurs to me that there are a few people who aren't as upset about all this editorial wrangling: the editors of JAMA and the other top-ranked medical journals. They'll be getting some manuscripts that otherwise would have gone to NEJM.

Comments (25) + TrackBacks (2) | Category: Cardiovascular Disease | The Scientific Literature | Toxicology


COMMENTS

1. Daniel Newby on February 23, 2006 2:23 AM writes...

This is not a wise approach for the NEJM. Sure, it is emotionally satisfying to whoever has the bee in their bonnet. Sure, it gets them lots of publicity in the mass media today. But the cost tomorrow may be high. Good researchers will think twice about putting their name on something that might be slagged for no reason. If the editors in the future need to make a subtle statistical point about something important, many readers will be suspicious. They need to pick their battles more carefully.

2. John Johnson on February 23, 2006 7:37 AM writes...

I'm already struggling enough with issues like this, especially in early phase trials. Thing is, in oncology these off-study deaths unfortunately are the norm, and collecting this data and putting it into one meaningful picture is a lot harder than you would think. I really wonder what this case is going to do for safety reporting and analysis in general.

3. Palo on February 23, 2006 12:46 PM writes...

I know that we are sitting here in the Pharma-cheerleading section, but I usually find Derek's comments to be --even if tilted-- balanced and reasonable. Not so when the NEJM is the subject though. I guess it is the memory of Marcia Angell still pinching somewhere where it hurts.
Give me a break, Derek. The presentation of the study was obviously corrupt. There is no way around it. And, by the way, what would the NEJM have to gain from all this if it weren't for the only thing they have on the line, their reputation? Unlike those involved in the VIGOR trial, or some of the cheerleaders already booing, they have no financial interest in the issue. You are right in your JAMA speculation: if the NEJM editorial board were really so out of line, they would have a lot to lose and nothing to gain. I don't buy the 'emotionally satisfying' charge Daniel advances. It is nonsense.

The bottom line still is, as the journal says:
"More than four months before the article was published, at least two of its authors were aware of critical data on an array of adverse cardiovascular events that were not included in the VIGOR article".

Why was the data provided to the FDA but not included in the article? Why? What conceivable reason could there be to present to the medical world the incomplete data? You all know the answer: the doctors who were to be influenced by the 'superior efficacy' of Vioxx read the NEJM, not the FDA website.

It is the NEJM prestige Merck was seeking to validate Vioxx. It is the NEJM prestige that its editors are protecting. Plain and simple.

4. Derek Lowe on February 23, 2006 2:31 PM writes...

Palo, I agree with you that a concern for the journal's prestige is one of the things driving the editorial staff's behavior. But it's that word "critical" in your quote from the editorial that gets me. I think that the authors - both academic and industrial - make a good case that the extra cardiovascular data would not have affected the conclusions of the paper at all.

If that's the case, then the NEJM's point becomes a bit more rarefied. The risk statistics are virtually identical, so it must be the principle of the thing, right? And if they're going to make these stand-on-principle arguments, it seems to me that they'd have had an even bigger reason to complain if the VIGOR study had changed its data workup procedures after the original closure date.

Once you start doing that sort of thing, how do you know when to stop? Well, you don't - which is why there are such things as pre-established cutoffs for data collection.

Ah, one might say, but maybe it's OK to do that if you're courageous and honest enough to include the extra data when it makes your drug look worse? I disagree, naturally, but that argument fails on practical grounds as well: the extra MI events don't really make Vioxx look any worse than it did already.

So no, I don't think that the presentation of the data was corrupt. Corruption would have been changing the data collection plan after the data came in, or leaving out data that would have changed the statistics.

5. tgibbs on February 23, 2006 2:38 PM writes...

I lean toward the explanation that the editors of NEJM are simply statistically illiterate. This is supported by the nonsensical language of the Reaffirmation, e.g. "Thus, the prevention of 65 upper gastro-intestinal events (of which 21 were complicated ― i.e., perforation, obstruction, or severe upper gastrointestinal bleeding) came at the cost of 27 additional serious thromboembolic events in the rofecoxib group," which displays little comprehension of the statistical uncertainties involved in interpretation of this data. The Reaffirmation fails completely to address the authors' telling point that inclusion of the data fails to alter any of the study's conclusions regarding statistical significance, probably because the editors don't really understand the statistical issues involved. Neither is it clear that the editors are cognizant of the fact that arbitrarily extending the close date to include these additional events would have introduced a statistical bias.

The Reaffirmation also fails to address the impropriety of rushing into print with the original expression of concern without first communicating with the authors and obtaining a response to be published in the same issue.

I agree that the NEJM's disgraceful handling of this situation is likely to make researchers think twice before submitting papers to NEJM.

6. Palo on February 23, 2006 2:45 PM writes...

Derek,
Many would likely disagree with your contention that the data would have changed nothing, and obviously the NEJM does. If you are correct, why did they intentionally leave it out? Remember, the data was deleted from a first draft. I also think the cut-off date for data collection is a red herring. It is irrelevant to an academic publication. All that matters is what you knew to be true at the time of publication. There is no rule saying that the results from a clinical trial have to be published in a journal like NEJM, or published in any journal for that matter. If the drug is to be used commercially, the only rules that apply are those of the FDA approval process. The only reason you would publish your study in an academic journal is to inform the medical community of the effect, efficacy and safety of the tested drug. What possible purpose does it serve to provide your audience with incomplete and potentially misleading data? The authors should have included the data in the paper, or at least as a postscript addendum, if their intention was, in any way, to honestly inform healthcare professionals about the drug. Would anyone believe that if the missing data was Vioxx-friendly the authors would have excluded it anyway because of 'cut-off' dates? Please. They clearly behaved in an unethical way that deserves condemnation, not justification. To make matters worse, not only did they not include all the data, the NEJM found that they intentionally deleted it from an earlier draft of the paper. That goes beyond unethical in my view.

7. PS on February 23, 2006 5:10 PM writes...

I agree with Palo that if Merck had some Vioxx friendly data, they would have made every attempt to include it.

And if including the MI data didn't change the conclusions of the study, why not just include it, with a postscript that the events occurred after the cut-off date?

8. Derek Lowe on February 23, 2006 5:22 PM writes...

Palo, have you actually read the correspondence in the NEJM? The authors state that those results were not deleted from the manuscript, and that the editors of the journal are wrong when they contend otherwise, because the three MI events were never in the database used to prepare the paper in the first place. They go into great detail on this point.

You can tell from the tone of the letters that both the academic and industrial authors are extremely upset at the accusations of what would indeed be unethical behavior. . .behavior which they completely deny.

9. Daniel Newby on February 23, 2006 5:41 PM writes...

Paolo said "I also think the cut-off date for data collection is a red herring. It is irrelevant to an academic publication. All it matters is what you knew to be true at the time of publication."

Not true, for two reasons. The first is that what you know depends on analysis, not just data. Studies like this don't just dump a pile of raw numbers out of a database. Instead each individual event is reviewed closely for causation and meaning. For example, if a subject had received an accidental vitamin K overdose (which forces blood to clot for no reason) and then had a heart attack, that event would likely be stricken from the results (with a paragraph to explain why). Careful analysis takes time, which means you can't just add more results at the last minute.

The second issue is 'blinding'. Experience has proven that if patients or analysts know who got which drug, they will deceive themselves. It doesn't matter what their intentions are, or whether the results make the drug look good or bad: knowing skews their thinking. The only way to get scientific results is with a double-blind trial: neither the patients nor the analysts know who got which treatment. This is why research studies always act so proud about being double-blind. Adding more data after unblinding would convert the exercise into yet another unscientific 'open label' study, with little point in being published.

Paolo said "Remember, the data was deleted from a first draft."

That has not been established. For one thing, IIRC the claim was that only the Vioxx numbers had been edited. However the additional data also included a naproxen event. The NEJM position is therefore self-contradictory. For another thing, the file in question was likely under informal control, and the NEJM is unlikely to have professional data forensics skills in-house. It is unwarranted to draw scientific conclusions from the extant file analysis.

10. tgibbs on February 23, 2006 6:02 PM writes...

There is a good statistical reason for having a firm deadline, particularly when events are sparse. I don't know what order these late events occurred, but suppose that late naproxen stroke came in before the CV events. Then Merck could decide to close the door right after that event came in. On the other hand, if a CV event came in first, they might decide to leave the door open a bit longer in the hopes of picking a naproxen event to balance it. These are the sorts of mistakes that even honest researchers can make out of unconscious bias, and which can potentially lead to bogus conclusions. This is particularly a concern late in the study, when researchers are no longer blind to which group is which, so the best way to protect against bias is to decide in advance upon a firm deadline and stick to it.
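A toy simulation makes the mechanism concrete. Nothing in it comes from VIGOR; the event rates, candidate cutoff dates, and trial length are invented purely to show how a data-driven cutoff can skew a ratio even when the two arms are truly identical.

```python
import random

def poisson_event_times(rate, horizon=1.0):
    """Event times of a Poisson process on [0, horizon]."""
    times, t = [], 0.0
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

def one_trial(rate=30, candidate_cutoffs=(0.8, 0.85, 0.9, 0.95, 1.0)):
    drug = poisson_event_times(rate)   # both arms have the same true event rate
    comp = poisson_event_times(rate)

    def ratio_at(t):
        d = sum(1 for x in drug if x <= t)
        c = sum(1 for x in comp if x <= t)
        return d / max(c, 1)

    honest = ratio_at(1.0)                               # prespecified, fixed cutoff
    gamed = min(ratio_at(t) for t in candidate_cutoffs)  # cutoff picked to flatter the drug
    return honest, gamed

random.seed(0)
results = [one_trial() for _ in range(20000)]
print("mean drug:comparator ratio, fixed cutoff:", sum(h for h, _ in results) / len(results))
print("mean drug:comparator ratio, gamed cutoff:", sum(g for _, g in results) / len(results))
```

With the prespecified cutoff the estimated ratio averages roughly 1, as it should for identical arms; letting the cutoff chase the most flattering interim count pulls it noticeably below 1. That, and nothing subtler, is the case for deciding the date in advance and sticking to it.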

11. Palo on February 24, 2006 11:27 AM writes...

Derek, Daniel,
The NEJM said in their first "expression of concern":

In reviewing the diskette, we learned that data on cardiovascular events had been deleted from the manuscript before it was submitted

It is clear that you guys believe Merck, and I believe the NEJM. Merck has a lot to gain from lying; the NEJM Editorial Board has nothing.

Daniel keeps advancing the same technicalities to excuse Merck's behaviour. The bottom line, in my view, still stands: if you publish to inform professionals, you present all you know at the time of publication. He is right in that data alone means nothing without interpretation. But it is arrogant (or suspicious, depending on your bias) for Merck to decide that the 'correct' analysis is the one in which they pick the data to present. Clearly, other people think the missing data is highly relevant. That in itself tells you that it should have been included, at least to have a debate on its meaning.

12. Derek Lowe on February 24, 2006 1:01 PM writes...

Palo, I doubt if we're going to agree on this issue. The thing is, I'm not only believing Merck, I'm also believing almost a dozen academic co-authors from institutions around the world, who are strongly denying that any data were deleted from the manuscript and appear pretty upset by the implication that they were.

13. Palo on February 24, 2006 1:20 PM writes...

Derek,
You seem a rational and reasonable person. I doubt that you really, really believe that the NEJM made up a story about a diskette and a document whose tracked changes show precisely the data that turned out to be true but was not included in the intended final document. I refuse to believe you believe that.
Plus, the journal charges that at least 2 authors knew about it; it did not charge that all the authors knew about it. It is easily conceivable that the ones who did not know are upset. Besides, being upset as a metric for 'truth' is kind of naive... isn't it?

14. Daniel Newby on February 24, 2006 4:03 PM writes...

Palo said "The bottom line, in my view, still stands: if you publish to inform professionals, you present all you know at the time of publication."

If that is your goal, you have to make the study as simple as possible, with a concise summary. Ideally someone should be able to learn the truth just by reading the 'Conclusions' section of the paper.

And that is what Merck did. The paper said plainly and clearly that Vioxx was dangerous (relative to naproxen). The risk fairly leaps off the page. There was no doubt, no cover-up.

Which is why many of us are so surprised. The original paper says "Vioxx is a risk (92% confidence level)" and the NEJM shouts "YOU LIARS! IT WAS ACTUALLY 94.3% CONFIDENCE! SCIENTIFIC FRAUD! COVER UP!"

15. Palo on February 24, 2006 7:08 PM writes...

Daniel,

There are two different points here. One issue is whether the missing data resulted in significant statistical error; the other relates to NEJM's ethical concern about the missing data.

The first issue seems debatable. The second not so much. The first depends on the data itself, the second on the decision by the authors of the VIGOR paper to leave data out of a peer-reviewed publication.

Now, if you want to argue only about the first issue, then before calling the NEJM editors fools, at least make sure you consider all their claims and not only the bits that help your case.

According to the NEJM:

Two tables and a figure that are representative of the data in that document are shown in Supplementary Appendix 1 (available with the full text of this article at www.nejm.org). The data indicate that there were 47 confirmed serious thromboembolic events in the rofecoxib (Vioxx) group and 20 in the naproxen group. The VIGOR article reported 56 upper gastrointestinal events in the rofecoxib group and 121 in the naproxen group. Thus, the prevention of 65 upper gastrointestinal events (of which 21 were complicated ― i.e., perforation, obstruction, or severe upper gastrointestinal bleeding) came at the cost of 27 additional serious thromboembolic events in the rofecoxib group (see Supplementary Appendix 2). Prevention of one complicated gastrointestinal event was offset by the occurrence of about one serious thromboembolic event. Although the information in the internal Merck memorandum3 was reported to the FDA and posted on its Web site three months after publication of the VIGOR article,4 it was not made available to the Journal editors during the manuscript review process. Because these data were not included in the published article, conclusions regarding the safety of rofecoxib were misleading.

http://content.nejm.org/cgi/data/NEJMe068054/DC1/1

Maybe it was, as you say, a way "to make the study as simple as possible, with a concise summary". Maybe it was to save the journal or Merck the extra page costs, who knows.

16. Palo on February 24, 2006 7:48 PM writes...

Maybe I'm not being clear enough. Those who defend the presentation of the VIGOR study seem to put a lot of emphasis on the "data collection plan", pre-declared 'cut-off' dates and such. This seems to me a legalistic and artificial argument. Merck's data collection timeline is completely irrelevant to the journal's peer-review process. As I tried to argue before, there's no requirement for Merck to publish the study in a peer-reviewed journal. It is obvious that for Merck, and for pharmaceutical companies in general, publishing in JAMA or NEJM is part of the marketing effort (which is fine, they are businesses). But if Merck wants to publish in NEJM, it should adhere to the peer-review process, not to its own contrived plan.

17. tgibbs on February 25, 2006 6:49 PM writes...

But if Merck wants to publish in NEJM, it should adhere to the peer-review process, not to its own contrived plan.

I don't understand what you are talking about. One of the jobs of a peer reviewer is to insist upon proper statistical procedures. So if I were asked to review such a manuscript, I would insist upon removal from the analysis of any data that was outside the pre-specified data collection limits. Inclusion of such data has the potential to bias conclusions. Admittedly, in this particular case, inclusion of the additional data does not alter the conclusions in any meaningful way, but it would still have been an error to do so.

18. Jim Hu on February 25, 2006 8:21 PM writes...

Palo,

Speaking for myself, I don't think you're being unclear...I just think you're wrong.

if Merck wants to publish in NEJM, it should adhere to the peer-review process, not to its own contrived plan.

The peer-review process could have rejected the paper based on an inadequate (or, IMHO inadequately described) data collection plan. It didn't, probably because cardiovascular effects were NOT the main point of the paper, but if NEJM had rejected it then I would not have a problem.

There's a big difference between saying that a study does not meet the standards of rigor set by our reviewers and saying that the authors committed scientific misconduct.

On my blog, in an earlier post, I noted that Bombardier et al. were sleazy in how they described the cardiovascular events in the text...without saying anything inaccurate, they obscured the 8% MI for Vioxx vs. ZERO for Naproxen in the aspirin-indicated subpopulation. If that had been clearer, a reviewer might have asked whether that effect is within statistical expectations of the known protective effects of Naproxen. I don't know the answer to this. But that would not be affected by the extra 3 events, which were in the other subpopulation (and, as we have been saying over and over, had no statistically significant effect on the relative risk in that subgroup), and if it should have been reworded or put into a table, then NEJM and its reviewers share some of that responsibility.

IMHO, NEJM is using the statement of concern to obfuscate the fact that they let the paper through in the first place. The review process is not as rigorous as it should be, at NEJM or anywhere else, because the peer reviewers have a hard time scraping up the time to do the job as thoroughly as they should. In comments to Derek's earlier post, various people argued that there is not a plausible legal case against NEJM from the plaintiff's bar. But it isn't clear to me whether a plausible case was needed to spook them into running for cover by piling on wrt Merck.

19. palo on February 26, 2006 4:22 PM writes...

Jim Hu says:
The peer-review process could have rejected the paper based on an inadequate (or, IMHO inadequately described) data collection plan.

It doesn't make any sense. How would the reviewers know that there was extra data not included in the submission? Why would they care at all about Merck's data collection plan? It's nonsense. Merck's plan is irrelevant. A peer-review submission demands that you present all the relevant data you have, not data that fits an artificial deadline.

same with this from tgibbs:
So if I were asked to review such a manuscript, I would insist upon removal from the analysis of any data that was outside the pre-specified data collection limits.

again, nonsense. This is a scientific journal. If you have data that contradicts some of your points, you present it. You don't, you are misleading the audience.

If you claim the extra deaths and the 27 additional serious thromboembolic events are meaningless, it's your interpretation, and very peculiar at that. In any case, it was a reviewer's job to decide if that extra data was meaningless, not yours.

20. tgibbs on February 26, 2006 11:30 PM writes...

Palo says:
again, nonsense. This is a scientific journal. If you have data that contradicts some of your points, you present it. You don't, you are misleading the audience.

Whether the data supports or contradicts the authors' conclusions is a statistical question that can only be evaluated if proper statistical procedures were followed. These include setting a firm end date in advance and sticking to it, so that it is clear that the authors' decision of where to cut off the study could not have been influenced by the data itself.

If you claim the extra deaths and the 27 additional serious thromboembolic events are meaningless, it's your interpretation, and very peculiar at that. In any case, it was a reviewer's job to decide if that extra data was meaningless, not yours.

The meaning of any of the events is a statistical question, dependent upon use of proper statistical procedures, such as firm criteria as to which data are to be included. I have on occasion seen the authors of a paper make comments that were based upon an after-the-fact reanalysis of data that deviated from the original statistical plan. Some reviewers may allow this departure from statistical rigor (although others would not), but it is improper to draw conclusions based upon such an analysis; at most, it can suggest hypotheses to be tested in a future study. But in this particular case, it is difficult to see what more the authors could have said. Inclusion of the data after the cutoff would not have made anything statistically significant that was not significant to begin with--all that it would have done was undermine the statistical validity of the conclusions. It would not even have suggested anything that was not already suggested by the study--namely, that there was reason for concern regarding cardiovascular events.

21. tango on February 27, 2006 6:32 AM writes...

Palo says:
"If you have data that contradicts some of your points, you present it. You don't, you are misleading the audience."

Qualitatively speaking, I don't see how the NEJM's audience was misled by the exclusion of the three extra CV events. Adding the extra events doesn't contradict the observation in the paper that there is a difference in CV events between Vioxx and naproxen. Sure, the authors spun the conclusion to infer that the difference presented in the paper was due to the cardioprotective effects of naproxen. But this fooled neither the cardiologists out there nor the FDA, which added a warning after reviewing the VIGOR data.

22. DRogers on February 27, 2006 7:44 PM writes...

Trying to step back from some of the more technical arguments here, my gut tells me that if I was reporting data on a rare event, I would report all the raw events I had, even if I only used a subset of the raw data in my ultimate statistical analysis. Leaving out raw data seems, well, cheap. Heck, in some papers I've read, the raw data is the only useful thing in it. Damned if I hate it when scientists have already given "raw" data a little rub-down.

(Take this as a personal point of frustration from a scientist forced to review paper after paper with results drawn from data sets that have clearly been pre-processed in some unexplained manner; forcing me to take their analysis, at some level, on faith.)

Given that the "events" we're talking about are patient deaths, I think this marches from the broad expanses of cheap to the swampy marshes of unethical.

Perhaps no results would have been changed if the data had been included; perhaps the FDA would have added the same warning; perhaps no additional lives would have been saved by the realization of the size of the effect given the widespread use of the drug. But I'm glad I'm not one of the authors, because that's the kind of thing that would eat me up.

23. Jack Friday on February 28, 2006 3:07 AM writes...

For me, this is the most telling quote:

As part of our expression of concern, we also pointed out that three myocardial infarctions in the rofecoxib group were not included in the data submitted to the Journal. The authors state that these events did occur during the trial but did not qualify for inclusion in the article because they were reported after a “prespecified cutoff date” for the reporting of cardiovascular events. This date, which the sponsor selected shortly before the trial ended, was one month earlier than the cutoff date for the reporting of adverse gastrointestinal events. This untenable feature of trial design, which inevitably skewed the results, was not disclosed to the editors or the academic authors of the study.

An undisclosed untenable feature!

24. Jim Hu on February 28, 2006 10:14 AM writes...

Palo,

Since you keep telling us what the standards are for scientific journals, I gotta ask: do you have special knowledge about journals? Or are you just talking about what you think journals should do based on your lay POV? You seem awfully confident in pronouncing that anything that disagrees with you is "nonsense". By your data inclusion standard, NEJM should require all clinical trials to include a "note added in proof" listing all the events that happened between the prespecified cutoff dates and when the galleys go to the printer.

Jack Friday,

This is part valid and part spin. The prespecified cutoff date, by definition, determines whether or not the events occurred during the study of cardiovascular events. When NEJM says that the events occurred during the study they're saying that the events happened during the month when cardio was done and gastrointestinal was still gathering data. IMHO, the authors should have made that detail of the study clearer in the paper, and the NEJM and its reviewers should have demanded a clearer methods section, which would have revealed that. Having both reviewed and edited papers for a nonmedical scientific journal, I can understand why that didn't happen.

But if I understand double-blind studies correctly, the database was locked in order for the analysis to start on the cardiovascular events a month earlier...because the independent board told them to do it that way. The authors claim that this was done because analyzing the cardio events would take longer. I have no way to evaluate that claim, but I don't see why it would be unreasonable.

Here's a thought experiment. Suppose the authors had decided to delay submission of the paper in order to synchronize the cutoff dates for cardiovascular and GI parts of the study. The extra 3 events on the Vioxx side would have been included, along with the 1 extra event on the Naproxen side. The relative risk would not have changed in a statistically significant way. I'm betting you guys would be saying "Merck delayed release of study showing Vioxx heart risk"

25. wlm on July 8, 2006 5:50 PM writes...

To Jim Hu:
As I understand it, while the independent board did tell Merck to track cardiovascular events, it did not tell them to set the CV cutoff earlier than the gastrointestinal cutoff.

That was their own idea. And it's not obvious to me that evaluating CV events is more difficult than evaluating GI events (though if it is, that could justify the earlier cutoff date).

TRACKBACKS

Listed below are links to weblogs that reference NEJM vs. Its Contributors, Round Two:

Given my earlier post, readers will not be surprised that I'm with Derek Lowe and the others in the "Pharma-cheerleading section" on the duelling statements from the editors (pdf) and the authors (pdf) regarding Merck's VIGOR paper. NEJM is standi... [Read More]

Tracked on February 24, 2006 1:55 AM

Does extra data always get you closer to the truth? from Adventures in Ethics and Science
As expected, Derek Lowe has a thoughful post (with a very interesting discussion going on in the comments) about the latest "Expression of Concern" from the New England Journal of Medicine about the VIGOR Vioxx trial. To catch you up... [Read More]

Tracked on February 24, 2006 4:26 PM
