

In the Pipeline


January 19, 2006

But His Name Lives On. . .


Posted by Derek

I currently have a piece up on the Medical Progress Today site about what an 18th-century minister has to offer modern clinical trial design. (Statistics groupies will have already guessed the subject matter from that clue!)

Comments (10) + TrackBacks (0) | Category: Clinical Trials


COMMENTS

1. John Johnson on January 20, 2006 7:51 AM writes...

You know, you might have to watch your step on certain university campuses now. Your alma mater would probably welcome you back with open arms, but go 10 miles south to mine, and you might find yourself being followed.

Seriously, though, frequentist methods seem to work very well in Phase III trials (at least in the efficacy portion), but not so well in Phase I/II and in the safety portions of Phase III trials. Specifically, analysis of AE data (and lab data) needs some serious help, and frequentist methodology has been rather silent on the topic. Yeah, traditionalists like me can yammer on about how Bayesians can just pick their conclusions and "prove" them, but the fact of the matter is that we don't yet offer a very good alternative for safety data, at least the last time I checked.


2. John Johnson on January 20, 2006 8:00 AM writes...

Oh yeah, a couple more comments on this. As far as modifying trial design during the trial goes, frequentist methodology does have a class of designs for this. We call it "group sequential design," and while the FDA has encouraged its use when appropriate, it's still having a hard time catching on. In some cases it, too, is viewed with suspicion, even though the sequential probability ratio test has been around for over half a century and has well-established properties that make it a good tool in any statistician's toolbox. However, it is a risk, and it does involve running interim analyses, which end up being rather expensive (and you have to handle the information that comes out of them like a hot potato or risk angering several regulatory agencies). You can cut your sample size way down, or you can end up paying for more patients. The decisions that come out of each interim analysis are limited, though:

1. "Stop now, declare efficacy."
2. "Stop now, declare failure."
3. "Continue to next interim analysis."

There are designs where you can, instead, choose to recalculate sample sizes, but I never recommend these unless there is a very compelling reason to do so and nothing else will do.
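
To make that three-way decision concrete, here's a minimal sketch of the interim-look logic (Python; the boundary values are placeholders for illustration, not from any real alpha-spending plan):

    def interim_decision(z_stat, efficacy_bound, futility_bound):
        # One interim look in a group sequential trial: three possible outcomes.
        if z_stat >= efficacy_bound:
            return "Stop now, declare efficacy."
        if z_stat <= futility_bound:
            return "Stop now, declare failure."
        return "Continue to next interim analysis."

    # Hypothetical boundaries for a single look (placeholders only):
    print(interim_decision(z_stat=2.1, efficacy_bound=2.8, futility_bound=0.0))
    # -> Continue to next interim analysis.

In a real design, the efficacy and futility boundaries would be pre-specified for each look so that the overall Type I error stays controlled.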


3. david lilienfeld on January 20, 2006 9:24 AM writes...

There's a fundamental problem with Bayesian analyses (and the reason no one will go to FDA with a completely Bayesian database anytime soon): the selection of the prior and the posterior are as important to the result as the actual data. One cannot replicate a Bayesian database, since by definition the prior and posterior will have changed with each additional study.

The situation with safety data is no better. As a drug safety physician, I'd have major qualms about using Bayesian analyses--replication is a problem, and so is interpretation. Whatever the challenges of the frequentist approach, at least its results are understandable and acceptable to most reasonable people. I don't think that would apply to a Bayesian-based analysis. After all, for the clinician, a Bayesian approach has no meaning.


4. Derek Lowe on January 20, 2006 9:29 AM writes...

David, that's a key point, all right. I think that a Bayesian design would be a good fit for a second- or third-in-class drug, where you'd have a good chance of a meaningful prior, but first-in-class would involve a lot of arguing.

Or, as John J. suggests, perhaps it's best used in Phase I for dose-finding and safety. I actually didn't know about group sequential design - it seems as if all these recalculate-on-the-fly designs are computationally intensive, aren't they?


5. Still Scared of Dinosaurs on January 20, 2006 9:50 AM writes...

One problem with making trials use patients more efficiently is that the goal of some trial designs is to use patients inefficiently. I remember tweaking the designs of 2 Phase 3 trials until they each got up to 500 patients, because we needed a safety database of 1500 patients to meet ICH guidelines and the company didn't want to start more than two trials to get there. If it weren't for the FDA "two well-controlled trials" paradigm, we probably would have been told to run one trial.
An unfortunate result was that we did end up having to run a number of other trials, and in each of those the pressure reverted to using as few patients as possible to minimize the cost. If management had been more receptive to creative thinking about how to maximize the value to the company of the original 1000 patients, we would have been much better off. And while the 1500-patient target is a regulatory requirement, very little of the pressure to run the additional trials came from regulators; rather, it came from people looking out for the business side who were being faced with questions about the commercial profile that they couldn't answer.
As for the question of who's going to take the plunge into all-Bayes NDAs, it's probably going to be a small company, not because of fewer entrenched frequentists but because of one fully qualified Bayesian who's in charge. It's probably also most likely in an indication where the standards of care are constantly changing. If the underlying probabilities of response and failure change in the midst of a frequentist Phase 3 trial, it could be disastrous. It may be salvageable with an amendment, but if a new therapy becomes available that looks as good as the active group in an ongoing trial, you may lose all your patients. Perhaps a Bayesian approach could account for this more successfully.
Finally, I don't think that frequentist hostility towards Bayesian methods is a fair description of the situation; a more accurate description might be apathy. In my experience, actual hostility, when expressed, comes from the Bayesians, though it's entirely analogous to the sentiments Mac users express towards PCs. It's not real hostility, more like high-energy exasperation.


6. John Johnson on January 20, 2006 10:34 AM writes...

"I actually didn't know about group sequential design - it seems as if all these recalculate-on-the-fly designs are computationally intensive, aren't they?"

Why, yes, they are. I have a name for this: "job security." They have a pretty good chance of being less expensive than a standard design (assuming they are feasible; they aren't feasible for all trials, especially trials with a long follow-up time), but if you don't hit your endpoint in the first few interim analyses, the additional cost of conducting those analyses will make the trial more expensive. When feasible, they make very good Phase 2 designs, especially if you can set things up so that you run only one Phase 2 trial.

Prediction: I think an "all-Bayesian NDA" will happen in the next 20 years or so, and I think it will stir up some controversy. It will be for one of the special-case NDAs, like an orphan drug, where the 1500-patient and two-well-controlled-trials requirements are relaxed and won't be met by a long shot.


7. John Thacker on January 20, 2006 1:51 PM writes...

"One cannot replicate a Bayesian database, since by definition the prior and posterior will have changed with each additional study."

Replicate, no. But with enough trials, the posterior converges to the same distribution regardless of any (sane) prior chosen, so the selection of the prior is not as important as the actual data. As for "selection of the posterior," I'm not sure I follow; the posterior is a product of the prior and the data.
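
For a concrete illustration of the prior washing out, here's a minimal Beta-Binomial sketch (Python, with made-up counts rather than data from any actual trial):

    # Conjugate updating: posterior = Beta(a + successes, b + failures).
    # With enough data, very different priors give nearly identical posteriors.
    priors = {"skeptical Beta(1, 9)": (1, 9),
              "flat Beta(1, 1)": (1, 1),
              "hopeful Beta(9, 1)": (9, 1)}
    successes, failures = 800, 200  # hypothetical pooled counts

    for name, (a, b) in priors.items():
        a_post, b_post = a + successes, b + failures
        print(f"{name}: posterior mean = {a_post / (a_post + b_post):.3f}")
    # All three posterior means land near 0.80 despite the very different priors.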

One could just as easily level criticisms at the experimenter's arbitrary choice of significance level, whether 5% or 1%, and arbitrary choice of null hypothesis. For me, even more damning (and relevant to experimenters), the frequentists reject optional stopping and the likelihood principle. The fundamental problem is that the design of the experiment ends up mattering as much as, or more than, the data obtained.

Consider the following thought experiment (borrowed from Wikipedia for the useful numbers):

Researcher Alice conducts 12 experiments; 9 are successes and 3 are failures. Then she drops dead, having recorded these results but not the experiment's design.

Researcher Bob wants to test the null hypothesis H_0 that success and failure are equally likely against the alternative that p > 0.5, where p is the probability of success. Given 12 experiments, the probability that 9 or more successes would be obtained if H_0 were true is 299/4096, or roughly 7.3%. Therefore, Bob does not reject the null hypothesis at the 5% level.

Researcher Chuck, however, believes that the experiment was set up differently. Perhaps Alice ran the experiment until she obtained 3 failures and then stopped. The probability of needing to run at least 12 experiments for this to happen, given H_0, is 134/4096, or about 3.27%. Therefore, Chuck would reject the null hypothesis at the 5% level.
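
For anyone who wants to check the arithmetic, here's a minimal Python sketch that reproduces both tail probabilities under the two readings of the design:

    from math import comb
    from fractions import Fraction

    # Bob's reading: fixed design of n = 12 trials, 9 successes observed.
    # P(9 or more successes | H_0: p = 0.5) = sum of C(12, k) for k = 9..12, over 2^12.
    p_fixed_n = Fraction(sum(comb(12, k) for k in range(9, 13)), 2**12)

    # Chuck's reading: run until the 3rd failure, which took 12 trials.
    # P(at least 12 trials needed | H_0) = P(at most 2 failures in the first 11 trials)
    #                                    = sum of C(11, f) for f = 0..2, over 2^11.
    p_stop_at_3_failures = Fraction(sum(comb(11, f) for f in range(3)), 2**11)

    print(p_fixed_n, float(p_fixed_n))                        # 299/4096, about 0.073
    print(p_stop_at_3_failures, float(p_stop_at_3_failures))  # 67/2048 (= 134/4096), about 0.0327

Same nine successes and three failures, two different stopping rules, two different conclusions at the 5% level.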

I have a hard time believing that our interpretation of probability should depend on the original design of the experiment, rather than the data obtained. I think that's a problem at least as bad as depending on the prior.

It does make a difference. According to frequentism, you technically cannot make an interim analysis of the data. Also, the results can change if the experimenter is allowed to choose the stopping time. According to Bayesian methods, neither of these things is a problem.

And yes, I agree with John Johnson. Our alma mater of Duke has lots of Bayesians, but UNC, IIRC, has fewer. (I assume that's his alma mater, but I don't know.)


8. John Thacker on January 20, 2006 2:06 PM writes...

Group sequential design is still inherently subject to manipulation, according to the very tenets of frequentism.

I also have issues with the likelihood-ratio test, like most Bayesians. The use of supremums is rather unjustified, IMO, just as I have certain issues with MLEs. In addition, I think it's unjustified that the test depends on the probability of events more extreme than what was observed. And those probabilities depend on the design of the experiment.


9. Still Scared of Dinosaurs on January 20, 2006 3:04 PM writes...

John Thacker has finally shown me the wisdom of the 2-trial paradigm. If Poor Alice obtained the data as described and Poor Mary repeated the trial, we only have to look at the number of runs and the number of failures to determine the design.
If the results are the same, we have to consider the probability of a third researcher agreeing to do the experiment given the outcome of the first two.


10. John Johnson on January 21, 2006 7:12 PM writes...

"And yes, I agree with John Johnson. Our alma mater of Duke has lots of Bayesians, but UNC, IIRC, has fewer. (I assume that's his alma mater, but I don't know.)"

Yes, UNCCH is my alma mater. I do want to note that some members of the faculty were warm to Bayesian ideas, or at least taught them after frequentist methods. Others were hostile. Finally, some of the early work on empirical Bayes was done at UNCCH.






