In the Pipeline


June 23, 2014

The Virtual Clinical Trial: Not Quite Around the Corner


Posted by Derek

Here's one of those "Drug Discovery of. . .the. . .Future-ure-ure-ure" articles in the popular press. (I need a reverb chamber to make that work properly). At The Atlantic, they're talking with "medical futurists" and coming up with this:

The idea is to combine big data and computer simulations—the kind an engineer might use to make a virtual prototype of a new kind of airplane—to figure out not just what's wrong with you but to predict which course of treatment is best for you. That's the focus of Dassault Systèmes, a French software company that's using broad datasets to create cell-level simulations for all different kinds of patients. In other words, by modeling what has happened to patients like you in previous cases, doctors can better understand what might happen if they try certain treatments for you—taking into consideration your age, your weight, your gender, your blood type, your race, your symptoms, any number of other biomarkers. And we're talking about a level of precision that goes way beyond textbooks and case studies.
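
Strip away the futurist gloss, and "modeling what has happened to patients like you" is, at its simplest, a similarity search over historical patient records. Here's a deliberately toy sketch of that logic (the data, the features, and the nearest-neighbor method are all invented for illustration; I make no claim that this is what Dassault's software actually does):

    import numpy as np

    # Hypothetical historical records: [age, weight_kg, biomarker_1, biomarker_2],
    # plus whether each past patient responded to a given therapy (1 = yes, 0 = no).
    history = np.array([
        [54, 80.0, 1.2, 0.3],
        [61, 72.5, 2.8, 0.9],
        [47, 95.0, 1.1, 0.2],
        [58, 68.0, 3.1, 1.1],
    ])
    responded = np.array([1, 0, 1, 0])

    def predict_response(new_patient, k=3):
        """Estimate response probability from the k most similar past patients."""
        # Standardize each feature so no single unit dominates the distance.
        mu, sigma = history.mean(axis=0), history.std(axis=0)
        scaled = (history - mu) / sigma
        query = (np.asarray(new_patient, dtype=float) - mu) / sigma
        # Euclidean distance from the new patient to every historical one.
        dists = np.linalg.norm(scaled - query, axis=1)
        nearest = np.argsort(dists)[:k]
        # Fraction of the nearest neighbors who responded.
        return responded[nearest].mean()

    print(predict_response([55, 78.0, 1.4, 0.4]))  # 0.67 here: 2 of 3 neighbors responded

Everything hard about the real problem (which features matter, how much data you need, what "similar" even means biochemically) is hiding inside those four made-up columns.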

I'm very much of two minds about this sort of thing. On the one hand, the people at Dassault are not fools. They see an opportunity here, and they think that they might have a realistic chance at selling something useful. And it's absolutely true that this is, broadly, the direction in which medicine is heading. As we learn more about biomarkers and individual biochemistry, we will indeed be trying to zero in on single-patient variations.

But on that ever-present other hand, I don't think that you want to make anyone think that this is just around the corner, because it's not. It's wildly difficult to do this sort of thing, as many have discovered at great expense, and our level of ignorance about human biochemistry is a constant problem. And while tailoring individual patients' therapies with known drugs is hard enough, it gets really tricky when you talk about evaluating new drugs in the first place:

Charlès and his colleagues believe that a shift to virtual clinical trials—that is, testing new medicines and devices using computer models before or instead of trials in human patients—could make new treatments available more quickly and cheaply. "A new drug, a successful drug, takes 10 to 12 years to develop and over $1 billion in expenses," said Max Carnecchia, president of the software company Accelrys, which Dassault Systèmes recently acquired. "But when it is approved by FDA or other government bodies, typically less than 50 percent of patients respond to that therapy or drug." No treatment is one-size-fits-all, so why spend all that money on a single approach?

Carnecchia calls the shift toward algorithmic clinical trials a "revolution in drug discovery" that will allow for many quick and low-cost simulations based on an endless number of individual cellular models. "Those models start to inform and direct and focus the kinds of clinical trials that have historically been the basis for drug discovery," Carnecchia told me. "There's the benefit to drug companies from reduction of cost, but more importantly being able to get these therapies out into the market—whether that's saving lives or just improving human health—in such a way where you start to know ahead of time whether that patient will actually respond to that therapy."

Speed the day. The cost of clinical trials, coupled with their low success rate, is eating us alive in this business (and it's getting worse every year). This is just the sort of thing that could rescue us from the walls that are closing in more tightly all the time. But this talk of shifts and revolutions makes it sound as if this sort of thing is happening right now, which it isn't. No such simulated clinical trial, one that could serve as the basis for a drug approval, is anywhere near even being proposed. How long before one is, then? If things go really swimmingly, I'd say 20 to 25 years from now, personally, but I'd be glad to hear other estimates.

To be fair, the article does go on to mention something like this, but it just says that "it may be a while" before said revolution happens. And you get the impression that what's most needed is some sort of "cultural shift in medicine toward openness and resource sharing". I don't know. . .I find that when people call for big cultural shifts, they're sometimes diverting attention (even their own attention) from the harder parts of the problem under discussion. Gosh, we'd have this going in no time if people would just open up and change their old-fashioned ways! But in this case, I still don't see that as the rate-limiting step at all. Pouring on the openness and sharing probably wouldn't hurt a bit in the quest for understanding human drug responses and individual toxicology, but it's not going to suddenly open up any blocked-up floodgates, either. We don't know enough. Pooling our current ignorance can only take us so far.

Remember there are hundreds of billions of dollars waiting to be picked up off the ground by anyone who can do these things. It's not like there are no incentives to find ways to make clinical trials faster and cheaper. Anything that gives the impression that there's this one factor (lack of cooperation, too much regulation, Evil Pharma Executives, what have you) holding us back from the new era, well. . .that just might be an oversimplified view of the situation.

Comments (15) + TrackBacks (0) | Category: Clinical Trials | In Silico | Regulatory Affairs | Toxicology


COMMENTS

1. Dr Mark on June 23, 2014 8:47 AM writes...

Did not ceritinib short-cut the process? Preclinical molecular structure studies suggested the molecule, a phase I trial confirmed the molecular prediction, and FDA approval came in record time.
I may be wearing rose-colored glasses, but maybe a new day is dawning. Up to now, much oncology drug development was empirical; as our understanding improves, we might be able to find actionable pathways before just giving some stuff to people to see what happens.


2. Frank Adrian on June 23, 2014 9:10 AM writes...

To be pedantic, you'd probably want a delay effect plus a reverb to get both the echo effect and the sense of spaciousness in the future...ure...ure, but your main point stands... It would be great if this were here, but it's just not here yet.


3. pgwu on June 23, 2014 9:19 AM writes...

At this stage, they probably can simulate the binding equivalent of ΔH but not ΔS.


4. Anonymous on June 23, 2014 9:24 AM writes...

@1 What does 'preclinical molecular structure studies' even mean? So far as I'm aware, there is no such thing as a program that can predict FDA approval.


5. old hand on June 23, 2014 9:26 AM writes...

When they get around to predicting the response in animals preclinically, then I will start to believe there is hope in the clinic. A related issue with all predicted properties/outcomes is that you are often told something should not work, and so you don't pursue it, even when the data supporting that prediction are not solid. There will have to be a lot of validation work done before I am convinced.


6. boo on June 23, 2014 10:08 AM writes...

Behind this, you will find the work of public relations/investor relations professionals trying to convince people of the wisdom of Dassault's acquisition of Accelrys.

You can be sure that the scientific and technical staff at Accelrys are shaking their heads, slowly and sadly.

Nothing to see here, everyone; move along.


7. Anon electrochemist on June 23, 2014 10:16 AM writes...

I see Dassault's approach as two-pronged:

1. Get a handle on variability for trials. There aren't a lot of off-the-shelf tools for trial design and analysis. They already changed the world when they brought the CAD capabilities of Boeing into the reach of every undergrad engineer.

2. A continuously variable model of disease progression. There's no need for "Stage X of Disease Y" when you can pool diagnostic criteria and create automated treatment recommendations (a rough sketch of the idea follows below). This data already exists; the question is whether or not it saves time/money. For chronic problems where you can easily get lots of data points (kidney/liver/heart failure) and there are a variety of therapies, I could see it being useful. Reduction of malpractice claims by a few percent would be more than worth it.
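
To make (2) concrete, here's a toy Python version of what I mean. Every weight, scale factor, and threshold below is made up; the point is just that the score is continuous rather than binned into stages:

    # Back-of-the-envelope continuous progression score for chronic kidney
    # disease, pooling a few diagnostic criteria instead of discrete stages.
    def progression_score(egfr, acr, systolic_bp):
        """Pool lab values into one continuous 0-1 severity score."""
        clamp = lambda x: max(0.0, min(1.0, x))
        egfr_term = clamp((90.0 - egfr) / 75.0)        # eGFR 90 -> 0, 15 -> 1
        acr_term = clamp(acr / 300.0)                  # albumin/creatinine ratio
        bp_term = clamp((systolic_bp - 120.0) / 60.0)  # systolic blood pressure
        return 0.5 * egfr_term + 0.3 * acr_term + 0.2 * bp_term

    def recommend(score):
        """Threshold the continuous score into an automated suggestion."""
        if score < 0.25:
            return "monitor annually"
        if score < 0.6:
            return "treat and recheck in 3 months"
        return "refer to nephrology"

    s = progression_score(egfr=48, acr=120, systolic_bp=150)
    print(round(s, 2), recommend(s))  # 0.5 -> "treat and recheck in 3 months"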


8. Anonymous on June 23, 2014 11:08 AM writes...

One can simulate glycolysis and the Krebs cycle. Everything else (at the pathway/cell/tissue/whatever level) lacks data and/or understanding. No in-silico model to date is able to describe or predict complex endpoints like dose, efficacy, or safety. Having said that, such models may already help and support decision making, but in most cases it is more an art than science. This is big-time PR.
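
For anyone who has not seen one: such a simulation is just numerical integration of rate laws. A single enzyme step with made-up parameters looks like the toy Python below; a real glycolysis model chains dozens of these together, and the parameters are exactly where the data problems start.

    # Toy pathway simulation: one Michaelis-Menten enzyme step,
    # integrated with a plain Euler loop. All parameters are invented.
    vmax, km = 1.0, 0.5   # enzyme parameters (mM/s, mM)
    s, p = 5.0, 0.0       # substrate and product concentrations (mM)
    dt = 0.01             # time step (s)

    for _ in range(2000):             # simulate 20 seconds
        rate = vmax * s / (km + s)    # Michaelis-Menten rate law
        s -= rate * dt                # substrate consumed...
        p += rate * dt                # ...product formed

    print(f"after 20 s: substrate {s:.3f} mM, product {p:.3f} mM")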


9. DrSnowboard on June 23, 2014 11:13 AM writes...

Errr... If you take the computer algorithm out of this, isn't it just what Chinese medicine and Ayurvedic medicine have been doing for millennia? Except their mode of delivery was exclusively oral or topical...


10. Bernard Munos on June 23, 2014 11:33 AM writes...

I think the approach taken by Dassault will see its first successful application in helping to understand the biology of single-cell pathogens, and in suggesting high-value targets that can be used in the quest for novel antibiotics. Knowledge gaps are still a major problem. You can't model, much less predict, the behavior of a biological system as long as a major part of its molecular machinery is poorly understood. Perhaps this is where open-source can do some good, in helping scientists band together to eliminate the knowledge gaps and get us in a position to do serious modeling. We're not quite there yet, and even further from modeling multicellular systems, let alone modeling individual patient response.

But I would not dismiss the approach as pie-in-the-sky. It is a better way to do research. It's another way to say that biology is not a predictive science today, but when we know most everything, and can make it predictive, things like Moore's Law will become more relevant to drug R&D, and innovation on demand and the end of the elusive blockbuster will be in sight. Not around the corner, but this is where public research funding perhaps should take us.


11. Anonymous on June 23, 2014 1:50 PM writes...

I think of it as a parallel to self-diagnosis with WebMD or the like.

It certainly can work, and plenty of doctors admit to using WebMD (hopefully just as an initial step, the way Wikipedia is to research), but whether it can be good enough, using just the non-judgment metrics/biomarkers, to be treated as an actual doctor's prescription for getting medical support/drugs...

And of course there's the question of how to deal with its false positives and false negatives.


12. Morten G on June 24, 2014 5:38 AM writes...

The Protein Data Bank is a pretty good example of how obligatory open data deposition can enable a lot more research.
This, though... it sounds like PR-speak, so I have no idea if they know what they are doing. If the PR-speak were accurate, then they should easily be able to model something like Craig Venter's minimal synthetic bacterium with only 400 protein genes. And I haven't heard that that has been achieved.


13. OldLabRat on June 24, 2014 8:06 AM writes...

As someone who works on the front lines of using modeling in drug discovery, I can confidently predict this is way off in the future. I agree with Bernard M. that antibiotics will likely be the first productive area. The only problem is that the rate of resistance development is unknown.

As an example of how complex this will be, think about predicting nuclear hormone receptor functional activity from ligand binding affinity to the receptor. Regardless of the modeling approach used (computational, human, or hybrid), I've yet to see a prospective prediction of functional activity based on binding affinity. I'm happy to acknowledge examples if I'm wrong. I'd love to have enough data to do such modeling; at this point it doesn't exist. Thanks for reading a long post.


14. MIMD on June 26, 2014 3:30 PM writes...

The Syndrome of Inappropriate Overconfidence in Computing knows no bounds.


15. Anon on June 26, 2014 4:42 PM writes...

@MIMD
"The Syndrome of Inappropriate Overconfidence in Computing knows no bounds."

Well said.


Bioinformatics hasn't paid off, and Google can't even deliver on a basic promise they were very confident they could keep (read Derek's writeup "Google's Big Data Flu Flop")... yet people are still out there throwing this around like it's a sure bet.

