About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly: derekb.lowe@gmail.com. Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs:
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business:
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events:
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres:
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

June 12, 2012

Predicting Toxicology

Posted by Derek

One of the major worries during a clinical trial is toxicity, naturally. There are thousands of reasons a compound might cause problems, and you can be sure that we don't have a good handle on most of them. We screen for what we know about (such as hERG channels for cardiovascular trouble), and we watch closely for signs of everything else. But when slow-building, low-incidence toxicity takes your compound out late in the clinic, it's always very painful indeed.

Anything that helps to clarify that part of the business is big news, and potentially worth a lot. But advances in clinical toxicology come very slowly, because the only thing worse than not knowing what you'll find is thinking that you know and being wrong. A new paper in Nature highlights just this problem. The authors use a structural-similarity algorithm to test new compounds against the known toxicities of previously tested compounds, which (as you can imagine) is an approach that's been tried in many different forms over the years. So how does this one fare?

To test their computational approach, Lounkine et al. used it to estimate the binding affinities of a comprehensive set of 656 approved drugs for 73 biological targets. They identified 1,644 possible drug–target interactions, of which 403 were already recorded in ChEMBL, a publicly available database of biologically active compounds. However, because the authors had used this database as a training set for their model, these predictions were not really indicative of the model's effectiveness, and so were not considered further.

A further 348 of the remaining 1,241 predictions were found in other databases (which the authors hadn't used as training sets), leaving 893 predictions, 694 of which were then tested experimentally. The authors found that 151 of these predicted drug–target interactions were genuine. So, of the 1,241 predictions not in ChEMBL, 499 were true; 543 were false; and 199 remain to be tested. Many of the newly discovered drug–target interactions would not have been predicted using conventional computational methods that calculate the strength of drug–target binding interactions based on the structures of the ligand and of the target's binding site.
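
Worked through, the bookkeeping in that passage comes out as follows. This is just a sanity-check script; it uses nothing beyond the figures quoted above:

    # Bookkeeping for the prediction counts quoted above.
    total_predictions = 1644
    in_chembl = 403                        # training-set overlap, set aside
    novel = total_predictions - in_chembl  # 1,241 predictions to evaluate
    in_other_dbs = 348                     # confirmed via other databases
    tested = 694                           # checked experimentally
    confirmed_in_lab = 151                 # of those tested, genuine

    true_total = in_other_dbs + confirmed_in_lab  # 499
    false_total = tested - confirmed_in_lab       # 543
    untested = novel - in_other_dbs - tested      # 199
    assert true_total + false_total + untested == novel

    resolved = true_total + false_total           # 1,042 with a known outcome
    print(f"lab hit rate: {confirmed_in_lab / tested:.0%}")          # ~22%
    print(f"true rate among resolved: {true_total / resolved:.0%}")  # ~48%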

Now, some of their predictions have turned out to be surprising and accurate. Their technique identified chlorotrianisene, for example, as a COX-1 inhibitor, and tests show that it seems to be one, an activity that wasn't known at all. The classic antihistamine diphenhydramine turns out to be active at the serotonin transporter. It's also interesting to see which known drugs light up the side-effect assays the worst. Looking at their figures, it would seem that the topical antiseptic chlorhexidine (a membrane disruptor) is active all over the place. Another guanidine-containing compound, tegaserod, is also high up the list. Other promiscuous compounds are the old antipsychotic fluspirilene and the semisynthetic antibiotic rifaximin. (That last one illustrates one of the problems with this approach, which the authors take care to point out: toxicity depends on exposure. The dose makes the poison, and all that. Rifaximin is very poorly absorbed, and it would take very unusual dosing, like with a power drill, to get it to hit targets in a place like the central nervous system, even if this technique flags them.)
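
For a feel for what similarity-based flagging involves, here's a minimal sketch of the general idea in Python with RDKit. To be clear, this is not the authors' actual method (their SEA approach layers ensemble statistics on top of raw similarity), and the reference compounds and target annotations below are hypothetical:

    # A minimal sketch of similarity-based target flagging: compare a
    # query compound's fingerprint against compounds with known
    # activities, and flag the targets hit by its close neighbors.
    # Reference set and annotations are made up for illustration.
    from rdkit import Chem
    from rdkit.Chem import AllChem, DataStructs

    reference = [  # (SMILES, known off-target), both hypothetical
        ("CCN(CC)CCOC(=O)c1ccccc1N", "hERG"),
        ("CN1CCC(CC1)Oc1ccc(Cl)cc1", "SERT"),
    ]

    def fingerprint(smiles):
        return AllChem.GetMorganFingerprintAsBitVect(
            Chem.MolFromSmiles(smiles), 2, nBits=2048)

    def flag_targets(query_smiles, threshold=0.4):
        """Return targets of reference compounds whose Tanimoto
        similarity to the query exceeds the threshold."""
        query_fp = fingerprint(query_smiles)
        return {target for smiles, target in reference
                if DataStructs.TanimotoSimilarity(
                    query_fp, fingerprint(smiles)) > threshold}

    print(flag_targets("CCN(CC)CCOC(=O)c1ccccc1NC"))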

The biggest problem with this whole approach is also highlighted by the authors, to their credit. You can see from those figures above that about half of the potentially toxic interactions it finds aren't real, and you can be sure that there are plenty of false negatives, too. So this is nowhere near ready to replace real-world testing; nothing is. But where it could be useful is in pointing out things to test with real-world assays, activities that you probably hadn't considered at all.

But the downside of that is that you could end up chasing meaningless stuff that does nothing but put the fear into you and delay your compound's development. That split, "stupid delay versus crucial red flag", is at the heart of clinical toxicology, and is the reason it's so hard to make solid progress in this area. So much is riding on these decisions: you could walk away from a compound that would have gone on to clear billions of dollars and help untold numbers of patients. Or you could green-light something that goes on to chew up hundreds of millions of dollars of development costs (and even more in opportunity costs, considering what you could have been working on instead), or, even worse, that makes it onto the market and has to be withdrawn in a blizzard of lawsuits. It brings on a cautious attitude.

Comments (21) + TrackBacks (0) | Category: Drug Development | In Silico | Toxicology


COMMENTS

1. Curious Wavefunction on June 12, 2012 8:39 AM writes...

"But the downside of that is that you could end up chasing meaningless stuff that does nothing but put the fear into you and delays your compound's development, too."

Exactly. Hopefully the FDA won't make this their weapon of choice, but I can see them potentially creating all kinds of trouble (some justified and some not so much) for drug makers by asking whether they tested their drug against this or that target.

Permalink to Comment

2. Rick Wobbe on June 12, 2012 9:07 AM writes...

It's one thing to interpolate between points in well-studied chemical space, where you can beat random guessing with practice. But predicting behavior outside that space will tell you more about the limitations of your model than the properties of the compound. That's the Turing test of these models. Thus far, the models seem more like economic theories, which, to quote the late, great Paul Samuelson, "predicted 9 of the last 5 recessions". Brings back fond memories of the days when you could amaze and bemuse people by using dice to "predict" the PK or tox properties of compounds as well as many in silico models. If, as Curious Wavefunction worries, the FDA decided to make this a weapon of choice, I'm going to dust off my magic dice!

Permalink to Comment

3. OldLabRat on June 12, 2012 9:39 AM writes...

The EPA announced this week that it will start doing the necessary research to implement computational toxicology. The press release says all the right things, but I'm not optimistic that the science will win. I'm sure the FDA will be watching closely.

Permalink to Comment

4. watcher on June 12, 2012 9:50 AM writes...

Another company that has spent a lot of time on this topic has a similar predictive set of activities. When compounds look good, they go ahead: careful green light.

When compounds look questionable, they typically go ahead anyway, as predictions on test sets are not reliable enough to give teams the confidence for definitive decision-making after so much time and resource investment, particularly for new compound classes not included in any training sets: cautious green light.

And so, little changes after big effort, time, and expenditure to take on predictive tox.

Permalink to Comment

5. anon on June 12, 2012 10:13 AM writes...

I thought it was already known that diphenhydramine has some activity at the serotonin transporter. This from wiki "In the 1960s, diphenhydramine was found to inhibit reuptake of the neurotransmitter serotonin.[30] This discovery led to a search for viable antidepressants with similar structures and fewer side-effects, culminating in the invention of fluoxetine (Prozac), a selective serotonin reuptake inhibitor (SSRI).[30][31] A similar search had previously led to the synthesis of the first SSRI, zimelidine, from brompheniramine, also an antihistamine."

Permalink to Comment

6. Pete on June 12, 2012 10:13 AM writes...

One needs to be wary of claims of polypharmacology based on IC50 < 30 μM. One frequently-cited article even uses >30% inhibition at 10 μM. Not sure why they don't use something a bit closer to likely physiological free levels. Tyrosine kinase inhibition assays tend to be run at different ATP concentrations, but ATP-competitive inhibitors all need to deal with the same intracellular ATP concentration.
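
To put rough numbers on the free-level point, here's an illustration with made-up values (not data from the paper):

    # Compare an "active" cutoff from a profiling panel to a plausible
    # free (unbound) drug concentration. All values are hypothetical.
    cutoff_uM = 30.0          # activity threshold used in some panels
    total_cmax_uM = 2.0       # hypothetical total plasma Cmax
    fraction_unbound = 0.05   # hypothetical 95% plasma protein binding

    free_cmax_uM = total_cmax_uM * fraction_unbound  # 0.1 uM free drug
    print(f"free Cmax ~{free_cmax_uM} uM vs. cutoff {cutoff_uM} uM "
          f"({cutoff_uM / free_cmax_uM:.0f}x above free exposure)")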

Permalink to Comment

8. weirdo on June 12, 2012 10:26 AM writes...

Given the number of papers on this very topic over the last few years, I'm very surprised to see this in Nature.

I also question the premise a bit -- the authors' own introductory paragraph points out that metabolites are often the problem. We do a lot of cross-reactivity testing on the API in the early stages of lead optimization; usually much less so on metabolites. A far more interesting paper would have dealt with in silico work on known metabolites, and tracking those down.

Permalink to Comment

11. MolecularGeek on June 12, 2012 10:33 AM writes...

The horse has already left the barn on the use of predictive models for regulatory activities. REACH in the EU is predicated on using QSAR and related technologies to triage industrial compounds for laboratory risk assessment. They have, however, also promulgated a set of best practices in model development that includes guidance on issues like interpolation vs. extrapolation and applicability domains.

Permalink to Comment

12. FDA lurker on June 12, 2012 10:57 AM writes...

@ #1:
As an FDA employee, I personally believe that industry kills too many potential drugs too quickly based on early preclinical results (even more potential drugs are probably killed before they get through the door), often before human ADME is investigated. There are lots of reasons for this, and most are not scientific, IMO.

Permalink to Comment

13. partial agonist on June 12, 2012 12:46 PM writes...

Predictions based on two-dimensional structural similarity are always going to have huge error bars, given all of the examples where two enantiomers or two diastereomers have vastly different toxicity profiles.

This is a favorite area where the PETA people love to say that animal testing is not needed. Unfortunately, the animal toxicity (and then the human toxicity) is not something a computer is going to tell you much about, with high confidence. That is unless we get one of those computers from Star Trek or from the cheesy Batman TV show.

Permalink to Comment

14. barry on June 12, 2012 12:48 PM writes...

" That split, 'stupid delay versus crucial red flag', is at the heart of clinical toxicology"

That split of course is at the heart of the current $billion price for a new drug. Everyone wants a tool that will kill a loser earlier but no one wants his project killed. So we build and fund the new tools and the new departments, but we don't trust them enough to kill the projects early.

Permalink to Comment

15. DCRogers on June 12, 2012 1:42 PM writes...

It's beside the point if the predictions are good: let's even say they're perfect. How would you use them? From the paper:

"[T]he 656 drugs considered here each modulated an average of seven safety targets, sometimes across several classes, and more than 10% of the drugs acted on nearly half (45%) of the 73 targets."

So these aren't even close to the filters commonly used to detect reactive substructures, where even one hit is a killer. Here, the "red flags" are so common as to produce boy-who-cried-wolf numbness.
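
Back-of-the-envelope, using only the figures quoted above (and treating targets as independent, purely for illustration):

    # Base rates behind the alarm fatigue: 656 drugs, 73 safety
    # targets, an average of 7 targets modulated per approved drug.
    targets, avg_hits = 73, 7

    flag_rate = avg_hits / targets  # chance a random (drug, target) pair is flagged
    print(f"baseline flag rate: {flag_rate:.0%}")  # ~10%

    # Assuming independence across targets (my simplification, not the
    # paper's), screening a drug against even ten of them will usually
    # raise at least one flag:
    n_screened = 10
    p_any_flag = 1 - (1 - flag_rate) ** n_screened
    print(f"P(at least one flag in {n_screened} targets): {p_any_flag:.0%}")  # ~64%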

Permalink to Comment

16. dvizard on June 12, 2012 5:21 PM writes...

"REACH in the EU is predicated on using QSAR and related technologies to triage industrial compounds for laboratory risk assessment."

But ecotox is a whole different story from pharmacological toxicology in terms of regulatory impact. Damaged ecosystems don't sue you; patients might.

Permalink to Comment

17. Morten G on June 13, 2012 7:30 AM writes...

It is a very good paper, though, and the chlorotrianisene story is quite compelling. Recommended read.
From here, of course, they need to show whether they could retrospectively separate compounds that failed clinical trials because of adverse events from those that didn't.

Two bits of this paper that Derek didn't mention are especially interesting to medicinal chemists:
1. In fig 3, 31% of the tested _approved_ drugs ding hERG in vitro. So ask yourself how many adverse events have been put down to hERG inhibition simply because it is a very promiscuous target, and how many compounds were unjustly killed because they dinged the hERG assay.
2. SEA could be used to jump scaffolds. Take a library of a couple of million compounds that you like (in theory at least - you probably haven't met them yet) and mash it against the current binders of your target. Buy/synthesize the hits and take them to the biologists.
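
A minimal sketch of that workflow, using plain nearest-neighbor Tanimoto ranking rather than SEA's ensemble statistics (RDKit assumed; all SMILES are hypothetical placeholders):

    # Rank a purchasable library by fingerprint similarity to the known
    # binders of a target, then buy/synthesize the top-scoring hits.
    from rdkit import Chem
    from rdkit.Chem import AllChem, DataStructs

    known_binders = ["c1ccc2[nH]ccc2c1", "CCOc1ccccc1O"]        # actives (hypothetical)
    library = ["c1ccc2occc2c1", "CCNc1ccccc1", "CCOc1ccccc1N"]  # catalog (hypothetical)

    def fp(smiles):
        return AllChem.GetMorganFingerprintAsBitVect(
            Chem.MolFromSmiles(smiles), 2, nBits=2048)

    binder_fps = [fp(s) for s in known_binders]

    # Score each library compound by its best similarity to any binder.
    scored = sorted(
        ((max(DataStructs.TanimotoSimilarity(fp(s), b) for b in binder_fps), s)
         for s in library),
        reverse=True)
    for score, smiles in scored:
        print(f"{score:.2f}  {smiles}")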

Permalink to Comment

18. Anonymous on June 14, 2012 2:41 PM writes...

They already published a similar paper in Nature:

http://www.bkslab.org/publications/keiser_2009.pdf

When did Nature papers start coming so easily? Quite surprising.

Permalink to Comment

19. Anonymous on June 17, 2012 10:54 AM writes...

Most of the newly identified targets are only new to those who don't have access to a database like BioPrint. I checked the results in the Nature 2009 paper against the BioPrint database and could confirm that what they found was correct; however, the compounds investigated were also active on several other targets in the database that were apparently not picked up by their method. Still an interesting paper, though.

Permalink to Comment

21. Jonadab on June 19, 2012 5:03 PM writes...

> diphenhydramine turns out to be active at the serotonin transporter

That might go a long way toward explaining what happens when somebody takes an entire box of Benadryl. Isn't serotonin a neurotransmitter? [Checks.] Why yes, yes it is. That could definitely be relevant.

Permalink to Comment

RELATED ENTRIES
The Worst Seminar
Conference in Basel
Messed-Up Clinical Studies: A First-Hand Report
Pharma and Ebola
Lilly Steps In for AstraZeneca's Secretase Inhibitor
Update on Alnylam (And the Direction of Things to Come)
There Must Have Been Multiple Chances to Catch This
Weirdly, Tramadol Is Not a Natural Product After All