
In the Pipeline


June 19, 2009

More Hot Air From Me on Screening


Posted by Derek

After yesterday's post on pathway patents, I figured that I should talk about high-throughput screening in academia. I realize that there are some serious endeavors going on, some of them staffed by ex-industry people. So I don't mean to come across as thinking that academic screening is useless, because it certainly isn't.

What it probably is useless for is enabling a hugely broad patent application like the one Ariad licensed. But the problem with screening for such cases isn't that the effort would come from academic researchers; industry couldn't do it, either. Merck, Pfizer, GSK and Novartis working together probably couldn't have sufficiently enabled that Ariad patent: it's a monster.

It's true that the compound collections available to all but the very largest academic efforts don't compare in size to what's out there in the drug companies. My point yesterday was that since we can screen those big collections and still come up empty against unusual new targets (again and again), smaller compound sets are probably at even more of a disadvantage. Chemical space is very, very large. The total number of tractable compounds ever made (so far) is still not a sufficiently large screening collection for some targets. That's been an unpleasant lesson to learn, but I think that it's the truth.

That said, I'm going to start sounding like the pointy-haired boss from Dilbert and say "Screen smarter, not harder". I think that fragment-based approaches are one example of this. Much smaller collections can yield real starting points if you look at the hits in terms of ligand efficiency and let them lead you into new chemical spaces. I think that this is a better use of time, in many cases, than the diversity-oriented synthesis approach, which (as I understand it) tries to fill in those new spaces first and screen second. I don't mind some of the DOS work, because some of it's interesting chemistry, and hey, new molecules are new molecules. But we could all make new molecules for the rest of our lives and still not color in much of the map. Screening collections should be made interesting and diverse, but you have to do a cost/benefit analysis of your approach to that.
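For a concrete look at the ligand-efficiency idea, here's a minimal sketch in Python; the fragment SMILES and potencies below are invented for illustration, and LE is approximated by the standard 1.37 x pIC50 per heavy atom (from dG = -2.303 RT log10(IC50) at ~298 K):

    # Rank screening hits by ligand efficiency rather than raw potency.
    # The weaker-but-smaller fragment can come out on top.
    import math
    from rdkit import Chem  # open-source cheminformatics toolkit

    hits = [                                # hypothetical hits: (SMILES, IC50 in M)
        ("c1ccc2[nH]ccc2c1", 250e-6),       # small indole fragment, 250 uM
        ("CC(=O)Nc1ccc(O)cc1", 80e-6),      # larger, more potent hit, 80 uM
    ]

    for smiles, ic50 in hits:
        mol = Chem.MolFromSmiles(smiles)
        n_heavy = mol.GetNumHeavyAtoms()
        pic50 = -math.log10(ic50)
        le = 1.37 * pic50 / n_heavy         # kcal/mol per heavy atom
        print(f"{smiles}: pIC50 {pic50:.2f}, {n_heavy} heavy atoms, LE {le:.2f}")

Run on those made-up numbers, the 250 uM fragment actually scores the better ligand efficiency (about 0.55 vs. 0.51), which is exactly the sort of reordering that makes fragments attractive starting points.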

I'm more than willing to be proven wrong about this, but I keep thinking that brute force is not going to be the answer to getting hits against the kinds of targets that we're having to think about these days: enzyme classes that haven't yielded anything yet, protein-protein interactions, protein-nucleic acid interactions, and other squirrelly stuff. If the modelers can help with these things, then great (although, as I understand it, they generally can have a rough time with the DNA and RNA targets). If the solution is to work up from fragments, cranking out the X-ray and NMR structural data as the molecules get larger, then that's fine, too. And if it means that chemists just need to turn around and generate fast targeted libraries around the few real hits that emerge, a more selective use of brute force, then I have no problem with that, either. We're going to need all the help we can get.

Comments (25) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Assays | Drug Development


COMMENTS

1. Curious Wavefunction on June 19, 2009 9:13 AM writes...

Scaffolds that are currently missing from screening libraries should be introduced into them. Similarity searching methods can shed light on such scaffolds.

http://ashutoshchemist.blogspot.com/2009/06/anti-question-or-when-bias-can-be-good.html
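As an illustration of that kind of scaffold-gap analysis, here's a minimal sketch using RDKit's Bemis-Murcko framework tools; all of the SMILES are placeholders:

    # List ring scaffolds found in a reference set but absent from a
    # screening library (Bemis-Murcko frameworks).
    from rdkit import Chem
    from rdkit.Chem.Scaffolds import MurckoScaffold

    def scaffold_set(smiles_list):
        out = set()
        for smi in smiles_list:
            mol = Chem.MolFromSmiles(smi)
            if mol is None:
                continue                     # skip unparseable entries
            core = MurckoScaffold.GetScaffoldForMol(mol)
            out.add(Chem.MolToSmiles(core))  # canonical scaffold SMILES
        return out

    library = ["CCOc1ccccc1", "Cc1ccncc1"]                # placeholder library
    reference = ["O=C1CCc2ccccc2N1", "c1ccc2[nH]ccc2c1"]  # placeholder reference set

    print("Scaffolds missing from the library:",
          scaffold_set(reference) - scaffold_set(library))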


2. HelicalZz on June 19, 2009 9:20 AM writes...

Just to throw in a curveball, the consideration of pathway analysis and elucidation via screening and identification of inhibitors is itself a bit of an industry-centric mindset. One of the huge advantages of RNAi (the technology, not as a therapeutic) continues to be pathway analysis and elucidation. Knocking down protein products has advantages over knock-out technologies, especially with targets essential to development.

So, pathway inhibition via chemical means is by no means a necessity for pathway elucidation or association with a disease indication. Don't assume that screening is necessary to serve the interests of academic research or necessarily to obtain and demonstrate utility for a patent application.

In other words, this is likely to be a bigger issue in the coming years.


3. big btech on June 19, 2009 10:37 AM writes...

The real trouble is that biotechs and big pharma are continuing to look for new drugs where they've already been. That is, in the well-defined (and retrospective) space defined by the stultifying BS of Lipinski's rules. As it turns out, continuing to look where you've been (that is, x # of heteroatoms, MW < 500, and so on) just keeps turning up the same kinds of compounds.

I don't buy the BS put forth a few years back about "10^63" different molecules, but until people start to THINK, rather than repeat, we'll be left buying follow-on products for GERD (didn't that used to be indigestion?) and making up diseases like "restless leg syndrome" and ADHD (this used to be cured by having your ass kicked in the school yard, not via meth).

There is a happy medium, I hope.


4. NewbieAlert on June 19, 2009 11:23 AM writes...

I thought that, historically, most of our drugs came from natural product leads. Why have we abandoned this technique? (I ask out of ignorance, not out of agenda.)


5. mad on June 19, 2009 11:27 AM writes...

Here is another curve ball.

Are we really screening what we think we are screening? What about all the problems with library storage and preserving the integrity of the compounds? DMSO storage turned out not to be the "file and forget" preservative it was treated as.

How many targets were missed due to screening partially degraded compounds?


6. Cellbio on June 19, 2009 12:03 PM writes...

I've seen the output of some academic screens, and what strikes me is that the technical pieces are in place (libraries, liquid handling, etc.), but the judgement is lacking. After the screen is done, some (most?) of the academics take the compounds that sit on top of a potency ranking as "success". So, when Derek says "we can screen and come up empty", that is because we apply reasonable judgement and have other data/insight about the compounds that allows us to trash the whole output, as opposed to filing patents and conducting biological research with mM concentrations of structurally complicated salts or detergents. Not so much the case in academia today.


7. Lucifer on June 19, 2009 12:20 PM writes...

Has this approach produced any path-breaking drugs? You know, the ones that have a therapeutic effect and are somewhat superior to the previously used drugs?

The Devil is in the details...

//It's true that the compound collections available to all but the very largest academic efforts don't compare in size to what's out there in the drug companies.//


8. JAB on June 19, 2009 12:24 PM writes...

@4. I resemble that remark! Nat prods have been dropped in pharma partly due to the cost and timelines of resupply once one had a purified NP in hand. I would submit that great strides have been made in biosynthesis and total synthesis in the last several years, and that the worth of NPs ought to be on the rise. Note that Novartis still has an active NP program, but that Wyeth's acquisition by Pfizer is likely to completely extinguish US pharma NP groups. I predict that boutique NP specialist companies could fill the gap if someone sees the opportunity.


9. Anonymous in NC on June 19, 2009 1:43 PM writes...

Comment 5 by mad begins to raise a major issue. Analyses of the sources of error in screening are few. The DMSO stability question is one; real quantitation (given solubility) is another; structure assignment and sample-picking errors round out the first-round choices. While contemporary assay development uses statistics like the Z factor, there are other substantial sources of error. Couple this with the low frequency of hits and you have a major failure mode of HTS. What % of the hits present in a library are actually found in HTS or qHTS? Are there implications for the perceived higher success of focused (directed) libraries? Given all the requirements a molecule must meet to grow up to be a drug, is relevant diversity space really that big?
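For reference, the Z (or Z') factor mentioned here is 1 - 3(sd_pos + sd_neg)/|mean_pos - mean_neg| (Zhang, Chung and Oldenburg, 1999); a quick sketch with simulated control wells:

    # Z'-factor for assay quality; values above ~0.5 are conventionally
    # taken to indicate a usable screening window.
    import numpy as np

    rng = np.random.default_rng(0)
    pos = rng.normal(100.0, 5.0, 32)   # simulated positive-control signal
    neg = rng.normal(10.0, 4.0, 32)    # simulated negative-control signal

    z_prime = 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())
    print(f"Z' = {z_prime:.2f}")

As the comment points out, though, a good Z' only says the assay window is clean; it says nothing about compound identity, solubility, or degradation, which are exactly the error sources being raised here.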


10. JB on June 19, 2009 1:54 PM writes...

I help run one of the large academic screening centers. We've thought about some of the issues people raise here: we do routine QC on every compound that enters screening, we're generating new scaffolds that are more natural-product-like, and we have medicinal chemists who decide when something is garbage and not worth spending time on.


11. Paul S on June 19, 2009 2:53 PM writes...

Back in graduate school, I had an idea for a screening method that probably couldn't have worked in the mid-1980s, but perhaps can today. It's based on pattern-recognition databases applied to NMR (both 1H and 13C) data for compounds of interest.

The method would essentially take a black-box approach to compound activity. That is, it wouldn't assume any understanding of why a given compound is active, except to assume that the reason has something to do with conformational or structural effects that can be detected in NMR. It would begin by building a dataspace of NMR data on compounds of known activity at a particular receptor, perhaps including both the compounds themselves and the compounds bound to the active site(s). Then you'd turn an algorithm loose on your database and compare it with similar data for compounds of unknown activity.

Pattern recognition has been applied to chemical data more and more often as time goes on - for example to link a mineral sample to a specific mineral deposit based on crystallographic and trace element analysis. I think it would be a fast method for initial screening of compounds of interest, at least screening out compounds of probable low activity prior to in vivo (or even in vitro) screening methods that are much more labor intensive.

Do you know of any work in this area?
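The scheme described in this comment maps fairly directly onto off-the-shelf machine-learning tools today. A purely illustrative sketch, with binned spectra and activity labels that are random stand-ins:

    # Black-box activity model on binned NMR data (comment 11's idea):
    # featurize each compound as a fixed-width vector of binned intensities
    # and train a classifier on compounds of known activity.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_compounds, n_bins = 200, 256               # e.g. a 0-12 ppm 1H window in 256 bins
    spectra = rng.random((n_compounds, n_bins))  # fake binned intensities
    active = rng.integers(0, 2, n_compounds)     # fake known-activity labels

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, spectra, active, cv=5).mean())

On random data this hovers around 0.5, as it should; any real predictive signal would have to come from real spectra of real actives, ideally including the bound-state data the comment suggests.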


12. Sili on June 19, 2009 2:57 PM writes...

Anything that calls for more crystallography is fine by me! Where do I sign up?

Is there really no 'screening' in place in compound libraries? I wouldn't have thought it would be that hard to do random sampling and check the NMR against the spectrum filed when the stuff was registered.

Of course, once a 'miss' is found, someone would have to look at what the problem is - I can see how that would be a bottleneck.


13. Ty on June 19, 2009 3:20 PM writes...

Academic efforts in the drug discovery area are understandably rudimentary at this point. But, after many lessons learned the hard way, it's improving, I guess, and it will continue to, thanks in part to the influx of many seasoned drug hunters who lost industry jobs in recent years. Having said that, from my observation, the real problem lies in the heads of the PIs. More often than not, their goal is not to make a breakthrough medicine but to publish and get a grant renewed. They are mostly ignorant of, and try to bypass, the toughest parts of drug discovery: target ID, PK and exposure, tox issues, etc.

The combichem frenzy generated a huge amount of trash which is still negatively impacting many screening efforts, esp. in academia, where the primary collection of small molecules tends to be cheap commercial libraries. I am afraid that the immature drug discovery drive in academia might generate quite a lot of 'expensive' trash in the years to come. Elesclomol, anyone?

Regarding natural products: really, how many drugs are derived from NPs in the sense of NPs substituting for a compound collection in hit finding? In this context we should not count the mimics of physiological ligands such as monoamines, peptoids, steroids, etc. Other than the infectious disease and cancer areas, where nature had to have generated a wealthy pool of bug-thwarting and cytotoxic agents, I don't really see too many drugs that were inspired by natural products. Inside the industry, yes, NPs are underestimated for many reasons, but outside of it their mystique is kinda overblown, I think.


14. cyclcc on June 19, 2009 4:01 PM writes...

Isn't the point not so much to completely cover chemical space, but rather to start accessing new chemical space, and in this way have a chance of accessing new biological space? You could argue that pharma has looked mostly at the same targets (GPCRs) with the same chemistry, and any time new biological targets have been investigated it's the same "old" chemistry that's used to access them. When this doesn't work, the target is called "undruggable". New chemistry will be needed for the targets of the future if pharma is to survive.


15. hibob on June 19, 2009 4:07 PM writes...

@13
"More often than not, their goal is not to make a breakthough medicne but to publish and get a grant renewed. They are mostly ignorant of and try to bypass the toughest part of drug discovery - like target ID, PK and exposure, tox issues, etc."

I think it's best when PIs don't try to make a breakthrough medicine - they're much better off trying to identify a pathway and finding compounds that work as a proof of principle rather than trying to think twelve steps ahead to a real drug. If they think they're on to something, by all means start a company or license it, but telling all their grad students they will be dropping their projects to become a pipeline (and no publishing of the results for a coupla years, sorry) wouldn't work very well. So yeah, they should work on finding the target and the big ugly tarballs that hit it, and publish.


16. Norepi on June 20, 2009 4:33 AM writes...

Of course, another problem everyone has in screening is false positives/false negatives, especially if everything isn't automated, and doubly so if the assay is cell-based. One of our cellular assays has two phases, an initial high-concentration pass and then more detailed screening if the compounds pass a certain "potency threshold". We get compounds all the time that look fantastic initially, and then maybe shape up to be 10 uM at best. What happened, did they decompose? And the variability: one compound with N carbons causes the cells to grow, and N+1 kills everything in sight... How many potential drugs have we chucked out on account of this sort of business?

Derek, I find it interesting that you mention the difficulties in modeling DNA/RNA. I'm not terrifically experienced, but as someone who has done this: yes, it is a pain, at least when you're working with intercalating compounds, because a) some programs, especially newer ones, just aren't parameterized correctly or completely for DNA, and b) I think programs have a hard time dealing with the higher quantum factors involved (pi-stacking, hyperconjugative effects), and trying to sort this out ab initio is just time-consuming. So the end result is usually lousy inhibitors docking the same as good ones; at best it's not predictive, and at worst it's downright wrong.


17. LeeH on June 20, 2009 9:37 AM writes...

A few comments:

Using the Lipinski rules is just a case of not repeating history. Chris Lipinski didn't really invent these rules - the human body did. The probability of having a successful drug is greatly reduced if you fall outside these ranges, because you have a high probability of violating some PK or physicochemical limitation. Conversely, being inside the ranges doesn't mean you have a drug.
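For concreteness, the rule-of-five check being discussed is only a few lines with standard cheminformatics tools; a minimal sketch with RDKit, using aspirin as an arbitrary test molecule:

    # Lipinski's rule of five as a simple filter:
    # MW <= 500, cLogP <= 5, H-bond donors <= 5, H-bond acceptors <= 10.
    from rdkit import Chem
    from rdkit.Chem import Descriptors, Lipinski

    def passes_ro5(smiles):
        mol = Chem.MolFromSmiles(smiles)
        return (Descriptors.MolWt(mol) <= 500
                and Descriptors.MolLogP(mol) <= 5
                and Lipinski.NumHDonors(mol) <= 5
                and Lipinski.NumHAcceptors(mol) <= 10)

    print(passes_ro5("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> True

The filter itself is the easy part; the point above stands that passing it is no guarantee of a drug.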

Concerning actually having what you think you have in a compound collection, perhaps I was just lucky, but where I used to work we were almost never blindsided by the identity of a compound after retesting HTS hits. It was almost always what we thought it was, with the exception of an instance where compounds from a particular vendor were uniformly incorrect. Of course, I can't vouch for the hundreds of hits that we didn't follow up on, but I suspect that it wasn't that we just did everything right, but that by and large these days most collections are fairly clean.

On the natural products issue, it's really a religious discussion rather than a technical one. On the one hand, NPs can give you very novel shapes, and have clearly been a major historical source of drugs. On the other hand, how many companies want to get into the trench warfare of fixing ADME/PK issues on a compound with multiple chiral centers and synthetically equivalent functional groups (DOS notwithstanding)?

Regarding the vastness of chemical space, yes, it's vast, but I think the bigger issue now is finding compounds that are specific rather than those that are active. We try to find specific kinase inhibitors using small molecules that bind to almost identical binding sites. The body controls this specificity using the interaction of proteins where the action occurs well outside the binding site. It's a miracle we have any kinase-based drugs at all.


18. Lucifer on June 20, 2009 1:55 PM writes...

Anti-microbial drugs?

Anti-microbial drugs, vaccines and sanitation have increased life expectancy more than all other medical advances combined.

//Using Lipinski rules is just a case of not repeating history. Chris Lipinski didn't really invent these rules - the human body did. The probability of having a successful drug is very reduced if you fall outside these ranges because you have a high probability of violating some PK or physicochemical limitation. Conversely, if you are inside the ranges that doesn't mean you have a drug.//



20. seenthelight on June 21, 2009 11:30 AM writes...

Of course synthesizing compounds of the complexity found in natural products is where new drugs will be found. The problem is that pharma would rather spend its money on TV ads, private jets, and executive compensation than on long, drawn-out synthetic projects that may or may not be of value. It will never happen. The real issue at hand is that "expensive" medical care (drugs, specialists, procedures) is going to go the way of the dinosaur; it's impossible for it to continue.

PS
99.9999% of the compound libraries are nothing but junk: easy-to-make, combichem-derived crap. A few "degradation products" in the mix doesn't preclude finding a hit.


21. NP_chemist on June 22, 2009 1:41 PM writes...

Well, #13 seems to have forgotten the best-selling drug of all time, atorvastatin, which is effectively the warhead from the original compactin with "different grease". The first group to show that this type of substitution could be successful was in fact Merck, in a paper published well before Warner-Lambert chemists "invented" atorvastatin.


22. drug_hunter on June 22, 2009 9:18 PM writes...

(1) My estimate is 10^24 - 10^30 plausible organic compounds, including variants on all known natural product scaffolds. Perhaps 1 out of every 10^6 of these will have reasonable drug-like properties. Meanwhile, the largest screening libraries are, what, 10^7?

(2) With new disease biology e.g. transcriptional regulation, disruption of protein-protein interactions, and so forth, we will need far more chemical diversity to make potent and selective drugs.

(3) Conclusion: there's still a lot of room for improvement in library construction and screening. Improvement both synthetically and computationally.
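Taking the estimates in point (1) at face value, the coverage arithmetic is easy to check:

    # Back-of-envelope from the numbers above: ~1e24 plausible compounds
    # (the low estimate), ~1 in 1e6 drug-like, libraries of ~1e7 compounds.
    druglike = 1e24 / 1e6                       # ~1e18 drug-like compounds
    library = 1e7
    print(f"Fraction of drug-like space screened: {library / druglike:.0e}")
    # -> 1e-11, even under the most optimistic size estimate

Even the low end of the range leaves the largest libraries sampling about one part in a hundred billion of the drug-like space.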


23. srp on June 26, 2009 8:09 PM writes...

It is striking that everyone here is totally on board with trying to design drugs "rationally" by fitting molecules to specific receptors on known pathways. This commitment is so strong that alternatives are not even considered.

Yet we know (from many of Derek's posts, no less) that the actual mechanisms of actual working drugs often (usually?) depart significantly from the original theory by which they were developed and approved. In addition, most of the big successful drugs of the past (aspirin?) were not developed by targeting a single pathway and indeed operate in complex ways on multiple pathways.

Until the state of the art in biology and computer modeling gets way more advanced, I find it hard to believe that "rational" interventions into such complex systems are likely to have a high success rate. My impression is that the outcome of evolution is biological systems with all kinds of messy feedbacks and feedforwards that operate in variable ways depending on environmental conditions. Treating these systems like machines designed by engineers which (sometimes) give simple responses to simple interventions strikes me as dogmatic and unrealistic, but there seem to be institutional (e.g. the FDA) and cultural (e.g. individual education and experience) factors that mandate that approach today.


24. Jose on June 26, 2009 8:19 PM writes...

Interesting to note that all the big pharmas have dropped their NP divisions, and all the small biotechs focused on NP platform development are now defunct.


25. transmetallator on July 2, 2009 11:26 AM writes...

At a "biology heavy" institution we have a large HTS program focused on screens for new interesting biology. The compound library is supplemented by compounds from several chemistry labs doing tot. syn. work and isolation fractions from a natural product group. Guess where a ton of hits come from? The natty P mixtures have very high hit rates and even if they have no idea what they are doing, validation then gives new compounds that can be optimized. Does industry do this? If not, why not?

