About this Author
[Photo: college chemistry, 1983]
[Photo: Derek Lowe, the 2002 model]
[Photo: after 10 years of blogging. . .]

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly; Twitter: Dereklowe

Chemistry and Drug Data: Drugbank
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
Not Voodoo

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
Realizations in Biostatistics
ChemSpider Blog
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Eye on FDA
Chemical Forums
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa

Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
Gene Expression (I)
Gene Expression (II)
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net

Medical Blogs
DB's Medical Rants
Science-Based Medicine
Respectful Insolence
Diabetes Mine

Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem

Politics / Current Events
Virginia Postrel
Belmont Club
Mickey Kaus

Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline


March 26, 2013

Automated Med-Chem, At Last?

Posted by Derek

I've written several times about flow chemistry here, and a new paper in J. Med. Chem. prompts me to return to the subject. This, though, is the next stage in flow chemistry - more like flow med-chem:

Here, we report the application of a flow technology platform integrating the key elements of structure–activity relationship (SAR) generation to the discovery of novel Abl kinase inhibitors. The platform utilizes flow chemistry for rapid in-line synthesis, automated purification, and analysis coupled with bioassay. The combination of activity prediction using Random-Forest regression with chemical space sampling algorithms allows the construction of an activity model that refines itself after every iteration of synthesis and biological result.

Now, this is the point at which people start to get either excited or fearful. (I sometimes have trouble telling the difference, myself). We're talking about the entire early-stage optimization cycle here, and the vision is of someone topping up a bunch of solvent reservoirs, hitting a button, and leaving for the weekend in the expectation of finding a nanomolar compound waiting on Monday. I'll bet you could sell that to AstraZeneca for some serious cash, and to be fair, they're not the only ones who would bite, given a sufficiently impressive demo and slide deck.

But how close to this Lab of the Future does this work get? Digging into the paper, we have this:

Initially, this approach mirrors that of a traditional hit-to-lead program, namely, hit generation activities via, for example, high-throughput screening (HTS), other screening approaches, or prior art review. From this, the virtual chemical space of target molecules is constructed that defines the boundaries of an SAR heat map. An initial activity model is then built using data available from a screening campaign or the literature against the defined biological target. This model is used to decide which analogue is made during each iteration of synthesis and testing, and the model is updated after each individual compound assay to incorporate the new data. Typically the coupled design, synthesis, and assay times are 1–2 h per iteration.
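As a very rough sketch of that design/make/test loop, here is what a Random-Forest activity model refitting after every iteration might look like. Everything below is my own illustrative assumption, not the paper's code: the two-integer "features" stand in for the DFG/hinge building-block pairings, and the `assay` function fakes a pIC50 where the real platform runs flow synthesis and a bioassay.

```python
# Toy closed-loop SAR cycle: a Random-Forest model, seeded with a few
# "literature" results, greedily picks the next analogue, a simulated
# assay returns a fake pIC50, and the model is refit each iteration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Virtual chemical space: every DFG-binder x hinge-binder pairing,
# encoded here only as a pair of indices (10 x 27, as in the paper).
candidates = np.array([[d, h] for d in range(10) for h in range(27)],
                      dtype=float)

def assay(x):
    """Stand-in for flow synthesis + purification + bioassay."""
    return 6.0 + 0.1 * x[0] + 0.05 * x[1] + rng.normal(0, 0.1)

# Seed the model with a handful of known results.
X = candidates[rng.choice(len(candidates), 5, replace=False)]
y = np.array([assay(x) for x in X])

model = RandomForestRegressor(n_estimators=100, random_state=0)
for _ in range(10):                        # ten design/make/test loops
    model.fit(X, y)                        # model updated every iteration
    untested = np.array([c for c in candidates
                         if not any((c == x).all() for x in X)])
    pick = untested[np.argmax(model.predict(untested))]  # greedy choice
    X = np.vstack([X, pick])
    y = np.append(y, assay(pick))

best = X[np.argmax(y)]                     # most potent pairing found
```

The real platform's 1-2 h per iteration is dominated by the chemistry and the assay; the model update itself, as the sketch suggests, is essentially instantaneous.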

Among the key things that already have to be in place, though, are reliable chemistry (fit to generate a wide range of structures) and some clue about where to start. Those are not givens, but they're certainly not impossible barriers, either. In this case, the team (three UK groups) is looking for BCR-Abl inhibitors, a perfectly reasonable test bed. A look through the literature suggested coupling hinge-binding motifs to DFG-loop binders through an acetylene linker, as in Ariad's ponatinib. This, while not a strategy that will earn you a big raise, is not one that's going to get you fired, either. Virtual screening around the structure, followed by eyeballing by real humans, narrowed down some possibilities for new structures. Further possibilities were suggested by looking at PDB structures of homologous binding sites and seeing what sorts of things bound to them.

So already, what we're looking at is less Automatic Lead Discovery than Automatic Patent Busting. But there's a place for that, too. Ten DFG pieces were synthesized, in Sonogashira-couplable form, and 27 hinge-binding motifs with alkynes on them were readied on the other end. Then they pressed the button and went home for the weekend. Well, not quite. They set things up to try two different optimization routines, once the compounds were synthesized, run through a column, and through the assay (all in flow). One will be familiar to anyone who's been in the drug industry for more than about five minutes, because it's called "Chase Potency". The other one, "Most Active Under Sampled", tries to even out the distributions of reactants by favoring the ones that haven't been used as often. (These strategies can also be mixed). In each case, the model was seeded with binding constants of literature structures, to get things going.
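A back-of-the-envelope version of those two selection rules, entirely my own sketch (the paper's actual algorithms and scoring are more involved), could look like this: "Chase Potency" just takes the highest predicted activity, while "Most Active Under Sampled" down-weights pairings whose reactants have already been used often.

```python
# Two toy compound-selection strategies over (DFG, hinge) pairings.
from collections import Counter

def chase_potency(untested, predict):
    """Greedy: pick the pairing with the highest predicted activity."""
    return max(untested, key=predict)

def most_active_under_sampled(untested, predict, history):
    """Penalize pairings whose reactants are already well sampled."""
    dfg_counts = Counter(d for d, h in history)
    hinge_counts = Counter(h for d, h in history)
    def score(pair):
        d, h = pair
        penalty = dfg_counts[d] + hinge_counts[h]
        return predict(pair) / (1.0 + penalty)
    return max(untested, key=score)

# Made-up prediction table and synthesis history for illustration:
preds = {("d1", "h1"): 8.0, ("d1", "h2"): 7.9, ("d2", "h1"): 7.0}
predict = preds.get
history = [("d1", "h1"), ("d1", "h3")]   # DFG piece d1 used twice already
untested = [("d1", "h2"), ("d2", "h1")]
```

With these numbers the two rules diverge: Chase Potency takes the 7.9 compound built on the already-popular d1, while Under Sampled prefers the slightly weaker 7.0 compound that uses the neglected d2, which is exactly the exploration/exploitation trade-off the mixed strategy tries to balance.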

The first run, which took about 30 hours, used the "Under Sampled" algorithm to spit out 22 new compounds (there were six chemistry failures) and a corresponding SAR heat map. Another run was done with "Chase Potency" in place, generating 14 more compounds. That was followed by a combined-strategy run, which cranked out 28 more compounds (with 13 failures in synthesis). Overall, there were 90 loops through the process, producing 64 new products. The best of these were nanomolar or below.

But shouldn't they have been? The deck already has to be stacked to some degree for this technique to work at all in the present stage of development. Getting potent inhibitors from these sorts of starting points isn't impressive by itself. I think the main advantage of this is the time needed to generate the compounds and the assay data. Having the synthesis, purification, and assay platform all right next to each other, with compound being pumped right from one to the other, is a much tighter loop than the usual drug discovery organization runs. The usual, if you haven't experienced it, is more like "Run the reaction. Work up the reaction. Run it through a column (or have the purification group run it through a column for you). Get your fractions. Evaporate them. Check the compound by LC/MS and NMR. Code it into the system and get it into a vial. Send it over to the assay folks for the weekly run. Wait a couple of days for the batch of data to be processed. Repeat."

The science-fictional extension of this is when we move to a wider variety of possible chemistries, and perhaps incorporate the modeling/docking into the loop as well, when it's trustworthy enough to do so. Now that would be something to see. You come back in a few days and find that the machine has unexpectedly veered off into photochemical 2+2 additions with a range of alkenes, because the Chase Potency module couldn't pass up a great cyclobutane hit that the modeling software predicted. And all while you were doing something else. And that something else, by this point, is. . .what, exactly? Food for thought.

Comments (16) + TrackBacks (0) | Category: Chemical News | Drug Assays | Drug Development


1. Henry's cat on March 26, 2013 12:00 PM writes...

Scientists gainfully researching the best way to make themselves obsolete; turkeys voting for Christmas. Sorry, Thanksgiving.


2. nitrosonium on March 26, 2013 12:20 PM writes...

somewhere in China they are building...everything....uh i mean an expansive research park that will house hundreds of these "automated" systems. now all the med-chem jobs outsourced from the US will be lost!!! OH NO!! gotta go. the drones are watching me now.


3. stewie griffin on March 26, 2013 12:33 PM writes...

Reminds me of Kurt V's Player Piano


4. anchor on March 26, 2013 12:49 PM writes...

...reminds me of those Wild West movies where everyone is shooting everything to oblivion! What's next? Some kind of micro-fluidization park in China? Can't wait to hear.


5. anon the ii on March 26, 2013 3:00 PM writes...

This is just an advertisement for Cyclofluidic. Nobody in his right mind would couple so many processes together where the failure of one brings the whole thing to a halt. And microfluidics for a variety of inputs is also a horrible idea. Clog...Dead. It's like all the worst lessons from combichem and HTS embraced and employed.


6. Cytirps on March 26, 2013 3:18 PM writes...

Nicely designed example, but one should have realized after the combichem era that SAR direction cannot be limited to a handful of easy reactions. It is not efficient if the reaction time is longer than an hour; it ties up the LC/MS.


7. JC on March 26, 2013 3:23 PM writes...

I for one look forward to our Robot Synthesis Overlords.


8. Stephen on March 26, 2013 7:35 PM writes...

Running the chemistry is easy. Automating it is even easier. Diluting it enough to prevent clogging isn't difficult. The real difficulty is in the purification (as Cytirps mentioned). If your products were roughly the same polarity then you could more readily use the same HPLC method. But, if you really want to crank through some substrate diversity you'll have a wide variety of LC methods, or maybe a Vici colt revolver chamber of LC columns.
And this is assuming you're running a methodology that doesn't require an extractive work-up.


9. Anonymous on March 26, 2013 9:20 PM writes...

Should definitely sell this to Pfizer! This is a perfect fit for their brilliant med chem "designers" strategy. Here you go: designers and robots to invent new drugs...


10. petros on March 27, 2013 3:09 AM writes...

This work was done in collaboration with, and funded by, Pfizer. Several of the authors are (ex-)Pfizer employees.


11. London Chemist on March 27, 2013 7:10 AM writes...

#9 Anonymous

At least half the names on that paper are ex-Pfizer (Sandwich)....


12. Beentheredonethat on March 27, 2013 8:37 AM writes...

Makes me feel kind of glad I'm out of the rat race. Has the same sort of whiff I smelt when combinatorial chemistry first reared its ugly head. Good luck to all still in the industry!


13. anon the II on March 27, 2013 8:40 AM writes...

@8 Stephen

You drastically underestimate the clogging problem. The precipitation problem IS the purification problem. You're still young. After you've tried automating enough different chemistries, you'll see it differently.


14. The Count on March 27, 2013 3:07 PM writes...

Always pleased to see that the snake oil salesmen are still in business. As usual, a loaded deck, since we've chosen the framework and the starting materials at the beginning. We could have made all of these compounds in a plate and purified them in a single day in the combichem era. The only difference I see is that they're not waiting for their weekly screening slot, that triumph of dumb pharma and the pursuit of group efficiency.


15. zDNA on March 28, 2013 10:09 AM writes...

This looks like a fabulous way to maximize potency while minimizing drug-like properties. Haven't we had enough of this already without building a Rube Goldberg apparatus to churn out more useless crap? (@5 anon the ii)

The authors are the cardinals in pectore of the Church of Potency. If only they knew.


16. Stephen on March 29, 2013 12:14 AM writes...

@13 anon the II

You are right. Clogging is the purification problem. I was only referring to the relative concentrations within the reactor itself. Of course, the only solution to that problem is to fit one's chemistry into a narrowly defined box, which doesn't do a damn thing toward proving the utility of the method.

On the other hand, using microfluidics as a tool for understanding process space is much more useful. And exciting.



