Corante

About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship on his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek email him directly: derekb.lowe@gmail.com Twitter: Dereklowe


In the Pipeline


April 3, 2009

The Mechanical Chemist?


Posted by Derek

We use a lot of automated equipment in the drug discovery business. There’s an awful lot of grunt work involved, and in many cases a robot arm is better suited to the task – transferring solutions, especially repetitive transfers of large numbers of samples, is the classic example. High-throughput screening would just not be possible if you had to do it all by hand; my fingers hurt just imagining all the pipetting that would involve.

But I wouldn’t say that the process of medicinal chemistry is at all automated. That’s very much human-driven, and a lot of the compounds on most med-chem projects are made by hand, one at a time. Sure, there are parallel synthesis techniques, plates and resins and multichannel liquid handlers that will let you set up a whole array of reactions at once. But you do that, typically, only after you’ve found a hot compound, and that’s often done the old-fashioned way. (And, of course, there are a lot of reactions that just don’t lend themselves to efficient parallel synthesis).

But I remember the first time I saw an automated synthetic apparatus, back at an ACS meeting in the mid-1980s. There was a video in the presentation (a real rarity back then), and it showed this Zymark arm being run to set up an array of reactions, assay each of them after an overnight run, and report on the one that performed the best. “Holy cow”, I thought, “someone’s invented the mechanical grad student”. Being a grad student at the time, I wasn’t so sure what I thought about that.

This all comes to mind after reading a report over at Wired about a robotic system that is claimed to have made a discovery without much human input at all. “Adam”, built at Aberystwyth University in Wales, seems to have been set up to look for similarities in yeast genes whose function hadn’t yet been assigned, and then (using a database of possible techniques) to set up experiments to test the hypotheses thus generated. The system was also equipped to follow up on its results, and eventually uncovered a new three-gene pathway, findings that were then confirmed by hand.

And Ross King, leading the project at Aberystwyth, is apparently extending the idea to drug discovery. Using a system that (inevitably) will be called “Eve”, he plans to:

. . .autonomously design and screen drugs against malaria and schistosomiasis.

"Most drug discovery is already automated," says King, "but there's no intelligence — just brute force." King says Eve will use artificial intelligence to select which compounds to run, rather than just following a list.

Well, I won't take the intelligence comment personally; I know what the guy is trying to say. I’ll be very interested to see how this is going to be implemented, and how it will work out. (I'll get an e-mail off to Prof. King asking for some details). My first thought was that Eve will be slightly ahead of a couple of the less competent people I’ve seen over the course of my career. And if I can say that with a straight face (and now that I think about it, I believe that I can), then there may well be a place for this sort of thing. I’ve long held that jobs which can be done by machines really should be done by machines.

But how is this going to work? The first way I can see to run a computational algorithm for drug design would be some sort of QSAR, and we were just talking about that here the other day – most unfavorably. I can imagine, though, coding a lot of the received wisdom of drug discovery into an expert system – Topliss tree for aryl substituents, switch thiophene for phenyl, move nitrogens around the rings, add a para-fluoro, check both enantiomers, put in a morpholine for solubility, mess with the basicity of your amine nitrogens, no naphthyls if you can help it, watch your logD – my med-chem readers will know just the sorts of things I mean.
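It's striking how little code a caricature of that received-wisdom rule set takes. Here's a toy sketch of the expert-system idea, and only a sketch: the rules, the starting compound, and the naive substring matching on SMILES strings are all invented for illustration, and a real system would use a cheminformatics toolkit with proper substructure handling.

```python
# Toy expert system for analog generation. Every rule and SMILES string
# here is illustrative only; real substructure replacement needs a
# cheminformatics toolkit (e.g. RDKit), not substring swaps.

ANALOG_RULES = [
    # (rule name, substructure to look for, replacement substructure)
    ("switch thiophene for phenyl", "c1ccccc1", "c1ccsc1"),
    ("move a nitrogen into the ring", "c1ccccc1", "c1ccncc1"),
    ("add a para-fluoro", "c1ccccc1", "c1ccc(F)cc1"),
]

def generate_analogs(smiles):
    """Apply each matching rule once, returning (rule name, new SMILES) pairs."""
    analogs = []
    for name, pattern, replacement in ANALOG_RULES:
        if pattern in smiles:
            analogs.append((name, smiles.replace(pattern, replacement, 1)))
    return analogs

# Ethylbenzene as a stand-in starting point:
for rule, analog in generate_analogs("CCc1ccccc1"):
    print(rule, "->", analog)
```

The hard parts, of course, are exactly what this leaves out: enumerating substitution sites, keeping the chemistry valid, and knowing when a rule is worth firing at all.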

Now, automating that, along with feedback from the primary and secondary assays, solubility, PK, metabolite ID and so on. . .mix it in with literature-searching capability for similar compounds, some sort of reaction feasibility scoring function, the ability to order reagents from the stockroom, analyze the LC/MS and NMR traces versus predictions, weight the next round of analogs according to what the major unmet project goals are. . .well, we're getting to the mechanical medicinal chemist, sure enough. Now, not all of these things are doable right now. In fact, some of them are rather a long way off. But some of them could be done now, and the others, well, they're certainly not impossible.
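That last step, weighting the next round by the project's unmet goals, is at least easy to sketch. A minimal illustration, with every property value, goal weight, and compound name invented for the purpose:

```python
# Sketch of goal-weighted candidate selection. The weights, the 0-1
# property scores, and the compound IDs are all made up for illustration.

GOAL_WEIGHTS = {"potency": 1.0, "solubility": 2.0, "metabolic_stability": 1.5}
# Solubility weighted highest: pretend that's the project's biggest gap.

def score(candidate):
    """Weighted sum of predicted property scores (each on a 0-1 scale)."""
    return sum(GOAL_WEIGHTS[k] * candidate["predicted"][k] for k in GOAL_WEIGHTS)

def next_round(candidates, n=2):
    """Pick the n best-scoring analogs for the next synthesis round."""
    return sorted(candidates, key=score, reverse=True)[:n]

candidates = [
    {"id": "A-1", "predicted": {"potency": 0.9, "solubility": 0.2, "metabolic_stability": 0.5}},
    {"id": "A-2", "predicted": {"potency": 0.6, "solubility": 0.8, "metabolic_stability": 0.6}},
    {"id": "A-3", "predicted": {"potency": 0.3, "solubility": 0.4, "metabolic_stability": 0.9}},
]
print([c["id"] for c in next_round(candidates)])
```

The interesting question isn't the arithmetic, naturally; it's where those predicted numbers and weights come from, which is where the assay feedback and the human judgment come back in.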

I'm not planning on being replaced any time soon. But the folks cranking out the parallel libraries, the methyl-ethyl-butyl-futile stuff, they might need to look over their shoulders a bit sooner. That's outsourcing if you like – from the US to China and India, and from there to the robots. . .

Comments (28) + TrackBacks (0) | Category: Drug Development | Drug Industry History | General Scientific News | Life in the Drug Labs


COMMENTS

1. Ed on April 3, 2009 9:45 AM writes...

I remember reading about something called DrugGuru (from Abbott, IIRC), and something similar from JNJ. These were showcased as offering up solutions to many of the issues that you highlight, Derek. Any insiders able to shed any light on how these perform?

Maybe we'll see a more German model coming to the US and UK, but the lab head will have a "magic" PC that supplies most of the solutions.


2. Retread on April 3, 2009 10:33 AM writes...

Don't sweat it. You are all probably too young to remember the hype about artificial intelligence in the late 70s and early 80s. LISP was the hot language, and machine intelligence and expert systems were going to put anyone with an IQ under 200 out of business. People (including MDs such as myself, who made money by figuring out what was wrong with people) were significantly worried. The Japanese devoted mucho yen to something called the Fifth Generation Computer Systems project in the 80s.

Not much happened. The MD hasn't been replaced (although the machines we use for diagnosis are incredibly better), nor has any other expert, to my knowledge.

Why not? The answer seems to be that the problems AI was supposed to solve are incredibly hard (like drug discovery).

One of the reasons drug discovery is so hard is that we have very incomplete information about the system we are trying to alter. As an example, have a look at the 20 February '09 Issue of Cell -- it's all about the functions of RNAs in the cell which don't directly code for protein -- the part transcribed from 'junk DNA'. What if some of the drugs you are making actually produce their effects on these RNAs rather than the proteins you think you are targeting? There is an excellent precedent for this. Bacteria use a variety of small molecules (B12, thiamine) to interact with RNAs (called riboswitches) to alter gene expression, and their complexity is far less than eukaryotes. Also since riboswitches have been found in some plants, can we be far behind?

Take heart, don't lose any sleep, and keep working. Our patients can always use better drugs.


3. Bob Hawkins on April 3, 2009 11:10 AM writes...

Many years ago, I attended a talk by the spectroscopist Tomas Hirschfeld. He gave examples where exhaustive computer searches identified stuff in spectra. "It never would have occurred to me to look for these things, and some of these results are of interest. On the other hand, I am not going down in history as the first man to co-author a paper with an IBM 4341!"

Who will be the first to give a robot its due?


4. Cloud on April 3, 2009 11:42 AM writes...

@Bob Hawkins- Tomas Hirschfeld's actual coauthor should have been the person who wrote the software that powered the computer search. The algorithm is what made any discoveries, not the hardware.

I work in scientific informatics. My opinion on this sort of thing is that a lot of scientific data analysis is about 90% automatable, and that this part of the analysis is usually the drudgework that no one really misses doing. That last 10% really needs the scientist, and this is the fun part where you might actually learn something new and unexpected from the data.

I haven't gone and read the Wired article or looked for more info on this, but it sounds to me like this Welsh system is just removing some more of the drudgework of analyzing large pathway searching data sets. It has an algorithm that identifies likely "hits" and can go and test its predictions. Using this, a scientist could get a lot of the basic analyses done automatically, leaving him/her free to focus on the data that don't fit neatly into the algorithm. Because there are always data that don't fit neatly into your algorithm when you're dealing with biology, and those data are often where the really interesting things are buried.
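That 90/10 split can be caricatured in a few lines: an automated pass handles the clear calls and routes everything ambiguous to a human. This is an illustration only; the gene names, readings, and cutoffs are all invented.

```python
# Toy triage of assay readings: auto-accept clear hits, auto-reject clear
# misses, and send everything in between to a scientist. All values invented.

def triage(readings, hit_cutoff=0.8, miss_cutoff=0.2):
    """Split (name, reading) pairs into auto-hits, auto-misses, and 'needs a scientist'."""
    hits, misses, review = [], [], []
    for name, value in readings:
        if value >= hit_cutoff:
            hits.append(name)
        elif value <= miss_cutoff:
            misses.append(name)
        else:
            review.append(name)  # the interesting fraction that doesn't fit neatly
    return hits, misses, review

readings = [("gene-A", 0.95), ("gene-B", 0.05), ("gene-C", 0.55), ("gene-D", 0.85)]
print(triage(readings))
# -> (['gene-A', 'gene-D'], ['gene-B'], ['gene-C'])
```

The review bucket is the point: in real data it's where the assumptions behind the cutoffs break down, and where the unexpected biology tends to hide.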


5. Nick K on April 3, 2009 11:48 AM writes...

Many, many years ago there was great excitement about computer programs like Corey's LHASA, which were going to revolutionise the synthesis of complex natural products. What became of them? Does anyone still use them?


6. Dan on April 3, 2009 12:03 PM writes...

I love the idea of telling a computer to set up a reaction without leaving my desk - and I would imagine that the mechanics of doing that would be reasonably easy to implement. But this isn't going to put medicinal chemists out of a job (you still would need someone to tell the computer what to do). Computer programmers don't even know enough chemistry to start writing a program that will prepare structures w/o any human input.

I remember an analogy from back when Deep Blue was all the rage for beating Kasparov. The simpler the system, the more easily a computer can solve it. Chess, a game with ~50 possible moves per position and 20-40 moves to finish, can be handled by a fast computer (at least well enough to beat the best human competitor). Chemistry, which is far more complex (and much less well understood), isn't going to succumb to brute-force analysis any time soon. At best the computer could be a tool to remove some of the drudgery from benchwork.


7. Bob on April 3, 2009 12:28 PM writes...

"That's outsourcing if you like - from the US to China and India, and from there to the robots." I think this could possibly lead to de-outsourcing (or re-insourcing?) - if RoboMeChem is cheap, small, and easy to use (all relative, of course), the places that have outsourced to places where labor is cheap may bring the work back in-house. Faster turn-around, and lots less chance of leakage of intellectual property.


8. S Silverstein on April 3, 2009 1:52 PM writes...

These miracles are possible in the future, but beware the syndrome of inappropriate overconfidence in computing in the now.

Example:

Use of electronic health records (EHR) data (sloppy, uncontrolled, input by various specialists from RNs to med students to docs under pressured circumstances and for varied reasons, e.g. to maximize reimbursement) is deemed by our new Sec'y of HHS as usable on a national scale for "comparative effectiveness research."

We can't even use EHR data to detect signals of severe, tangible adverse drug effects reliably, but through some cybernetic miracle we'll be using this data to compare two drugs for subtle issues related to outcomes.

Right...


9. Bored on April 3, 2009 2:08 PM writes...

"We are the Borg. You will be assimilated, even you bench chemists."


10. Hap on April 3, 2009 2:22 PM writes...

How are electronic health records likely to be worse than the current ones? I was under the impression that current records were not usually legible or easily accessible, and that having common formats would help make them better and less error-prone. (While past records would need to be input by random people, it would also seem to make current records easier because of the handwriting thing - doctors could input directly, as now, and so future records would be more easily stored and retrieved.) I assume I am mistaken in these assumptions. How?


11. Robin on April 3, 2009 2:47 PM writes...

This is off topic, but you may be interested to know that the Court of Appeals for the Federal Circuit released its ruling in the Ariad-Lilly case today. The panel invalidated all four claims that were found by the lower court to be infringed by Lilly. As noted in the opinions, this ruling is entirely consistent with the CAFC's previous ruling in the Rochester v. Searle case. No comment yet from either company.


12. Ihopecarolinaloses on April 3, 2009 3:45 PM writes...

Derek,

Be sure to ask Prof. King what his measure of success for 'Eve' will be. How many times have we witnessed a 10 micromolar hit get discovered and victory claimed, only for it never to be heard from again?


13. Lu on April 3, 2009 4:44 PM writes...

Cloud, can't agree more with you.

Our lab works on chemometric analysis of spectra (among other things). We did some analysis using support vector machine learning methods (next-generation neural-network-style algorithms) and nobody had even the slightest doubt who gets the credit for the work :) Seriously, does anyone place an Eppendorf pipettor as a co-author on a paper?
Computer algorithms are just tools in the hands of scientists. Machines can automate routines and help people be more productive but they cannot come up with original ideas. The actual design of hypotheses and experiments will be on us. There will always be a human inside the mechanical turk.

Over-interpretation of machine results is another problem. There is only so much information a human being can master and it's HARD to be an expert in chemistry and mathematics at the same time.


14. Paul S on April 3, 2009 4:49 PM writes...

I foresee these systems being used mainly as assistants to medicinal chemists, at least in the near future. They could be used to discover drug targets, as Adam was, and possibly to synthesize promising compounds, as Eve supposedly will be. Eventually, though, their results will be handed off to human scientists for evaluation and to suggest new research lines.

I believe the human scientist's role will become increasingly supervisory and managerial. I don't see this as a threat to professionals though - at least not for many years. The US is already experiencing a critical shortage of scientifically trained personnel. If all goes well then these systems will do no more than solve that shortage, and perhaps help us make up for the lack of science education in our schools.

At least, that's my hope.


15. JC on April 3, 2009 4:55 PM writes...

"The US is already experiencing a critical shortage of scientifically trained personnel" - It is?


16. barry on April 3, 2009 5:09 PM writes...

We are prone to periodic messianic fevers. There was LHASA, and rational design, and combinatorial chemistry and massive parallel screening, each of which was going to render med-chem obsolete. When the fever cools, we're still at work, but with a new tool.
If I remember the sequence rightly, after "Adam" and "Eve" we should be on the lookout for "Cain".


17. S Silverstein on April 3, 2009 5:23 PM writes...

How are electronic health records likely to be worse than the current ones?

Hap, my point was not about EMR vs. paper, it was that EMR's facilitate uses - and abuses - of data that were impractical with paper.

The issue of EMR difficulties is an entire other matter. Do a google search on "healthcare IT failure" for more on that.


18. anon the II on April 3, 2009 6:05 PM writes...

What's Paul S smoking? I want some.

Actually, I'd prefer to have one of those jobs for which there is such a "critical shortage of scientifically trained personnel."


19. Smiler on April 4, 2009 7:39 AM writes...

Ok, my RoboMechem is called 'Nieve'. Seriously, read this post, then read the earlier post on QSAR and activity cliffs.


20. John D. on April 4, 2009 3:10 PM writes...

Disclaimer: I'm not a chemist.

What I wonder is, how do you program it to avoid accidentally creating "things Derek won't work with" and it all suddenly goes boom? Or, if not "boom," then you come back in the room to find a steaming hole in the concrete and the arm is still pouring things down into the hole....


21. anon on April 4, 2009 7:54 PM writes...

"What I wonder is, how do you program it to avoid accidentally creating "things Derek won't work with" and it all suddenly goes boom?"

Maybe this could solve the "outsourcing to slave-labor camps in China" problem; have them make all the perchlorates and azides.....


22. Shane on April 5, 2009 3:17 AM writes...

Is this so-called drudge work really so bad? I often have my best ideas and insights when I am collecting fractions or cleaning glassware.

Sure, coming up with earth-shattering hypotheses is fun, but could you keep it up 7 hours a day, 40 weeks a year, year after year?

If this vision is realised, won't this just result in two thirds of the workforce becoming redundant, and one third being kept on to do an ever-increasing workload under mounting mental duress?
And if the whole target/screening/combichem-driven process is a dud anyway, then how does this actually lead to useful clinical outcomes?

This whole premise relies on a misguided impression about what research actually involves and a false dichotomy between brain work and body work.


23. zts on April 5, 2009 7:30 AM writes...

Shane said: "If this vision is realised, won't this just result in two thirds of the workforce becoming redundant, and one third being kept on to do an ever-increasing workload under mounting mental duress?"

And if two thirds of the workforce is redundant, that is two thirds fewer brains thinking about the problems. Two thirds fewer people challenging the prejudices and dogmas of the projects. Lower-quality science, lower-quality compounds advancing, fewer new drugs. Same problem with so much outsourcing, or with companies structured so the peons aren't allowed to think and can only blindly follow the leader.


24. CMC guy on April 5, 2009 10:22 AM writes...

Sounds like another potential "tool for the toolbelt" of med chem that will be overhyped as a new paradigm (because it sounds good and should be cheaper) but will be harder to implement and applicable to limited cases rather than of general utility. There have been many past phases that have reached various levels of help vs. hindrance. I don't think AI has yet achieved the insight and intuition that can duplicate the combined thinking of multiple disciplines and experience for complexity of drug design. Even in (the original, at least) Star Trek, the Replicator required some human guidance (Dr. McCoy) to produce the miracle cure.


25. S Silverstein on April 5, 2009 1:07 PM writes...

I don't think AI has yet achieved the insight and intuition that can duplicate the combined thinking of multiple disciplines and experience for complexity of drug design.

AI has neither intuition nor insight. It has only what limited snippets of such facets of human cognition a programmer can impart; since humans don't understand how their own minds work, those snippets are limited indeed.

However, these claims make for great PR. One wonders if there are marketing types behind it.

Then again, the whole IT field's aura and mystique beyond the mundane (i.e., PriceLine.com) may be a product of marketing types.

Claims of cybernetic miracles have been made since the days of the ENIAC.


26. Keith Robison on April 6, 2009 8:38 AM writes...

A report on the robot "Adam"'s findings was in last Friday's Science. Also, there is a commentary and podcast interview.

It should be noted that a very significant precedent has been set -- the robot is not one of the authors despite it generating the data! Perhaps they should have named it Rosalind.


27. Jonadab the Unsightly One on April 7, 2009 8:48 PM writes...

Well, computers are very good at certain kinds of things. They're fantastically good at remembering, indexing, sorting based on predefined criteria, that sort of thing. And they don't get bored, no matter how tedious the work they're doing is, so they're really great for the truly mind-numbing stuff, like brute-force exhausting every possible permutation of a set.

But they never understand the implications of what they're doing. They just go on doing what they were told. So you have to tell them *everything*. If you want the computer to stop and tell a human if one of the reactions it runs is exothermic, you have to specifically tell it to watch for that. If you want it to stop and tell you if one of the compounds has significantly higher solubility in water than the others, you have to specifically tell it to watch for that. And so on.

So the quality of your output is going to greatly depend on the experience and cleverness and so forth of whoever programs the thing.


28. Aamy Lee on June 5, 2009 5:46 AM writes...

Nice Article... I've a query about robotic system ... Can you help me please...


