In the Pipeline


April 4, 2014

Ancient Modeling


Posted by Derek

I really got a kick out of this picture that Wavefunction put up on Twitter last night. It's from a 1981 article in Fortune, and you'll just have to see the quality of the computer graphics to really appreciate it.

That sort of thing has hurt computer-aided drug design a vast amount over the years. It's safe to say that in 1981, Merck scientists did not (as the article asserts) "design drugs and check out their properties without leaving their consoles". It's 2014 and we can't do it like that yet. Whoever wrote that article, though, picked those ideas up from the people at Merck, with their fuzzy black-and-white monitor shots of DNA from three angles. (An old Evans and Sutherland terminal?) And who knows, some of the Merck folks may have even believed that they were close to doing it.

But computational power, for the most part, only helps you out when you already know how to calculate something. Then it does it for you faster. And when people are impressed (as they should be) with all that processing power can do for us now, from smart phones on up, they should still realize that these things are examples of fast, smooth, well-optimized versions of things that we know how to calculate. You could write down everything that's going on inside a smart phone with pencil and paper, and show exactly what it's working out when it displays this pixel here, that pixel there, this call to that subroutine, which calculates the value for that parameter over there as the screen responds to the presence of your finger, and so on. It would be wildly tedious, but you could do it, given time. Someone, after all, had to program all that stuff, and programming steps can be written down.
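
To make that concrete, here's a toy sketch of the kind of exactly specified calculation I mean: compositing a highlight over one background pixel. The numbers are invented, but every step is plain arithmetic that you could, in principle, do by hand.

```python
# Toy illustration (invented numbers): alpha-compositing a single
# screen pixel. Wildly tedious with pencil and paper, but every step
# is exact, well-defined arithmetic -- nothing here is approximated.

def blend_pixel(background, overlay, alpha):
    """Blend an RGB overlay onto a background pixel at a given opacity."""
    return tuple(round(alpha * o + (1 - alpha) * b)
                 for b, o in zip(background, overlay))

# A gray background pixel under a blue touch-highlight at 40% opacity:
print(blend_pixel((128, 128, 128), (0, 0, 255), 0.4))
# -> (77, 77, 179)
```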

The programs that drove those old DNA pictures could be written down, too, of course, and in a lot less space. But while the values for which pixels to light up on the CRT display were calculated exactly, the calculations behind those were (and are) a different matter. A very precise-looking picture can be drawn and animated of an animal that does not exist, and there are a lot of ways to draw animals that do not exist. The horse on your screen might look exact in every detail, except with a paisley hide and purple hooves (my daughter would gladly pay to ride one). Or it might have a platypus bill instead of a muzzle. Or look just like a horse from outside, but actually be filled with helium, because your program doesn't know how to handle horse innards. You get the idea.

The same for DNA, or a protein target. In 1981, figuring out exactly what happened as a transcription factor approached a section of DNA was not possible. Not to the degree that a drug designer would need. The changing conformation of the protein as it approaches the electrostatic field of the charged phosphate residues, what to do with the water molecules between the two as they come closer, the first binding event (what is it?) between the transcription factor and the double helix, leading to a cascade of tradeoffs between entropy and enthalpy as the two biomolecules adjust to each other in an intricate tandem dance down to a lower energy state. . .that stuff is hard. It's still hard. We don't know how to model some of those things well enough, and the (as yet unavoidable) errors and uncertainties in each step accumulate the further you go along. We're much better at it than we used to be, and getting better all the time, but there's a good way to go yet.
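
To see how fast those errors pile up, here's a back-of-the-envelope sketch. The step names and error magnitudes are invented for illustration, not measured values; the point is just that independent errors combine in quadrature, growing with every stage a model chains together.

```python
import math

# Back-of-the-envelope sketch (step names and magnitudes invented):
# if each stage of a binding-energy model carries an independent
# uncertainty, the running total combines roughly in quadrature and
# grows with every stage added.
step_errors_kcal = [
    ("protein conformational change", 1.5),
    ("desolvation / interfacial waters", 2.0),
    ("phosphate electrostatics", 1.0),
    ("entropy/enthalpy compensation", 1.5),
]

total = 0.0
for step, err in step_errors_kcal:
    total = math.sqrt(total**2 + err**2)
    print(f"after {step:34s} +/- {total:.1f} kcal/mol")

# At ~1.4 kcal/mol per factor of ten in binding constant, a cumulative
# +/- 3 kcal/mol means a ~100-fold uncertainty in affinity either way.
```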

But while all that's true, I'm almost certainly reading too much into that old picture. The folks at Merck probably just put one of their more impressive-looking things up on the screen for the Fortune reporter, and hey, everyone's heard of DNA. I really don't think that anyone at Merck was targeting protein-DNA interactions 33 years ago (and if they were, they splintered their lance against that one, big-time). But the reporter came away with the impression that the age of computer-designed drugs was at hand, and in the years since, plenty of other people have seen progressively snazzier graphics and thought the same thing. And it's hurt the cause of modeling for them to think that, because the higher the expectations get, the harder it is to come back to reality.

Update: I had this originally as coming from a Forbes article; it was actually in Fortune.

Comments (22) + TrackBacks (0) | Category: Drug Industry History | In Silico


COMMENTS

1. LeeH on April 4, 2014 8:37 AM writes...

I wish all medicinal chemists had your level of expectation, Derek. It would make our job a lot easier.


2. Esteban on April 4, 2014 8:41 AM writes...

Reminds me of the current hoopla over Big Data. More data doesn't guarantee more insight any more than more CPU/memory guarantees in silico drug discovery.

The term 'subroutine' takes me back - I haven't heard that term since I took Fortran programming way back when. I'm guessing Fortran is the programming language on which the author cut his teeth.


3. annon too on April 4, 2014 9:04 AM writes...

I know of a med chemist who once went to an associate of his, saying that the in vitro data that had been generated had to be wrong, and needed to be done again, because the results did not agree with the modeling's "minimal energy" calculations. The associate's reply was that the evaluation had been done the same way as for the many other compounds that had been tested, and that when a good result came in (independent of any modeling), the med chemist never questioned those numbers. So why should these be questioned? The med chemist went away, silent. The biochemist had the "modeled" compound rerun anyway, with the same result as the first evaluation. Even so, the med chemist kept using modeling to guide his nose for many years, without questioning what it was suggesting, when it might not be the entire story, or when it might simply be wrong.


4. Wavefunction on April 4, 2014 9:05 AM writes...

As usual you beat me to it, Derek. For anyone who wants the exact reference, it's the Fortune magazine issue from Oct 5, 1981. The article is widely considered both the moment when CADD came to the attention of the masses and a classic lesson in hype. The article itself is really odd, since most of it is about computer-aided design in the industrial, construction and aeronautical fields, where the tools have actually worked exceedingly well. The part about drug design was almost a throwaway, with hardly any explanation in the text.

Another way to look at the issue is to consider a presentation by Peter Goodford in 1989 (cited in a highly readable perspective by John van Drie, J Comput Aided Mol Des (2007) 21:591–601) in which he laid out the major problems in molecular modeling - things like including water, building homology models, calculating conformational changes, predicting solubility, predicting x-ray conformations, etc. What's interesting is that - aside from homology modeling and x-ray conformations - we are struggling with the exact same problems today as we were in the 80s.

That doesn't mean we haven't made any progress, though. Far from it, in fact. Even though many of these problems are still unsolved at a general level, the number of successful specific examples is on the rise, so at some point we should be able to derive a few general principles. In addition, we have made a huge amount of progress in understanding the issues, dissecting the various operational factors, and building up a solid database of results. Fields like homology modeling have actually seen very significant advances, although that's as much because of the rise of the PDB, enabled by crystallography, as because of accurate sequence comparison and threading algorithms. We are also now aware of the level of validation that our results need to have for everyone to take them seriously. Journals are implementing new standards for reproducibility, and knowledge of the right statistical validation techniques is becoming more widespread; as Feynman warned us, hopefully this will stop us from fooling ourselves.
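
To make the validation point concrete, here's the sort of toy retrospective check I have in mind (the scores below are invented, not from any real program): scoring known actives against decoys and asking for decent ROC AUC and early enrichment is about the minimum evidence anyone should show.

```python
# Toy retrospective validation (invented scores, not a real benchmark):
# rank known actives against decoys by docking score, then compute
# ROC AUC and an early enrichment factor.

def roc_auc(actives, decoys):
    """Probability a random active outscores a random decoy
    (higher score = better); ties count as half a win."""
    wins = sum((a > d) + 0.5 * (a == d) for a in actives for d in decoys)
    return wins / (len(actives) * len(decoys))

def enrichment_factor(actives, decoys, top_frac=0.01):
    """Actives recovered in the top-scoring fraction vs. random picking."""
    ranked = sorted([(s, 1) for s in actives] + [(s, 0) for s in decoys],
                    reverse=True)
    n_top = max(1, int(top_frac * len(ranked)))
    hits = sum(label for _, label in ranked[:n_top])
    return (hits / n_top) / (len(actives) / len(ranked))

# Hypothetical docking scores:
actives = [9.1, 8.7, 7.9, 6.5, 5.2]
decoys = [7.0, 6.1, 5.9, 5.5, 5.0, 4.8, 4.5, 4.0, 3.9, 3.2] * 20

print(f"ROC AUC = {roc_auc(actives, decoys):.2f}")   # 0.90 here
print(f"EF(1%)  = {enrichment_factor(actives, decoys):.0f}")
```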

As you mention, however, the disproportionate growth of hardware and processing power relative to our understanding of the basic physics of drug-protein interaction has led to an illusion of understanding and control. For instance, it's quite true that no amount of simulation time and no number of smart algorithms will help us if the underlying force fields are inaccurate and ill-tested. You can beat every motion out of a protein until the cows come home and still not get accurate binding energies. That being said, we also have to realize that every method's success needs to be judged in a particular context and at a particular scale. An MD simulation of a GPCR might get some conformational details of specific residues wrong but may still help us rationalize large-scale motions that can be compared with experimental parameters. Some of the more unproductive criticism in the field has come from people who had the wrong expectations of a particular method to begin with.
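
A toy example of the force-field point (the parameters here are invented, not taken from any real force field): a modest error in a single Lennard-Jones well depth carries straight through to the computed pair energy, and no amount of sampling will fix it.

```python
# Toy force-field sensitivity check (invented parameters): a 20%
# disagreement in a Lennard-Jones well depth (epsilon) shows up
# directly in the computed pair energy, however long you simulate.

def lennard_jones(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy at separation r (angstroms)."""
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 ** 2 - sr6)

r, sigma = 3.8, 3.4              # a typical nonpolar contact distance
for epsilon in (0.10, 0.12):     # two force fields that disagree by 20%
    e = lennard_jones(r, epsilon, sigma)
    print(f"epsilon = {epsilon:.2f} -> E = {e:.3f} kcal/mol")

# The 20% parameter error maps straight onto a 20% energy error;
# summed over hundreds of protein-ligand contacts, that swamps the
# ~1 kcal/mol accuracy a useful binding prediction would need.
```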

Personally, I am quite optimistic about the progress we have made. Computational drug design has actually followed the classic Gartner hype curve, and it's only in the 2000s that we have reached that cherished plateau of realistic expectations. The hope is that, at the very least, this plateau will have a small but consistent positive slope.


5. luysii on April 4, 2014 9:09 AM writes...

Pictures are incredibly seductive to us all, especially scientific types. Nearly half our cerebral cortex is presumably involved in processing visual information (or at least half of it shows increased blood flow on functional MRI when doing so).

When the first pictures of proteins came out from X-ray crystallography, they were always said to show THE structure of a protein. Conformational variability was essentially forgotten, thanks to the spectacular images.


6. watcher on April 4, 2014 9:09 AM writes...

To me, modeling often fails because it is still poor at proactively predicting and anticipating the movements associated with protein targets, and in particular the events leading up to and following the initial binding event. These dynamics can often be surmised in retrospect, but they are difficult to predict as part of a drug design program.


7. Anonymous on April 4, 2014 10:08 AM writes...

I think it has also hurt pharma companies a lot, in that non-scientist top managers at one point began to think that computers (or robots; the same goes for combichem) could do all the work, a million times more work, and could replace all those weird scientists and their ingenuity, intuition and imagination...
Alas, it didn't turn out that well, but jobs, competencies and opportunities were lost...


8. jtd7 on April 4, 2014 10:34 AM writes...

Ron Kaback used to call 3-D protein modeling "Molecular Pornography." At first you get real excited because it shows you EVERYTHING! But once you've seen a few of them, you realize they all look the same.

(I like to tell that story. I hope it doesn't detract from your thoughtful post.)


9. anon the II on April 4, 2014 10:35 AM writes...

In 1997, Forbes sent a team to our labs to document this new thing called combinatorial chemistry. I was in charge of the technical details of making it happen. I got interviewed by a journalist who seemed to have written the story already. My robot made the front cover, though I never got mentioned. Only the managers got press. When the article appeared (Moukheiber, Zina. Forbes, Jan 26, 1998, Vol. 161, Issue 2, pp. 76-81), I didn't recognize anything in it. It was an almost complete fabrication. The business guys were happy to see us associated with "cutting edge" technology. I was horrified. Maybe that's what happened to the Merck scientists here.


10. CMCguy on April 4, 2014 10:57 AM writes...

As #9 suggests, CADD, and later combichem, came along with or alongside several other "game changing" ways to accelerate drug discovery that were hyped as the future of pharma R&D. I have seen benefits emerge from many of these technologies, but in limited ways, and never close to the promises sold to management (both scientific and business) without clear recognition that there are probably no quick and easy ways to find new medicines or treatments. Many of these new approaches became distractions that are partly responsible for the lack of productivity over the past few decades, since significant resources were directed toward them that might have been focused elsewhere (given the general success rate, I'm not sure that would have helped overall, but I would hope it might have yielded a few good drugs).


11. newnickname on April 4, 2014 3:58 PM writes...

@9 anon the II: Combi Chem in 1997? Somebody was late to that party. But anyway ...

The MBA-hole biotech managers where I worked fell for the Huge, Expensive SGI 3D Workstation Protein Modeling package even though we (the chemists) only wanted a souped-up Mac and a copy of Spartan or CAChe or some similar small-molecule modeling program. The MBAs fell in love with the beautiful proteins and DNAs and blobs and colors and the look of actually discovering something. I think we spent around $250k on the SGI (with CrystalEyes 3D glasses!) and a multi-year software license from (I won't name them; it wasn't their fault). I pointed out that we would never use it (way too complicated), but the MBAs wanted to impress the investors ... who never invested any more money into that sinkhole.

All the chemists used the SGI for was to play with the Flight Simulator.


12. Puhn on April 4, 2014 4:31 PM writes...

If ‘we don't know how to model some of those things well enough’, then we should model the things that can be modeled. It seems the team behind this publication – ‘Discovery of potent, novel Nrf2 inducers via quantum modeling, virtual screening and in vitro experimental validation’ (Chem Biol Drug Des, December 2012) – found the right modeling approach, and the results speak for themselves (as do the not-at-all-snazzy graphics).


13. Rich Lambert on April 4, 2014 7:09 PM writes...

Your observations about computer modeling have wide application to scientific fields beyond chemistry.


14. Anon anon anon on April 4, 2014 11:13 PM writes...

@Wavefunction #4: You said, "We are also now aware of the level of validation that our results need to have for everyone to take them seriously."

What's the minimum level of validation that you would require for everyone to take results from, say, a new docking program seriously? Good performance on a benchmark? One successful prospective validation? One hundred?


15. anon the III on April 5, 2014 2:24 AM writes...

I am not as optimistic as Wavefunction. Too many articles and talks about matched pairs, phys-chem property correlations to complex endpoints, homology models of GPCRs, the next docking method, polypharmacology, QSAR, retrospective stuff - all of them, more than once, missing the point and my personal "level of validation that our results need to have for everyone to take them seriously". Just consider the articles in the last two years about how to apply statistics ...
The most important developments for prospectively applied comp chem in drug design, for me: ~2007, starting to handle the importance of water; ~2004, shape-based searches and the rise of publicly available chemical data; ~2000, circular fingerprints; ~1990, diverse scoring functions for docking; and, at all times, small steps in QM.
Long way to go ...


16. David Borhani on April 6, 2014 9:44 PM writes...

@12: Puhn, you're kidding, right? Those molecules aren't drugs...


17. Gordonjcp on April 7, 2014 3:21 AM writes...

But that's how it works, right? You want to discover a drug so you put it in some sort of scanner and a bunch of you cluster round the screen and say things like "Zoom and enhance that bit" and eventually you see all the DNA strands and that's how you discover what's really wrong with the patient, right?

I mean, I saw it on television, so that must be how it works...


18. anon on April 7, 2014 4:12 AM writes...

zzzzz

Liked the screenshot, Derek, but new topic please, folks... Been over this ground here many times and heard the same old anecdotes (#3) and unlikely extrapolations (#7).


19. a. nonymaus on April 7, 2014 9:32 AM writes...

As usual, computer simulations are like handguns and tequila. They let you make mistakes faster.


20. Anonymous on April 7, 2014 11:14 AM writes...

I'd really like to know what would convince the skeptics in the audience. It's proper to be skeptical of course but, if we're claiming to be scientists, then there must be *some* level of evidence that would sway your opinion. Anyone willing to put a stake in the ground?


21. Puhn on April 9, 2014 2:50 AM writes...

@16 David Borhani - Indeed, those molecules are not drugs. I was just giving an example of a modeling approach that helped the team find novel Nrf2 activators with high potency, low toxicity and blood-brain barrier permeability by experimentally testing only 12 molecules identified by the model - some from structural classes not in the training set. Also, the graphics show in colour the 3D region of the molecule that carries the Nrf2-activating property.


22. anon the II on April 9, 2014 12:44 PM writes...

@ newnickname

Thanks for the snark. We were in it for a while when Forbes came to visit. When I said "new", I meant to Forbes.

