About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly. Twitter: Dereklowe


In the Pipeline


March 24, 2010

Drugs And Their Starting Points


Posted by Derek

I've spoken about fragment-based drug design and ligand efficiency here a few times. There's a new paper in J. Med. Chem. that puts some numbers on that latter concept. (Full disclosure - I've worked with its author, although I had nothing to do with this particular paper).

For the non-chemists in the crowd who want to know what I'm talking about, fragment-based methods are an attempt to start with smaller, weaker-binding chemical structures than we usually work with. But if you look at how much affinity you're getting for the size of the molecules, you find that some of these seemingly weaker compounds are actually doing a great job for their size. Starting from these and building out, with an eye along the way toward keeping that efficiency up, could be a way of making better final compounds than you'd get by starting from something larger.
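For the curious, "ligand efficiency" is usually expressed as binding free energy per heavy atom. A minimal sketch (the Kd values and atom counts here are made up purely for illustration, not taken from the paper):

```python
import math

def ligand_efficiency(kd_molar, heavy_atoms, temp_k=298.15):
    """LE = -RT ln(Kd) / N_heavy, in kcal/mol per heavy atom."""
    R = 0.0019872  # gas constant, kcal/(mol*K)
    delta_g = R * temp_k * math.log(kd_molar)  # binding free energy (negative)
    return -delta_g / heavy_atoms

# A 100 nM fragment with 15 heavy atoms beats a 1 nM lead with 40 of them,
# even though the lead is 100x more potent:
print(round(ligand_efficiency(100e-9, 15), 2))  # 0.64
print(round(ligand_efficiency(1e-9, 40), 2))    # 0.31
```

That's the whole point of the fragment approach in one comparison: the weaker compound is doing far more work per atom.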

Looking over a number of examples where the starting compounds can be compared to the final drugs (not a trivial data set to assemble, by the way), this work finds that drugs, compared to their corresponding leads, tend to have similar to slightly higher binding efficiencies, although there's a lot of variability. They also tend to have similar logP values, which is a finding that doesn't square with some previous analyses (which showed things getting worse during development). But drugs are almost invariably larger than their starting points, so no matter what, one of the keys is not to make the compounds greasier as you add molecular weight. (My "no naphthyls" rule comes from this, actually).

There are a few examples of notably poor ligand-efficient starting structures that have nonetheless been developed into drugs. Interestingly, several of these are the HIV protease inhibitors, with Reyataz (atazanavir) coming in as the least ligand-efficient drug in the whole data set. A look at its structure will suffice. The wildest one on the list appears to be no-longer-marketed amprenavir, whose original lead was 53 micromolar and weighed over 600, nasty numbers indeed. I would not recommend emulating that one. In case you're wondering, the most ligand efficient drug in the set is Chantix (varenicline).

In the cases where ligand efficiency actually went down along the optimization route, inspection of the final structures shows that in many cases, the discovery team was trading efficiency for some other property (PK, solubility, etc.). To me, that's another good argument to make things as efficient as you can, because that gives you something to trade. A big, chunky, lashed-together structure doesn't give you much room to maneuver.

Comments (27) + TrackBacks (0) | Category: Drug Assays | Drug Development


1. MedChem on March 24, 2010 9:58 AM writes...

Another useless paper stating the obvious but of little practical value to the chemists.


2. Anonymous on March 24, 2010 10:02 AM writes...

LE is important, especially when evaluating potential starting points. However, most of med chem is optimizing PK, selectivity, and so on. LE is just a calculated property. You can improve your LE score by lowering your cLogP, even though your measured LogP hasn't changed. So have you improved anything? I've seen some drastic differences between cLogP and measured LogP that make an LE calculation meaningless, IMO.


3. MedChem on March 24, 2010 10:25 AM writes...

My point is: how is LE helping me by telling me what I already know?


4. cynical1 on March 24, 2010 10:26 AM writes...

Amprenavir is still marketed as its prodrug, fosamprenavir.


5. RM on March 24, 2010 12:31 PM writes...

While LE might not have much value for drugs as they are, I think the point of LE is in fragment-based drug discovery.

Whether you believe Lipinski or not, the larger your drug, the more difficulties you have - at the very least a larger drug is going to take more work/resources to make.

In FBDD you start small and build up. Thus it makes sense to try to get the most bang for your buck in terms of size. If a piperidine and a dimethylamino give you the same increase in binding, go with the dimethylamino (the one with the better LE), as it will keep your molecule small as you progressively tack on more functional groups. Even if dimethylamino is only 0.75 times the binder that piperidine is, it may make sense to go with it, because the extra 3 atoms you save may come in handy when you find that an isopropyl in another location improves efficiency even more. LE is a relatively decent rule of thumb to help optimize the activity/weight tradeoff.
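RM's trade-off can be put in rough numbers. In the hypothetical sketch below, the parent Kd, the fold-improvements, and the added-atom counts are all invented for illustration (piperidine adds six heavy atoms, dimethylamino three):

```python
import math

RT = 0.593  # kcal/mol at ~298 K

def delta_g(kd_molar):
    """Binding free energy in kcal/mol (more negative = tighter)."""
    return RT * math.log(kd_molar)

kd_parent = 1e-5  # hypothetical 10 uM parent fragment
# Piperidine: +6 heavy atoms for a 10x affinity boost.
# Dimethylamino: +3 heavy atoms for 0.75x of that boost (7.5x).
gain_per_atom_pip  = (delta_g(kd_parent) - delta_g(kd_parent / 10))  / 6
gain_per_atom_nme2 = (delta_g(kd_parent) - delta_g(kd_parent / 7.5)) / 3

# The smaller group wins on efficiency even though it binds more weakly:
assert gain_per_atom_nme2 > gain_per_atom_pip
```

Per atom added, the dimethylamino delivers nearly twice the free-energy gain under these assumptions, which is exactly the "bang for your buck" argument.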

Also, if you read the ligand efficiency post Derek linked, you can see another use for LE: judging how well a given compound does at maximizing binding. If you have a moiety that does sub-par on the LE scale, it's an indication that there may be another functionality that can replace it to bump the binding even further. There aren't many other measures that can be used to prospectively evaluate a moiety's effect.

The paper does have practical value, at least to chemists doing FBDD*. It points out that while LE does have value in FBDD, it isn't a be-all-end-all, as there are other considerations (such as optimizing PK, solubility, etc.) which aren't captured by looking at just the activity and weight. You may think this is "obvious", but even the most diligent scientist needs reminders every now and again, lest they become too stuck in their ways. Additionally, /We're Scientists/: hard numbers and referenceable papers beat vague speculation and arcane knowledge any day. Also, you wouldn't be calling the paper useless if it had come to the opposite conclusion. (Selective publication is not something we want to encourage - a "negative" result is still a result.)

*Do papers ever have *universal* value? Even the most groundbreaking work on circular dichroism, work that completely changes how CD is done, is a big yawn-fest to people who only do achiral or racemic work.


6. HappyDog on March 24, 2010 12:37 PM writes...

Yes, it does seem rather obvious. However, I've worked with a number of chemists who thought so, but when I calculated the LE for the compounds vs. time on a project (for several projects), chemists were often surprised to see that their compounds were getting systematically less efficient over time. It's one thing to understand something qualitatively, and another to quantitate it.


7. MedChem on March 24, 2010 12:47 PM writes...


True. But my point is that knowing LE wouldn't have made a darn difference for those chemists. I'm sure they ALREADY knew to make smaller and less hydrophobic compounds. It's just that they couldn't; as we all know, this is easier said than done. It's not like they go, "Darn, I wish I knew this LE stuff earlier so I could've paid more attention to lowering cLogP."


8. HappyDog on March 24, 2010 3:03 PM writes...


"Darn, I wish I knew this LE stuff earlier so I could've paid more attention to lowering cLog P."

Actually, that's exactly what they said! It became apparent that the compounds were getting more potent because they were getting greasier. (Though one would think that adding six or seven fluorine atoms might have that effect.) When the project was stuck after several months of lead op, an analysis of LE over the lifetime of the project prompted a re-evaluation of some of the earlier compounds that had weaker affinities but much better cLogP values. The current plan was abandoned and a new direction was taken using some of those earlier compounds as a starting point.

The point is that there is a natural tendency to chase potency. A careful evaluation of LE early in a lead op or HtL program might reveal that some of the compounds which are undesirable at first glance due to their binding affinities might actually have the best LEs. Since the tendency is for LE to go down during the lifetime of a project while physical properties like logP and MW go up, it suggests that the compounds with the best LE, rather than the most potent, should be used as the starting point. I'd say that's especially critical if your shop starts with HTS hits, which tend to be large, hydrophobic, and highly functionalized to start with. Adding more weight and hydrophobicity might improve potency, but it is likely to reduce the chances that the hit is going to become a drug.


9. alig on March 24, 2010 3:12 PM writes...

Re: HappyDog

"Since the tendency is for LE to go down during the lifetime of a project while physical properties like logP and MW go up"

Did you read the paper? Their conclusion was that LE actually increased going from lead to drug, and cLogP stayed the same while MW went up. This led to a big increase in LLE going from lead to drug.

One of the best things about this paper was showing the lead/drug pairs for 60 approved drugs. That information would be painful to go and find.
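For readers unfamiliar with the abbreviation, LLE (lipophilic ligand efficiency) is usually taken as pIC50 (or pKi) minus cLogP. The numbers below are illustrative only, not from the paper:

```python
def lle(pic50, clogp):
    """Lipophilic ligand efficiency: potency not paid for with grease."""
    return pic50 - clogp

# If potency climbs three log units while cLogP holds steady,
# all of that gain shows up in LLE:
lead_lle = lle(6.0, 3.0)  # hypothetical 1 uM lead
drug_lle = lle(9.0, 3.0)  # hypothetical 1 nM drug
assert drug_lle - lead_lle == 3.0
```

In other words, "MW up, cLogP flat, potency up" is precisely the combination that produces a big lead-to-drug LLE increase.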


10. HappyDog on March 24, 2010 3:16 PM writes...

The topic of LE also reminds me of something else. If you take your initial hits from, say HTS, it's a good metric for a reductionist approach to drug discovery. You can take the screening hit and start breaking it down to find the pieces most essential for activity. In the past, groups I've worked with have been reluctant to do this because the potency often starts falling off precipitously. I've always argued that's perfectly fine - as long as the LE increases substantially. The goal, IMHO, should be to find the smallest piece(s) of the initial hit that has (have) the greatest LE. That's actually your hit. If you can't increase LE significantly while cutting back on the size of the molecule, I'd say the HTS hit probably isn't useful and certainly isn't an attractive place to start a hit to lead effort on.
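To put rough numbers on that deconstruction argument (the Kd values and atom counts here are invented for illustration), the usual LE formula of -ΔG per heavy atom makes the point:

```python
import math

RT = 0.593  # kcal/mol at ~298 K

def le(kd_molar, heavy_atoms):
    """Ligand efficiency: -dG per heavy atom, in kcal/mol/atom."""
    return -RT * math.log(kd_molar) / heavy_atoms

# Hypothetical HTS hit: 1 uM, but 40 heavy atoms.
# A fragment cut out of it: 100x weaker, but under a third the size.
hit_le      = le(1e-6, 40)   # ~0.20
fragment_le = le(1e-4, 12)   # ~0.46

# Potency "falls off precipitously", yet the fragment is the better start:
assert fragment_le > hit_le
```

The 100-fold loss in affinity looks alarming on a potency table, but on a per-atom basis the stripped-down piece is more than twice as efficient.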


11. HappyDog on March 24, 2010 3:31 PM writes...


Yes, I read the paper. The operative point is that they compared lead/drug pairs. That implies that the med. chemists did a good job of keeping an eye on the properties as the program advanced.

Perhaps I should have been more specific. My point is that most lead op programs I've seen don't seem to follow this trend. The compounds keep getting bigger and more hydrophobic until you hit a wall with potency, or you get candidates that have poor physical properties and little chance of being a drug. Unfortunately, most of our drug discovery efforts in the industry ultimately fail. From what I've seen, the successful ones keep a handle on physical properties, and the LEs decrease or stay the same. Unfortunately, these successful cases appear to be the exceptions to a general trend. I can't count the number of times I've worked on a project where the compounds got more potent, but at the price of higher MW and logP. In my mind, that's the whole point of this paper: too many programs aren't keeping an eye on physical properties (or LEs), which suggests that's one of the reasons for the high failure rate.


12. weirdo on March 24, 2010 6:21 PM writes...

HappyDog (#10) -- This is MedChem 101 so I must count myself surprised to hear of ANY project team doing what you've described. I'm 16 years in at this point, and on no project I have ever worked have we taken this approach. None.

The biologists hate it at first, but we don't work for them.


13. Anonymous on March 24, 2010 6:48 PM writes...

If I knew a way to discover drugs quickly, I wouldn't tell anyone else. People like to hand out rules for drug discovery; really, they just want other people to do the proving.


14. HappyDog on March 25, 2010 7:34 AM writes...


I agree that it's med chem 101. I also agree that most of the posters here seem to know what they're talking about. I'll further concede that most med chemists I've worked with have a working knowledge of LE and physical properties. I've worked for several companies over the years, and one of the things I've observed is med chemists pontificating about the physical properties of compounds under consideration for synthesis, and often criticizing other chemists' or modelers' suggestions on this basis . . . and then they turn around and make compounds that have much worse physical properties than the ones they were so critical of. When asked, there always seems to be an excuse. The list I've collected over the years goes something like this: "it's just an SAR compound", "it's a proof-of-concept compound to make sure that we can get potency", "management isn't happy with our progress and we need better (more potent) compounds", "we can always tweak MW/logP/PSA later", "there are examples of marketed drugs with this high a MW/logP/PSA or this many rotatable bonds/H-bond acceptors & donors".

All too often, these ugly proof-of-concept compounds end up becoming the next lead that the team chases. I've even worked on one program where I pointed this out to the project leader, and he confessed that the project was ultimately going to fail, but he was afraid to tell his management because it would make him look bad. The end result was a program that limped along for another year before the plug was pulled. Plenty of very potent compounds were made. Unfortunately, they were all crap, and everyone on the project at the time knew it. So, clearly these med chemists were aware of these basic principles. They just weren't using them.


15. Will on March 25, 2010 8:37 AM writes...

Re - amprenavir, what was the "lead compound" that Derek is talking about?

And is a MW of 600 so nasty? Amprenavir itself, which was approved, has a MW of ~500, and fosamprenavir, which is still marketed, has a MW of 623. My review of protease inhibitors (thanks, wiki!) shows that all the approved compounds are at least ~500, and many are over 600.


16. MedChem on March 25, 2010 9:48 AM writes...

--"If you can't increase LE significantly while cutting back on the size of the molecule, I'd say the HTS hit probably isn't useful and certainly isn't an attractive place to start a hit to lead effort on."

--"So, clearly these med chemists were aware of these basic principles. They just weren't using them."


I can tell you're a computational scientist :) by comments such as the ones above. With all due respect, your thinking reveals a common flaw I find with computational chemists: they're too academic and naive.

Let me tell you the real reason for what you experienced with your medicinal chemists: We the chemists DO worry about both physical properties and potency. The fact of the matter is that oftentimes we simply CANNOT get the best of both worlds, period. It's not that we don't want to or don't try; we just CAN'T, because it's so difficult!

This is also why it's so frustrating to me that the non-chemist "experts" keep throwing these common sense rules at us while screaming "why don't you follow these???!!!"


17. Happy Dog on March 25, 2010 10:32 AM writes...


OUCH! Academic and naive? That really hurts.

If I may respond in kind, a common flaw I find in med chemists is an obsession with potency.

The point I was trying to make is that if you see during the course of your program your compounds get less and less druglike without improving efficiency, you might have a problem.

I've worked in pharma with very good HTS shops and very bad HTS shops (not saying which). I've seen the good ones deconstruct quality hits during HtL to get a smaller, less potent, but efficient compound as a lead. Those programs progressed to lead declaration very rapidly. Those that couldn't (or wouldn't) ended up wasting time without developing a progressable compound.

Just for your information, this wasn't an academic exercise - I learned this strategy from experienced med chemists who actually ran the project teams.


18. MedChem on March 25, 2010 2:25 PM writes...

Happy dog,

I apologize. I didn't mean to insult you. You're unfortunately correct that too many medicinal chemists are obsessed with potency. Like someone else pointed out earlier, I too was surprised that the chemists you worked with didn't seem to know this med chem 101.

But what happens more often is that the chemists do try to have the best of both worlds but just can't.


19. Morten G on March 26, 2010 5:01 AM writes...

@Happy Dog:

If you are going to deconstruct your lead from HTS into fragments anyway then why don't you start with a fragment screen?


20. HappyDog on March 26, 2010 7:43 AM writes...

Morten G

That would actually be my preference, but in the real world you're not always given a choice. You have to work on the projects you're assigned.

As an example, if you're given an HTS hit with micromolar affinity, MW 650+, logP 5.0+, and >10 heteroatoms and you're ordered to produce a lead compound, what exactly should the path forward be?


21. It's a Puzzler on March 27, 2010 7:51 AM writes...

Mmmm, as a Medicinal Chemist, it's interesting/annoying/embarrassing that the "Small Molecule Attrition" debate seems to be being led intellectually by Computational Chemists...


22. It's a Puzzler on March 27, 2010 7:58 AM writes...

...and that means arguments based on calculated properties. Which is exactly the kind of format that senior managers love: "two legs bad, four legs good", anyone?


23. Common Sense on March 27, 2010 3:00 PM writes...

This type of discussion quickly becomes tedious. I've never seen a new "in vogue" approach succeed at its advertised level, or even come anywhere near what the "believers" want you to think.

Fragment screening may work for some - a few forgiving systems - but not for everything. Lots of reasons for this, including dynamics of ligand-scaffold complexes upon binding that can't be replicated with compound parts, or fragments. Computational chemists simply can't predict these outcomes de novo, but they highly advertise their concept of success when they have a model built from the answer as provided by some type of experimental method.


24. Common Sense on March 27, 2010 3:03 PM writes...

Happy Dog:

Make a lot of analogues and hope you get lucky (which is still a big contributor to making new drugs, no matter how hard others may want to say otherwise).


25. Common Sense on March 27, 2010 3:13 PM writes...

Happy Dog:

Physical or computational chemical rules are guides, but only that. There are many exceptions to every such generalized rule set that has been devised. Sometimes the best and only approach is for the synthetic chemists to roll up their sleeves, make a wide range of compounds covering a broad structural space, and hope to get lucky.

It can be hard to admit, but often gets you there faster.


26. HappyDog on March 29, 2010 7:49 AM writes...

Good comments from all. Hopefully this will be my last post on the subject. I'm not trying to come across as an "SBDD is the answer to all things" type. It's just one tool that's useful for particular targets under certain circumstances. The good comp chemists know what those are. I had the misfortune to work early in my career with CNS targets where the primary assay had nothing to do with the animal model (which had nothing to do with the disease), and saw first hand how limited structure-based approaches are. Simply put, if an approach I'm experienced in might work for a given target, I'll use it. If not, I won't, and I'm not afraid to tell my management it's a waste of my time. I also appreciate the role of serendipity in drug discovery, but animal studies are expensive and politically sensitive - you can't make thousands of compounds at random anymore and test all of them in animals.

MedChem - I agree that too many med chemists now don't seem to understand 'Med Chem 101'. The problem, IMHO, is that chemists have been treated like assembly line workers in a non-union shop rather than proper scientists for too long. Project managers don't want chemists who think rationally about the problem at hand. Rather, for years they've wanted people just to crank out lots of compounds, regardless of whether or not they make sense.




