September 4, 2013
More Thoughts on Compound Metrics
Over at Practical Fragments, Dan Erlanson has comments on the Michael Shultz paper that I wrote about here. He goes into details on some of the problems that turn up when you try to apply various compound metrics across a broad range of molecular weights and/or lipophilicities. In the most obvious example, the indices that are based on Heavy Atom Count (HAC) will jump around much more in the low-molecular-weight range, and none of the proposed refinements can quite fix this. And with the alternative LELP measure, you have to watch out when you're at very low LogP values.
Shultz's preferred LipE/LLE metric avoids that problem, and it's size-independent as well. That part can be either a bug or a feature, depending on your perspective. For the most part, I think that's useful, but in the early stages of fragment optimization, a size-independent measurement is not what you want. The whole point at that stage is to pick the starting points with the most binding for their size, and a well-designed fragment library shouldn't have too many big problems with lipophilicity (those will come along later). So I take Shultz's point about the validity of LLE in general, but I think that I'll be using it along with either LE or BEI (the HAC-driven and molecular-weight-driven measures of binding efficiency, respectively) when I'm working in the fragment end of things. How to weight those will be a judgment call, naturally, but judgment calls are, in theory, what we're being paid for, right?
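For readers who haven't worked with these indices, here's a minimal sketch of the standard literature definitions of the metrics discussed above. The example values are purely illustrative (a made-up fragment hit), not taken from the Shultz paper:

```python
def lipe(pic50, clogp):
    """Lipophilic efficiency (LipE, also called LLE): pIC50 - cLogP.
    Size-independent, which is the property discussed above."""
    return pic50 - clogp

def le(pic50, hac):
    """Ligand efficiency, roughly kcal/mol per heavy atom:
    1.37 * pIC50 / heavy atom count."""
    return 1.37 * pic50 / hac

def bei(pic50, mw):
    """Binding efficiency index: pIC50 per kilodalton of molecular weight."""
    return pic50 / (mw / 1000.0)

def lelp(clogp, pic50, hac):
    """LELP: cLogP / LE -- note it misbehaves when cLogP is
    near zero or negative, as mentioned above."""
    return clogp / le(pic50, hac)

# A hypothetical fragment hit: weak absolute potency, but efficient for its size.
fragment = dict(pic50=4.0, clogp=1.0, hac=12, mw=160.0)
print(round(le(fragment["pic50"], fragment["hac"]), 2))     # high LE despite low potency
print(lipe(fragment["pic50"], fragment["clogp"]))           # modest LipE at this stage
print(bei(fragment["pic50"], fragment["mw"]))               # MW-based analogue of LE
```

Note how the size-normalized metrics (LE, BEI) reward this small, weakly potent compound, while the size-independent LipE treats it the same as a large molecule with identical potency and lipophilicity. That's the trade-off discussed above.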
Category: Drug Assays | In Silico