About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases. To contact Derek, email him directly, or find him on Twitter: Dereklowe

In the Pipeline

Category Archives

August 21, 2014

Why Not Bromine?

Posted by Derek

So here's a question for the medicinal chemists: how come we don't like bromoaromatics so much? I know I don't, but I have trouble putting my finger on just why. I know that there's a ligand efficiency argument to be made against them - all that weight, for one atom - but there are times when a bromine seems to be just the thing. There certainly are such structures in marketed drugs. Some of the bad feelings around them might linger from the sense that bromine is a somehow unnatural element, as opposed to chlorine, which in the form of chloride is everywhere in living systems.
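The "all that weight, for one atom" argument can be made concrete with efficiency metrics. Heavy-atom-based ligand efficiency counts Cl and Br identically (each is one atom), so the bromine penalty only shows up in weight-normalized metrics. A quick sketch with made-up numbers (the pIC50, atom count, and molecular weights below are purely illustrative, not any real compound):

```python
# Toy comparison of efficiency metrics for a hypothetical matched pair:
# the same scaffold bearing either Cl or Br, with identical potency.
# All numbers are illustrative.

def ligand_efficiency(pic50, heavy_atoms):
    """LE in kcal/mol per heavy atom: 1.37 * pIC50 / HA (at ~300 K)."""
    return 1.37 * pic50 / heavy_atoms

def binding_efficiency_index(pic50, mw):
    """BEI: pIC50 per kilodalton of molecular weight."""
    return pic50 * 1000 / mw

pic50 = 8.0           # identical potency for both analogs (10 nM)
heavy_atoms = 25      # a halogen counts as one heavy atom either way
mw_cl = 380.0         # hypothetical MW of the chloro analog
mw_br = mw_cl + 44.4  # Br is ~44.4 Da heavier than Cl

le_cl = ligand_efficiency(pic50, heavy_atoms)
le_br = ligand_efficiency(pic50, heavy_atoms)
bei_cl = binding_efficiency_index(pic50, mw_cl)
bei_br = binding_efficiency_index(pic50, mw_br)

print(f"LE  (Cl vs Br): {le_cl:.3f} vs {le_br:.3f}")    # identical by construction
print(f"BEI (Cl vs Br): {bei_cl:.2f} vs {bei_br:.2f}")  # Br pays a pure weight penalty
```

So by the heavy-atom metric the two analogs are indistinguishable, and the case against bromine has to rest on molecular weight (or on those harder-to-articulate feelings).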

But bromide? Well, for what it's worth, there's a report that bromine may in fact be an essential element after all. That's not enough to win any arguments about putting it into your molecules - selenium's essential, too, and you don't see people cranking out the organoselenides. But here's a thought experiment: suppose you have two drug candidate structures, one with a chlorine on an aryl ring and the other with a bromine on the same position. If they have basically identical PK, selectivity, preliminary tox, and so on, which one do you choose to go on with? And why?

If you chose the chloro derivative (and I think that most medicinal chemists instinctively would, for just the same hard-to-articulate reasons we're talking about), then what split in favor of the bromo compound would be enough to make you favor it? How much more activity, PK coverage, etc. do you need to make you willing to take a chance on it instead?

Comments (36) + TrackBacks (0) | Category: Drug Development | Odd Elements in Drugs | Pharmacokinetics | Toxicology

August 19, 2014

Don't Optimize Your Plasma Protein Binding

Posted by Derek

Here's a very good review article in J. Med. Chem. on the topic of protein binding. For those outside the field, that's the phenomenon of drug compounds getting into the bloodstream and then sticking to one or more blood proteins. Human serum albumin (HSA) is a big player here - it's a very abundant blood protein that's practically honeycombed with binding sites - but there are several others. The authors (from Genentech) take on the disagreements about whether low plasma protein binding is a good property for drug development (and conversely, whether high protein binding is a warning flag). The short answer, according to the paper: neither one.

To further examine the trend of PPB for recently approved drugs, we compiled the available PPB data for drugs approved by the U.S. FDA from 2003 to 2013. Although the distribution pattern of PPB is similar to those of the previously marketed drugs, the recently approved drugs generally show even higher PPB than the previously marketed drugs (Figure 1). The PPB of 45% newly approved drugs is >95%, and the PPB of 24% is >99%. These data demonstrate that compounds with PPB > 99% can still be valuable drugs. Retrospectively, if we had posed an arbitrary cutoff value for the PPB in the drug discovery stage, we could have missed many valuable medicines in the past decade. We suggest that PPB is neither a good nor a bad property for a drug and should not be optimized in drug design.

That topic has come up around here a few times, as could be expected - it's a standard med-chem argument. And this isn't even the first time that a paper has come out warning people that trying to optimize on "free fraction" is a bad idea: see this 2010 one from Nature Reviews Drug Discovery.
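One way to see why the raw PPB number can mislead: the difference between 95% and 99.9% bound looks small on that scale, but the free fraction, which is what actually drives pharmacology, changes fifty-fold. A back-of-the-envelope sketch (the concentrations are illustrative):

```python
# Illustrative free-fraction arithmetic: why a "high" PPB percentage
# alone says little. The unbound concentration is what matters, and
# small-looking changes in % bound move it a lot.

def free_concentration(total_conc_uM, ppb_percent):
    """Unbound concentration given total plasma concentration and % bound."""
    fu = 1.0 - ppb_percent / 100.0   # fraction unbound
    return total_conc_uM * fu

total = 10.0  # uM total plasma concentration, same for each hypothetical compound
for ppb in (95.0, 99.0, 99.9):
    print(f"PPB {ppb:5.1f}%  ->  free = {free_concentration(total, ppb):.3f} uM")
```

Of course, at steady state the free concentration is (roughly speaking) set by the dose rate and the clearance of free drug, not by the binding number in isolation - which is exactly the argument these reviews are making against optimizing it.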

But it's clearly worth repeating - there are a lot of people who get quite worked up about this number - in some cases because they have funny-looking PK and are trying to explain it, or in others just because it's a number, and numbers are good, right?

Comments (14) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics

July 14, 2014

Modifying Red Blood Cells As Carriers

Posted by Derek

What's the best carrier to take some sort of therapeutic agent into the bloodstream? That's often a tricky question to work out in animal models or in the clinic - there are a lot of possibilities. But what about using red blood cells themselves?

That idea has been in the works for a few years now, but there's a recent paper in PNAS reporting on more progress (here's a press release). Many drug discovery scientists will have encountered the occasional compound that partitions into erythrocytes all by itself (those are usually spotted by their oddly long half-lives after in vivo dosing, mimicking the effect of plasma protein binding). One early way that people tried to do this deliberately was to force a compound into the cells, but this tends to damage them and make them quite a bit less useful. A potentially more controllable method would be to modify the surfaces of the RBCs themselves to serve as drug carriers, but that's quite a bit more complex, too. Antibodies have been tried for this, but with mixed success.

That's what this latest paper addresses. The authors (the Lodish and Ploegh groups at Whitehead/MIT) introduce modified surface proteins (such as glycophorin A) that are substrates for Ploegh's sortase technology (two recent overview papers), which allows for a wide variety of labeling.

Experiments using modified fetal cells in irradiated mice gave animals that had up to 50% of their RBCs modified in this way. Sortase modification of these was about 85% effective, so plenty of label can be introduced. The labeling process doesn't appear to affect the viability of the cells very much as compared to wild-type - the cells were shown to circulate for weeks, which certainly breaks the records held by the other modified-RBC methods.

The team attached either biotin tags or specific antibodies to both mouse and human RBCs, which would appear to clear the way for a variety of very interesting experiments. (They also showed that simultaneous C- and N-terminal labeling is feasible, to put on two different tags at once.) Here's the "coming attractions" section of the paper:

The approach presented here has many other possible applications; the wide variety of possible payloads, ranging from proteins and peptides to synthetic compounds and fluorescent probes, may serve as a guide. We have conjugated a single-domain antibody to the RBC surface with full retention of binding specificity, thus enabling the modified RBCs to be targeted to a specific cell type. We envision that sortase-engineered cells could be combined with established protocols of small-molecule encapsulation. In this scenario, engineered RBCs loaded with a therapeutic agent in the cytosol and modified on the surface with a cell type-specific recognition module could be used to deliver payloads to a precise tissue or location in the body. We also have demonstrated the attachment of two different functional probes to the surface of RBCs, exploiting the subtly different recognition specificities of two distinct sortases. Therefore it should be possible to attach both a therapeutic moiety and a targeting module to the RBC surface and thus direct the engineered RBCs to tumors or other diseased cells. Conjugation of an imaging probe (i.e., a radioisotope), together with such a targeting moiety also could be used for diagnostic purposes.

This will be worth keeping an eye on, for sure, both as a new delivery method for small (and not-so-small) molecules and for biologics, and for its application to all the immunological work going on now in oncology. This should keep everyone involved busy for some time to come!

Comments (7) + TrackBacks (0) | Category: Biological News | Chemical Biology | Pharmacokinetics

June 2, 2014

Single-Cell Compound Measurements - Now In A Real Animal

Posted by Derek

Last year I mentioned an interesting paper that managed to do single-cell pharmacokinetics on olaparib, a poly(ADP-ribose) polymerase 1 (PARP1) inhibitor. A fluorescently tagged version of the drug could be spotted moving into cells and even accumulating in the nucleus. The usual warnings apply: adding a fluorescent tag can disturb the very molecular properties that you're trying to study in the first place. But the paper did a good set of control experiments to try to get around that problem, and this is still the only way known (for now) to get such data.

The authors are back with a follow-up paper that provides even more detail. They're using fluorescence polarization/fluorescence anisotropy microscopy. That can be a tricky technique, but done right, it provides a lot of information. The idea (as the assay-development people in the audience well know) is that when fluorescent molecules are excited by polarized light, their emission depends on how fast they're tumbling. If rotation slows until the rotational correlation time exceeds the fluorescence lifetime (as happens when the molecules are bound to a protein), you see more polarization in the emitted light; if the molecules are tumbling around freely, that polarization is mostly lost. There are numerous complications - you need to standardize each new system according to how much things change in increasingly viscous solutions, the fluorophores can't get too close together, and you have to be careful with the field of view in your imaging system to avoid artifacts - but that's the short form.
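That short form can be put in equation form: the steady-state anisotropy follows the Perrin relation, falling as the fluorophore tumbles faster relative to its fluorescence lifetime. A sketch of the arithmetic, with generic illustrative values (the r0, lifetime, and correlation times below are assumptions, not numbers from the paper):

```python
# Sketch of the Perrin relation behind a fluorescence anisotropy readout:
# observed anisotropy r falls as the fluorophore tumbles faster relative
# to its fluorescence lifetime. Parameter values are illustrative.

def steady_state_anisotropy(r0, tau_f_ns, theta_ns):
    """Perrin equation: r = r0 / (1 + tau_f / theta).
    r0       -- limiting (frozen) anisotropy
    tau_f_ns -- fluorescence lifetime (ns)
    theta_ns -- rotational correlation time (ns)"""
    return r0 / (1.0 + tau_f_ns / theta_ns)

r0 = 0.4      # a typical limiting anisotropy for one-photon excitation
tau_f = 4.0   # ns, a plausible dye lifetime

free_probe = steady_state_anisotropy(r0, tau_f, theta_ns=0.3)    # small molecule, fast tumbling
bound_probe = steady_state_anisotropy(r0, tau_f, theta_ns=50.0)  # protein complex, slow tumbling

print(f"free:  r = {free_probe:.3f}")   # low: emission largely depolarized
print(f"bound: r = {bound_probe:.3f}")  # high: polarization mostly retained
```

That bound-versus-free contrast is the whole signal in these experiments: target engagement shows up as a jump in anisotropy.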

In this case, they're using near-IR light to do the excitation, because those wavelengths are well known to penetrate living tissue well. Their system also needs two photons to excite each molecule, which improves signal-to-noise, and the two-photon dye is a BODIPY compound. These have been used in fluorescence studies with wild abandon for the past few years - at one point, I was beginning to think that the acronym was a requirement to get a paper published in Chem. Comm. They have a lot of qualities (cell penetration, fluorescence lifetime, etc.) that make them excellent candidates for this kind of work.

This is the same olaparib/BODIPY hybrid used in the paper last year, and you can see the results. The green fluorescence is nonspecific binding, while the red is localized to the nuclei and doesn't wash out. If you soak the cells with unlabeled olaparib beforehand, though, you don't see this effect at all, which also argues for the PARP1-bound interpretation of these results. This paper takes things even further: after validating this in cultured cells, they moved on to live mice, using an implanted window chamber over a xenograft.

And they saw the same pattern: quick cellular uptake of the labeled drug on infusion into the mice, followed by rapid binding to nuclear PARP1. The intracellular fluorescence then cleared out over a half-hour period, but the nuclear-bound compound remained, and could be observed with good signal/noise. This is the first time I've seen an experiment like this. Although it's admittedly a special case (which takes advantage of a well-behaved fluorescently labeled drug conjugate, to name one big hurdle), it's a well-realized proof of concept. Anything that increases the chances of understanding what's going on with small molecules in real living systems is worth paying attention to. It's interesting to note, by the way, that the olaparib/PARP1 system was also studied in that recent whole-cell thermal shift assay technique, which does not need modified compounds. Bring on the comparisons! These two techniques can be used to validate each other, and we'll all be better off.

Comments (4) + TrackBacks (0) | Category: Biological News | Chemical Biology | Pharmacokinetics

March 25, 2014

A New Way to Study Hepatotoxicity

Posted by Derek

Every medicinal chemist fears and respects the liver. That's where our drugs go to die, or at least to be severely tested by that organ's array of powerful metabolizing enzymes. Getting a read on a drug candidate's hepatic stability is a crucial part of drug development, but there's an even bigger prize out there: predicting outright liver toxicity. That, when it happens, is very bad news indeed, and can torpedo a clinical compound that seemed to be doing just fine - up until then.

Unfortunately, getting a handle on liver tox has been difficult, even with such strong motivation. It's a tough problem. And given that most drugs are not hepatotoxic, most of the time, any new assay that overpredicts liver tox might be even worse than no assay at all. There's a paper in the latest Nature Biotechnology, though, that looks promising.

What the authors (from Stanford and Toronto) are doing is trying to step back to the early mechanism of liver damage. One hypothesis has been that the production of reactive oxygen species (ROS) inside hepatic cells is the initial signal of trouble. ROS are known to damage biomolecules, of course. But more subtly, they're also known to be involved in a number of pathways used to sense that cellular damage (and in that capacity, seem to be key players in inducing the beneficial effects of exercise, among other things). Aerobic cells have had to deal with the downsides of oxygen for so long that they've learned to make the most of it.

This work (building on some previous studies from the same group) uses polymeric nanoparticles. They're semiconductors, hooked up to be part of a fluorescence or chemiluminescence readout. (They use FRET for peroxynitrite and hypochlorite detection, more indicative of mitochondrial toxicity, and CRET for hydrogen peroxide, more indicative of Phase I metabolic toxicity.) The particles are galactosylated to send them toward the liver cells in vivo, confirmed by necropsy and by confocal imaging.

The assay system seemed to work well by itself and in mouse serum, so they dosed it into mice and looked for what happened when the animals were given toxic doses of either acetaminophen or isoniazid (both well-known hepatotoxic compounds at high levels). And it seems to work pretty well - they could image both the fluorescence and the chemiluminescence across a time course, and the dose-responses make sense. It looks like they're picking up nanomolar to micromolar levels of reactive species. They could also show the expected rescue of the acetaminophen toxicity with some known agents (like GSH), but could also see differences between them, both in the magnitude of the effects and in their time courses.
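The FRET channel in probes like these works because Förster transfer is steeply distance-dependent: E = 1/(1 + (r/R0)^6), where R0 is the distance at which transfer is 50% efficient. A minimal sketch (the R0 and distances below are generic illustrative values, not the actual parameters of these nanoparticles):

```python
# The steep distance dependence behind any FRET-based readout:
# transfer efficiency E = 1 / (1 + (r / R0)^6).
# R0 here is a generic value typical of common dye pairs.

def fret_efficiency(r_nm, r0_nm):
    """Foerster transfer efficiency for donor-acceptor separation r."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

r0 = 5.0  # nm, illustrative Foerster radius
for r in (2.5, 5.0, 7.5, 10.0):
    print(f"r = {r:4.1f} nm  ->  E = {fret_efficiency(r, r0):.3f}")
```

Halving or doubling the separation takes the efficiency from near-complete to near-zero, which is why a reactive-species-triggered chemical change on the probe gives such a clean on/off signal.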

The chemiluminescent detection has been done before, as has the FRET one, but this one seems to be more convenient to dose, and having both ROS detection systems going at once is nice, too. One hopes that this sort of thing really can provide a way to get a solid in vivo read on hepatotoxicity, because we sure need one. Toxicologists tend to be a conservative bunch, with good reason, so don't look for this to revolutionize the field by the end of the year or anything. But there's a lot of promise here.

There are some things to look out for, though. For one, since these experiments are necessarily being done in rodents, there will be differences in metabolism that will have to be taken into account, and some of those can be rather large. Not everything that injures a mouse liver will do so in humans, and vice versa. It's also worth remembering that hepatotoxicity is a major problem with marketed drugs, too. That's going to be a much tougher problem to deal with, because some of these cases are due to overdose, some to drug-drug interactions, some to drug-alcohol interactions, and some to factors that no one's been able to pin down. One hopes, though, that if more drugs come through with clean liver profiles, these problems might ameliorate a bit.

Comments (13) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics | Toxicology

March 21, 2014

Dosing by Body Surface Area

Posted by Derek

We were talking about allometry around here the other day, which prompts me to mention this paper. It used the reports of resveratrol dosing in animals, crudely extrapolated to humans, to argue that the body surface area normalization (BSA) method was a superior technique for dose estimation across species.

Over the years, though, the BSA method has taken some flak in the literature. It's most widely used in oncology, especially with cytotoxics, but there have been calls to move away from the practice, calling it a relic with little scientific foundation. (The rise of a very obese patient population has also led to controversy about whether body weight or surface area is a more appropriate dose-estimation method in those situations). At the same time, it's proven useful in some other situations, so it can't be completely ignored.
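For readers who haven't run the numbers: the standard BSA-based species conversion (the approach in the FDA's starting-dose guidance) divides weight-based doses by species "Km" factors (body weight over surface area), so HED = animal dose × (Km_animal / Km_human). A sketch of the arithmetic, using the commonly cited Km figures:

```python
# BSA-based human equivalent dose (HED) arithmetic.
# Km factors (body weight / surface area) are the commonly cited
# FDA guidance values; the example dose is illustrative.

KM = {"mouse": 3, "rat": 6, "rabbit": 12, "dog": 20, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg, species):
    """Convert an animal mg/kg dose to a human mg/kg dose via BSA scaling."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

# A 20 mg/kg mouse dose scales to well under a tenth of that in humans:
print(f"{human_equivalent_dose(20.0, 'mouse'):.2f} mg/kg")  # -> 1.62 mg/kg
```

The twelve-fold mouse-to-human shrinkage is exactly the kind of extrapolation the FASEB paper leaned on, and the dispute is over whether that scaling belongs anywhere beyond first-in-human starting-dose estimates.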

But it seems that the FASEB paper referenced in the first paragraph, which has been cited hundreds of times since 2008, may be overstating its conclusions. For example, it says that "BSA normalization of doses must be used to determine safe starting doses of new drugs because initial studies conducted in humans, by definition, lack formal allometric comparison of the pharmacokinetics of absorption, distribution, and elimination parameters", and cites its reference 13 for support. But when you go to that reference, you find that paper's authors concluding with things like this:

The customary use of BSA in dose calculations may contribute to the omission of these factors, give a false sense of accuracy and introduce error. It is questionable whether all current cancer treatment strategies are near optimal, or even ethical. BSA should be used for allometric scaling purposes in phase I clinical trials, as the scaling of toxicity data from animals is important for selecting starting doses in man, but the gradual discontinuation of BSA-based dosing of cytotoxic drugs in clinical practice is seemingly justified.

Citing a paper for support that flatly disagrees with your conclusions gets some points for bravado, but otherwise seems a bit odd. And there are others - that reference that I linked to in the second paragraph above, under "taken some flak", is cited in the FASEB paper as its reference 17, as something to do with choosing between various BSA equations. And it does address that, to be sure, but in the context of wondering whether the whole BSA technique has any clinical validity at all.

This is currently being argued out over at PubPeer, and it should be interesting to see what comes of it. I'll be glad to hear from pharmacokinetics and clinical research folks to see what they make of the whole situation.

Comments (17) + TrackBacks (0) | Category: Pharmacokinetics | The Scientific Literature

January 22, 2014

A New Book on Scaffold Hopping

Posted by Derek

I've been sent a copy of Scaffold Hopping in Medicinal Chemistry, a new volume from Wiley, edited by Nathan Brown of the Institute of Cancer Research in London. There are eighteen chapters - five on identifying and characterizing scaffolds to start with, ten on various computational approaches to scaffold-hopping, and three case histories.

One of the things you realize quickly when you start thinking about (or reading about) this topic is that scaffolds are in the eye of the beholder, and that's what those first chapters are trying to come to grips with. Figuring out the "maximum common substructure" of a large group of analogs, for example, is not an easy problem at all, certainly not by eyeballing, and not through computational means, either (it's not solvable in polynomial time, if we want to get formal about it). One chemist will look at a pile of compounds and say "Oh yeah, the isoxazoles from Project XYZ", while someone who hasn't seen them before might say "Hmm, a bunch of amide heterocycles" or "A bunch of heterobiaryls" or what have you.
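To see why maximum common substructure is so unpleasant computationally, here's a deliberately naive brute-force version on toy atom-labeled graphs. Real cheminformatics tools use much smarter search than this, but the worst case stays exponential in molecule size - note the nested enumeration over node subsets and injective mappings:

```python
# Toy brute-force maximum common induced subgraph between two tiny
# atom-labeled graphs (nodes carry element labels, edges are bonds).
# Purely illustrative: real MCS code prunes aggressively, but the
# underlying problem is still not solvable in polynomial time.
from itertools import combinations, permutations

def mcs_size(labels1, edges1, labels2, edges2):
    """Size of the largest label- and adjacency-preserving common subgraph."""
    nodes1, nodes2 = list(labels1), list(labels2)
    for k in range(min(len(nodes1), len(nodes2)), 0, -1):
        for sub in combinations(nodes1, k):          # candidate atoms in graph 1
            for image in permutations(nodes2, k):    # candidate match in graph 2
                m = dict(zip(sub, image))
                if all(labels1[a] == labels2[m[a]] for a in sub) and all(
                    ((m[a], m[b]) in edges2 or (m[b], m[a]) in edges2)
                    == ((a, b) in edges1 or (b, a) in edges1)
                    for a, b in combinations(sub, 2)
                ):
                    return k
    return 0

# Propane-like vs. ethanol-like fragments (C-C-C vs. C-C-O):
labels_a = {0: "C", 1: "C", 2: "C"}
edges_a = {(0, 1), (1, 2)}
labels_b = {0: "C", 1: "C", 2: "O"}
edges_b = {(0, 1), (1, 2)}
print(mcs_size(labels_a, edges_a, labels_b, edges_b))  # the shared C-C unit: 2
```

Even at drug-like sizes (25 to 40 heavy atoms) this enumeration is hopeless, which is why the practical tools all lean on heuristics - and why two chemists eyeballing the same pile of analogs can reasonably disagree about the scaffold.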

Another big question is how far you have to move in order to qualify as having hopped to another scaffold. My own preference is strictly empirical: if you've made a change that would be big enough to make most people draw a new Markush structure compared to your current series, you've scaffold-hopped. Ideally, you've kept the activity at your primary target, but changed it in the counterscreens or changed the ADMET properties. That's not to say that all these changes are going to be beneficial - people try this sort of thing all the time and wipe out the primary activity, or pick up even more clearance or hERG than the original series had. But those are the breaks.

And those are the main reasons that people do this sort of thing: to work out of a patent corner, to fix selectivity, or to get better properties. The appeal is that you might be able to address these without jettisoning everything you learned about the SAR of the previous compounds. If this is a topic of interest, especially from the computational angles, this book is certainly worth a look.

Comments (1) + TrackBacks (0) | Category: Drug Development | Patents and IP | Pharmacokinetics

January 14, 2014

A New Metabolism Predictor

Posted by Derek

Drug metabolism is a perennial topic for us small-molecule people. Watching your lovingly optimized molecules go through the shredding-machine of the liver is an instructive experience, not least when you consider how hard it would be for you to do some of the chemistry that it does. (For reference and getting up to speed on the details, the comments section here has had reader recommendations for the Drug Metabolism and Pharmacokinetics Quick Guide).

Here's a review of a new sites-of-metabolism predictor, FAME, a decision-tree type program that's been trained on data from 20,000 known compounds. It handles both Phase I and Phase II metabolism (a "Pharma 101" entry on that topic is here, for those who'd like to know more), and it looks like it's well worth considering if you're in need of something like this.
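For those who haven't seen one of these predictors in action, here's a toy hand-written stand-in for the decision-tree idea: score each atom with a few crude splits on simple descriptors. The rules, descriptors, and "molecule" below are purely illustrative inventions, not FAME's actual model (which learns its trees from the 20,000-compound training set):

```python
# Toy decision-tree-style site-of-metabolism flagging. Each atom is a
# dict of simple descriptors; the hand-written splits below stand in
# for rules a real model would learn from data. Illustrative only.

def flag_site(atom):
    """Return True if this atom looks like a plausible metabolism site
    under a couple of toy decision-tree-style splits."""
    if atom["element"] == "C" and atom["benzylic"]:
        return True   # benzylic positions: common site of oxidation
    if atom["element"] == "N" and atom["h_count"] == 0 and atom["degree"] == 3:
        return True   # tertiary amines: N-dealkylation / N-oxidation
    if atom["element"] == "S":
        return True   # sulfides: S-oxidation
    return False

molecule = [
    {"id": 1, "element": "C", "benzylic": True,  "h_count": 2, "degree": 2},
    {"id": 2, "element": "C", "benzylic": False, "h_count": 3, "degree": 1},
    {"id": 3, "element": "N", "benzylic": False, "h_count": 0, "degree": 3},
]
print([a["id"] for a in molecule if flag_site(a)])  # [1, 3]
```

The real programs differ from this sketch mainly in scale: many more descriptors per atom, trees fitted to thousands of observed metabolic transformations, and separate models for Phase I and Phase II chemistry.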

Here's my question for the med-chem and PK types: have you made use of predictive metabolism software? Did it save you time, or did you either go down the wrong alleys or not see anything you wouldn't have predicted yourself? I'm interested in real-world experiences, since I haven't had too many myself in this area.

Comments (10) + TrackBacks (0) | Category: In Silico | Pharmacokinetics

November 14, 2013

Nasty Odor as a Drug Side Effect

Posted by Derek

If you read the publications on the GSK compound (darapladib) that just failed in Phase III, you may notice something odd: they mention "odor" as a side effect in the clinical trial subjects. Say what?

If you look at the structure, there's a para-fluorobenzyl thioether in there, and I've heard that this is apparently not oxidized in vivo (oxidation being the usual fate of sulfides). That sends the potentially smelly parent compound (and other metabolites?) into general circulation, where it can exit in urine and feces and even show up in things like sweat and breath. Off the top of my head, I can't think of another modern drug that has a severe odor liability. Anyone have examples?

Update: plenty of examples in the comments!

Comments (49) + TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials | Pharmacokinetics

October 29, 2013

Unraveling An Off-Rate

Posted by Derek

Medicinal chemists talk a lot more about residence time and off rate than they used to. It's become clear that (at least in some cases) a key part of a drug's action is its kinetic behavior, specifically how quickly it leaves its binding