Corante

About this Author

Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases. To contact Derek, email him directly (derekb.lowe@gmail.com) or find him on Twitter: Dereklowe

Chemistry and Drug Data:
Drugbank
Emolecules
ChemSpider
Chempedia Lab
Synthetic Pages
Organic Chemistry Portal
PubChem
Not Voodoo
DailyMed
Druglib
Clinicaltrials.gov

Chemistry and Pharma Blogs:
Org Prep Daily
The Haystack
Kilomentor
A New Merck, Reviewed
Liberal Arts Chemistry
Electron Pusher
All Things Metathesis
C&E News Blogs
Chemiotics II
Chemical Space
Noel O'Blog
In Vivo Blog
Terra Sigillata
BBSRC/Douglas Kell
ChemBark
Realizations in Biostatistics
Chemjobber
Pharmalot
ChemSpider Blog
Pharmagossip
Med-Chemist
Organic Chem - Education & Industry
Pharma Strategy Blog
No Name No Slogan
Practical Fragments
SimBioSys
The Curious Wavefunction
Natural Product Man
Fragment Literature
Chemistry World Blog
Synthetic Nature
Chemistry Blog
Synthesizing Ideas
Business|Bytes|Genes|Molecules
Eye on FDA
Chemical Forums
Depth-First
Symyx Blog
Sceptical Chymist
Lamentations on Chemistry
Computational Organic Chemistry
Mining Drugs
Henry Rzepa


Science Blogs and News:
Bad Science
The Loom
Uncertain Principles
Fierce Biotech
Blogs for Industry
Omics! Omics!
Young Female Scientist
Notional Slurry
Nobel Intent
SciTech Daily
Science Blog
FuturePundit
Aetiology
Gene Expression (I)
Gene Expression (II)
Sciencebase
Pharyngula
Adventures in Ethics and Science
Transterrestrial Musings
Slashdot Science
Cosmic Variance
Biology News Net


Medical Blogs
DB's Medical Rants
Science-Based Medicine
GruntDoc
Respectful Insolence
Diabetes Mine


Economics and Business
Marginal Revolution
The Volokh Conspiracy
Knowledge Problem


Politics / Current Events
Virginia Postrel
Instapundit
Belmont Club
Mickey Kaus


Belles Lettres
Uncouth Reflections
Arts and Letters Daily

In the Pipeline

Category Archives: Pharmacokinetics

July 14, 2014

Modifying Red Blood Cells As Carriers

Posted by Derek

What's the best carrier to take some sort of therapeutic agent into the bloodstream? That's often a tricky question to work out in animal models or in the clinic - there are a lot of possibilities. But what about using red blood cells themselves?

That idea has been in the works for a few years now, and a recent paper in PNAS reports more progress (here's a press release). Many drug discovery scientists will have encountered the occasional compound that partitions into erythrocytes all by itself (those are usually spotted by their oddly long half-lives after in vivo dosing, mimicking the effect of plasma protein binding). One of the early ways that people tried to do this deliberately was to force a compound into the cells, but that tends to damage them and make them quite a bit less useful. A potentially more controllable method would be to modify the surfaces of the RBCs themselves to serve as drug carriers, but that's quite a bit more complex, too. Antibodies have been tried for this, but with mixed success.

That's what this latest paper addresses. The authors (the Lodish and Ploegh groups at Whitehead/MIT) introduce modified surface proteins (such as glycophorin A) that are substrates for Ploegh's sortase technology (two recent overview papers), which allows for a wide variety of labeling.

Experiments using modified fetal cells in irradiated mice gave animals that had up to 50% of their RBCs modified in this way. Sortase modification of these was about 85% effective, so plenty of label can be introduced. The labeling process doesn't appear to affect the viability of the cells very much as compared to wild-type - the cells were shown to circulate for weeks, which certainly breaks the records held by the other modified-RBC methods.

The team attached biotin tags and specific antibodies to both mouse and human RBCs, which would appear to clear the way for a variety of very interesting experiments. (They also showed that simultaneous C- and N-terminal labeling is feasible, to put on two different tags at once). Here's the "coming attractions" section of the paper:

The approach presented here has many other possible applications; the wide variety of possible payloads, ranging from proteins and peptides to synthetic compounds and fluorescent probes, may serve as a guide. We have conjugated a single-domain antibody to the RBC surface with full retention of binding specificity, thus enabling the modified RBCs to be targeted to a specific cell type. We envision that sortase-engineered cells could be combined with established protocols of small-molecule encapsulation. In this scenario, engineered RBCs loaded with a therapeutic agent in the cytosol and modified on the surface with a cell type-specific recognition module could be used to deliver payloads to a precise tissue or location in the body. We also have demonstrated the attachment of two different functional probes to the surface of RBCs, exploiting the subtly different recognition specificities of two distinct sortases. Therefore it should be possible to attach both a therapeutic moiety and a targeting module to the RBC surface and thus direct the engineered RBCs to tumors or other diseased cells. Conjugation of an imaging probe (i.e., a radioisotope), together with such a targeting moiety also could be used for diagnostic purposes.

This will be worth keeping an eye on, for sure, as a new delivery method for small (and not-so-small) molecules and for biologics, and for its application to all the immunological work going on now in oncology. This should keep everyone involved busy for some time to come!

Comments (7) + TrackBacks (0) | Category: Biological News | Chemical Biology | Pharmacokinetics

June 2, 2014

Single-Cell Compound Measurements - Now In A Real Animal

Posted by Derek

[Image: fluorescence imaging of labeled olaparib in cells]
Last year I mentioned an interesting paper that managed to do single-cell pharmacokinetics on olaparib, a poly(ADP-ribose) polymerase 1 (PARP1) inhibitor. A fluorescently tagged version of the drug could be spotted moving into cells and even accumulating in the nucleus. The usual warnings apply: adding a fluorescent tag can disturb the various molecular properties that you're trying to study in the first place. But the paper did a good set of control experiments to try to get around that problem, and this is still the only way known (for now) to get such data.

The authors are back with a follow-up paper that provides even more detail. They're using fluorescence polarization/fluorescence anisotropy microscopy. That can be a tricky technique, but done right, it provides a lot of information. The idea (as the assay-development people in the audience well know) is that when fluorescent molecules are excited by polarized light, their emission is affected by how fast they're rotating. If rotation slows down so that the molecules turn more slowly than their fluorescence lifetime (as happens when they're bound to a protein), then you see more polarization in the emitted light, but if the molecules are tumbling around freely, that polarization is mostly lost. There are numerous complications - you need to calibrate each new system against increasingly viscous solutions, the fluorophores can't get too close together, and you have to be careful with the field of view in your imaging system to avoid artifacts - but that's the short form.
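To put rough numbers on that picture, here's a minimal sketch of the Perrin relationship between anisotropy, fluorescence lifetime, and rotational correlation time. The lifetime and correlation times below are generic ballpark values for illustration, not figures taken from the paper.

```python
# Minimal sketch: steady-state fluorescence anisotropy via the Perrin equation,
#   r = r0 / (1 + tau_fl / theta_rot)
# where r0 is the limiting anisotropy, tau_fl the fluorescence lifetime, and
# theta_rot the rotational correlation time. All numbers are illustrative.

def perrin_anisotropy(r0: float, tau_fl_ns: float, theta_rot_ns: float) -> float:
    """Steady-state anisotropy for a freely rotating fluorophore."""
    return r0 / (1.0 + tau_fl_ns / theta_rot_ns)

r0 = 0.4        # typical limiting anisotropy for one-photon excitation
tau_fl = 5.0    # ns, ballpark fluorescence lifetime for a small-molecule dye

# Freely tumbling small molecule: rotation is much faster than the lifetime
r_free = perrin_anisotropy(r0, tau_fl, theta_rot_ns=0.3)

# Same probe bound to a large protein: rotation is much slower than the lifetime
r_bound = perrin_anisotropy(r0, tau_fl, theta_rot_ns=50.0)

print(f"anisotropy, free probe:  {r_free:.3f}")   # ~0.02 - emission largely depolarized
print(f"anisotropy, bound probe: {r_bound:.3f}")  # ~0.36 - polarization mostly retained
```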

In this case, they're using near-IR light for the excitation, because those wavelengths are well known to penetrate living tissue. Their system also needs two photons to excite each fluorophore, which improves signal-to-noise, and the two-photon dye is a BODIPY compound. These things have been used in fluorescence studies with wild abandon for the past few years - at one point, I was beginning to think that the acronym was a requirement to get a paper published in Chem. Comm. They have a lot of qualities (cell penetration, fluorescence lifetime, etc.) that make them excellent candidates for this kind of work.

This is the same olaparib/BODIPY hybrid used in the paper last year, and you see the results. The green fluorescence is nonspecific binding, while the red is localized to the nuclei, and doesn't wash out. If you soak the cells with unlabeled olaparib beforehand, though, you don't see this effect at all, which also argues for the PARP1-bound interpretation of these results. This paper takes things even further, though - after validating this in cultured cells, they moved on to live mice, using an implanted window chamber over a xenograft.

And they saw the same pattern: quick cellular uptake of the labeled drug on infusion into the mice, followed by rapid binding to nuclear PARP1. The intracellular fluorescence then cleared out over a half-hour period, but the nuclear-bound compound remained, and could be observed with good signal/noise. This is the first time I've seen an experiment like this. Although it's admittedly a special case (which takes advantage of a well-behaved fluorescently labeled drug conjugate, to name one big hurdle), it's a well-realized proof of concept. Anything that increases the chances of understanding what's going on with small molecules in real living systems is worth paying attention to. It's interesting to note, by the way, that the olaparib/PARP1 system was also studied in that recent whole-cell thermal shift assay technique, which does not need modified compounds. Bring on the comparisons! These two techniques can be used to validate each other, and we'll all be better off.

Comments (4) + TrackBacks (0) | Category: Biological News | Chemical Biology | Pharmacokinetics

March 25, 2014

A New Way to Study Hepatotoxicity

Posted by Derek

Every medicinal chemist fears and respects the liver. That's where our drugs go to die, or at least to be severely tested by that organ's array of powerful metabolizing enzymes. Getting a read on a drug candidate's hepatic stability is a crucial part of drug development, but there's an even bigger prize out there: predicting outright liver toxicity. That, when it happens, is very bad news indeed, and can torpedo a clinical compound that seemed to be doing just fine - up until then.

Unfortunately, getting a handle on liver tox has been difficult, even with such strong motivation. It's a tough problem. And given that most drugs are not hepatotoxic, most of the time, any new assay that overpredicts liver tox might be even worse than no assay at all. There's a paper in the latest Nature Biotechnology, though, that looks promising.

What the authors (from Stanford and Toronto) are doing is trying to step back to the early mechanism of liver damage. One hypothesis has been that the production of reactive oxygen species (ROS) inside hepatic cells is the initial signal of trouble. ROS are known to damage biomolecules, of course. But more subtly, they're also known to be involved in a number of pathways used to sense that cellular damage (and in that capacity, seem to be key players in inducing the beneficial effects of exercise, among other things). Aerobic cells have had to deal with the downsides of oxygen for so long that they've learned to make the most of it.
[Image: isoniazid]
This work (building on some previous studies from the same group) uses polymeric nanoparticles. They're semiconductors, and hooked up to be part of a fluorescence or chemiluminescence readout. (They use FRET for peroxynitrite and hypochlorite detection, more indicative of mitochondrial toxicity, and CRET for hydrogen peroxide, more indicative of Phase I metabolic toxicity). The particles are galactosylated to send them towards the liver cells in vivo, confirmed by necropsy and by confocal imaging. The assay system seemed to work well by itself, and in mouse serum, so they dosed it into mice and looked at what happened when the animals were given toxic doses of either acetaminophen or isoniazid (both well-known hepatotox compounds at high levels). And it seems to work pretty well - they could image both the fluorescence and the chemiluminescence across a time course, and the dose/response relationships make sense. It looks like they're picking up nanomolar to micromolar levels of reactive species. They could also show the expected rescue of the acetaminophen toxicity with known agents (like GSH), and could see differences between those agents, both in the magnitude of the effects and in their time courses.

The chemiluminescent detection has been done before, as has the FRET one, but this one seems to be more convenient to dose, and having both ROS detection systems going at once is nice, too. One hopes that this sort of thing really can provide a way to get a solid in vivo read on hepatotoxicity, because we sure need one. Toxicologists tend to be a conservative bunch, with good reason, so don't look for this to revolutionize the field by the end of the year or anything. But there's a lot of promise here.

There are some things to look out for, though. For one, since these experiments are necessarily being done in rodents, there will be differences in metabolism that will have to be taken into account, and some of those can be rather large. Not everything that injures a mouse liver will do so in humans, and vice versa. It's also worth remembering that hepatotoxicity is a major problem with marketed drugs, not just development candidates. That's going to be a much tougher problem to deal with, because some of these cases are due to overdose, some to drug-drug interactions, some to drug-alcohol interactions, and some to factors that no one's been able to pin down. One hopes, though, that if more drugs come through with a clean liver profile, these problems might ease a bit.

Comments (13) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics | Toxicology

March 21, 2014

Dosing by Body Surface Area

Posted by Derek

We were talking about allometry around here the other day, which prompts me to mention this paper. It used reports of resveratrol dosing in animals, crudely extrapolated to humans, to argue that body surface area (BSA) normalization was a superior technique for dose estimation across species.
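For anyone who hasn't run one of these extrapolations, here's a minimal sketch of the usual BSA-based conversion, using the commonly quoted Km factors (body weight divided by surface area) from the FDA starting-dose guidance. The 20 mg/kg rat dose is a made-up example, not a number from the paper.

```python
# Minimal sketch of BSA-based dose conversion between species, using the
# standard Km factors (body weight in kg divided by body surface area in m^2).
# The example dose is made up for illustration.

KM = {"mouse": 3.0, "rat": 6.0, "dog": 20.0, "human": 37.0}  # commonly quoted values

def human_equivalent_dose(animal_dose_mg_per_kg: float, species: str) -> float:
    """HED (mg/kg) = animal dose (mg/kg) * Km_animal / Km_human."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

hed = human_equivalent_dose(20.0, "rat")          # a hypothetical 20 mg/kg rat dose
print(f"HED: {hed:.1f} mg/kg")                    # ~3.2 mg/kg
print(f"For a 60 kg person: {hed * 60:.0f} mg")   # ~190 mg total
```

Note the contrast with straight mg/kg scaling, which would give 1200 mg for that same person; that correction is the whole point of the BSA approach.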

Over the years, though, the BSA method has taken some flak in the literature. It's most widely used in oncology, especially with cytotoxics, but there have been calls to move away from the practice, calling it a relic with little scientific foundation. (The rise of a very obese patient population has also led to controversy about whether body weight or surface area is a more appropriate dose-estimation method in those situations). At the same time, it's proven useful in some other situations, so it can't be completely ignored.

But it seems that the FASEB paper referenced in the first paragraph, which has been cited hundreds of times since 2008, may be overstating its conclusions. For example, it says that "BSA normalization of doses must be used to determine safe starting doses of new drugs because initial studies conducted in humans, by definition, lack formal allometric comparison of the pharmacokinetics of absorption, distribution, and elimination parameters", and cites its reference 13 for support. But when you go to that reference, you find that paper's authors concluding with things like this:

The customary use of BSA in dose calculations may contribute to the omission of these factors, give a false sense of accuracy and introduce error. It is questionable whether all current cancer treatment strategies are near optimal, or even ethical. BSA should be used for allometric scaling purposes in phase I clinical trials, as the scaling of toxicity data from animals is important for selecting starting doses in man, but the gradual discontinuation of BSA-based dosing of cytotoxic drugs in clinical practice is seemingly justified.

Citing a paper for support that flatly disagrees with your conclusions gets some points for bravado, but otherwise seems a bit odd. And there are others - that reference that I linked to in the second paragraph above, under "taken some flak", is cited in the FASEB paper as its reference 17, as something to do with choosing between various BSA equations. And it does address that, to be sure, but in the context of wondering whether the whole BSA technique has any clinical validity at all.

This is currently being argued out over at PubPeer, and it should be interesting to see what comes of it. I'll be glad to hear from pharmacokinetics and clinical research folks to see what they make of the whole situation.

Comments (17) + TrackBacks (0) | Category: Pharmacokinetics | The Scientific Literature

January 22, 2014

A New Book on Scaffold Hopping

Posted by Derek

I've been sent a copy of Scaffold Hopping in Medicinal Chemistry, a new volume from Wiley, edited by Nathan Brown of the Institute of Cancer Research in London. There are eighteen chapters - five on identifying and characterizing scaffolds to start with, ten on various computational approaches to scaffold-hopping, and three case histories.

One of the things you realize quickly when you start thinking about (or reading about) that topic is that scaffolds are in the eye of the beholder, and that's what those first chapters are trying to come to grips with. Figuring out the "maximum common substructure" of a large group of analogs, for example, is not an easy problem at all - certainly not by eyeballing, and not through computational means, either (it's not solvable in polynomial time, if we want to get formal about it). One chemist will look at a pile of compounds and say "Oh yeah, the isoxazoles from Project XYZ", while someone who hasn't seen them before might say "Hmm, a bunch of amide heterocycles" or "A bunch of heterobiaryls" or what have you.
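To make that concrete, here's a minimal sketch of a maximum common substructure search using RDKit's FindMCS. The three SMILES are arbitrary stand-ins I made up, not examples from the book, and the timeout parameter is there precisely because the exact problem blows up combinatorially.

```python
# Minimal sketch of a maximum common substructure (MCS) search with RDKit.
# The three SMILES are arbitrary illustrative "analogs", not compounds from the book.
from rdkit import Chem
from rdkit.Chem import rdFMCS

smiles = [
    "c1ccc2[nH]ccc2c1CCN",        # an indole carrying a basic side chain
    "c1ccc2[nH]ccc2c1CCNC(=O)C",  # the same core, acylated
    "c1ccc2occc2c1CCN",           # benzofuran swap - has the chemist "hopped"?
]
mols = [Chem.MolFromSmiles(s) for s in smiles]

# FindMCS takes a timeout (in seconds) because the exact problem is NP-hard;
# for large, diverse sets it may return only the best substructure found so far.
result = rdFMCS.FindMCS(mols, timeout=10)
print("MCS SMARTS:        ", result.smartsString)
print("atoms/bonds in MCS:", result.numAtoms, result.numBonds)
print("search timed out:  ", result.canceled)
```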

Another big question is how far you have to move in order to qualify as having hopped to another scaffold. My own preference is strictly empirical: if you've made a change that would be big enough to make most people draw a new Markush structure compared to your current series, you've scaffold-hopped. Ideally, you've kept the activity at your primary target, but changed it in the counterscreens or changed the ADMET properties. That's not to say that all these changes are going to be beneficial - people try this sort of thing all the time and wipe out the primary activity, or pick up even more clearance or hERG than the original series had. But those are the breaks.

And those are the main reasons that people do this sort of thing: to work out of a patent corner, to fix selectivity, or to get better properties. The appeal is that you might be able to address these without jettisoning everything you learned about the SAR of the previous compounds. If this is a topic of interest, especially from the computational angles, this book is certainly worth a look.

Comments (1) + TrackBacks (0) | Category: Drug Development | Patents and IP | Pharmacokinetics

January 14, 2014

A New Metabolism Predictor

Posted by Derek

Drug metabolism is a perennial topic for us small-molecule people. Watching your lovingly optimized molecules go through the shredding-machine of the liver is an instructive experience, not least when you consider how hard it would be for you to do some of the chemistry that it does. (For reference and getting up to speed on the details, the comments section here has had reader recommendations for the Drug Metabolism and Pharmacokinetics Quick Guide).

Here's a review of a new sites-of-metabolism predictor, FAME, a decision-tree type program that's been trained on data from 20,000 known compounds. It handles both Phase I and Phase II metabolism (a "Pharma 101" entry on that topic is here, for those who'd like to know more), and it looks like it's well worth considering if you're in need of something like this.
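FAME's own descriptors and training set are described in the paper, but to make the general idea concrete, here's a minimal sketch of that style of model: a few crude per-atom descriptors plus a tree-ensemble classifier. The descriptors, the random forest, and the hand-labeled toy molecules are all my own illustrative stand-ins, not anything taken from FAME itself.

```python
# Minimal sketch of a site-of-metabolism predictor in the FAME mold:
# per-atom descriptors plus a tree-based classifier. Everything here
# (descriptors, toy labels, model choice) is illustrative only.
from rdkit import Chem
from sklearn.ensemble import RandomForestClassifier

def atom_features(atom):
    """A few crude atom-level descriptors."""
    return [
        atom.GetAtomicNum(),
        atom.GetDegree(),
        atom.GetTotalNumHs(),
        int(atom.GetIsAromatic()),
        int(atom.IsInRing()),
    ]

def featurize(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return [atom_features(a) for a in mol.GetAtoms()]

# Toy "training set": atom indices hypothetically labeled as sites of metabolism.
train = {
    "CCOc1ccccc1": {1},        # pretend the O-CH2 carbon gets oxidized
    "CN(C)Cc1ccccc1": {0, 2},  # pretend the N-methyls get demethylated
}

X, y = [], []
for smi, sites in train.items():
    for idx, feats in enumerate(featurize(smi)):
        X.append(feats)
        y.append(1 if idx in sites else 0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank the atoms of a new molecule by predicted probability of being metabolized.
query = "CCN(CC)Cc1ccccc1"
probs = clf.predict_proba(featurize(query))[:, 1]
for idx, p in sorted(enumerate(probs), key=lambda t: -t[1])[:3]:
    print(f"atom {idx}: P(site of metabolism) ~ {p:.2f}")
```

A real model would obviously need far richer descriptors and thousands of experimentally annotated metabolic reactions, which is what the 20,000-compound training set is for.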

Here's my question for the med-chem and PK types: have you made use of predictive metabolism software? Did it save you time, or did you either go down the wrong alleys or not see anything you wouldn't have predicted yourself? I'm interested in real-world experiences, since I haven't had too many myself in this area.

Comments (10) + TrackBacks (0) | Category: In Silico | Pharmacokinetics

November 14, 2013

Nasty Odor as a Drug Side Effect

Posted by Derek

If you read the publications on the GSK compound (darapladib) that just failed in Phase III, you may notice something odd. These mention "odor" as a side effect in the clinical trial subjects. Say what?

If you look at the structure, there's a para-fluorobenzyl thioether in there, and I've heard that this is apparently not oxidized in vivo (a common fate for sulfides). That sends potentially smelly parent compound (and other metabolites?) into general circulation, where it can exit in urine and feces and even show up in things like sweat and breath. Off the top of my head, I can't think of another modern drug that has a severe odor liability. Anyone have examples?

Update: plenty of examples in the comments!

Comments (49) + TrackBacks (0) | Category: Cardiovascular Disease | Clinical Trials | Pharmacokinetics

October 29, 2013

Unraveling An Off-Rate

Posted by Derek

Medicinal chemists talk a lot more about residence time and off rate than they used to. It's become clear that (at least in some cases) a key part of a drug's action is its kinetic behavior, specifically how quickly it leaves its binding site. You'd think that this would correlate well with its potency, but that's not necessarily so. Binding constants are a mix of on- and off-rates, and you can get to the same number by a variety of different means. Only if you're looking at very similar compounds with the same binding modes can you expect the correlation your intuition is telling you about, and even then you don't always get it.
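A quick back-of-the-envelope illustration of that point, with made-up rate constants: two compounds can share exactly the same Kd while their residence times differ by a couple of orders of magnitude.

```python
# Illustration with made-up numbers: the same equilibrium constant can hide
# very different kinetics, since Kd = koff / kon and residence time = 1 / koff.
compounds = {
    # name: (kon in M^-1 s^-1, koff in s^-1)
    "fast on / fast off": (1e7, 1e-2),
    "slow on / slow off": (1e5, 1e-4),
}

for name, (kon, koff) in compounds.items():
    kd_nM = koff / kon * 1e9
    residence_min = (1.0 / koff) / 60.0
    print(f"{name}: Kd = {kd_nM:.1f} nM, residence time ~ {residence_min:.0f} min")

# Both come out at 1.0 nM, but the residence times are ~2 min versus ~170 min.
```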

There's a new paper in J. Med. Chem. from a team at Boehringer Ingelheim that takes a detailed look at this effect. The authors are working out the binding of the muscarinic receptor ligand tiotropium, which has been around a long time. (Boehringer's efforts in the muscarinic field have been around a long time, too, come to think of it). Tiotropium binds to the m2 subtype with a Ki of 0.2 nM, and to the m3 subtype with a Ki of 0.1 nM. But the compound has a much slower off-rate at the m3 subtype, enough to make it physiologically distinct as an m3 ligand. Tiotropium is better known by its brand name Spiriva, and if its functional selectivity for the m3 receptors in the lungs weren't pretty tight, it wouldn't be a drug. By carefully modifying its structure and introducing mutations into the receptor, this group hoped to figure out just why it's able to work the way it does.
[Image: tiotropium]
The static details of tiotropium binding are well worked out - in fact, there's a recent X-ray structure, adding to the list of GPCRs that have been investigated by X-ray crystallography. There are plenty of interactions, as those binding constants would suggest:

The orthosteric binding sites of hM3R and hM2R are virtually identical. The positively charged headgroup of the antimuscarinic agent binds to (in the class of amine receptors highly conserved) Asp^3.32 (D148^3.32) and is surrounded by an aromatic cage consisting of Y149^3.33, W504^6.48, Y507^6.51, Y530^7.39, and Y534^7.43. In addition to that, the aromatic substructures of the ligands dig into a hydrophobic region close to W200^4.57 and the hydroxy groups, together with the ester groups, are bidentally interacting with N508^6.52, forming close to optimal double hydrogen bonds. . .

The similarity of these binding sites was brought home to me when I was working on making selective antagonists of these receptors myself. (If you want a real challenge, try differentiating m2 and m4). The authors point out, though, and crucially, that if you want to understand how different compounds bind to these receptors, the static pictures you get from X-ray structures are not enough. Homology modeling helps a good deal, but only if you take its results as indicators of dynamic processes, and not just as swapped-out residues in a framework.

Doing point-by-point single changes in both the tiotropium structure and the receptor residues lets you use the kinetic data to your advantage. Such similar compounds should have similar modes of dissociation from the binding site. You can then compare off-rates to the binding constants, looking for the ones that deviate from the expected linear relationship. What they find is that the first event when tiotropium leaves the binding site is the opening of the aromatic cage mentioned above. Mutating any of these residues led to a big effect on the off-rate compared to the effect on the binding constant. Mutations further up along the tunnel leading to the binding site behaved in the same way: pretty much identical Ki values, but faster off-rates.

These observations, the paper says with commendable honesty, don't help the medicinal chemists all that much in designing compounds with better kinetics. You can imagine finding a compound that takes better advantage of this binding (maybe), but you can also imagine spending a lot of time trying to do that. The interaction with the asparagine at residue 508 is more useful from a drug design standpoint:

Our data provide evidence that the double hydrogen interaction of N508^6.52 with tiotropium has a crucial influence on the off-rates beyond its influence on Ki. Mutation of N508^6.52 to alanine accelerates the dissociation of tiotropium more than 1 order of magnitude than suggested by the Ki. Consequently, tiotropium derivatives devoid of the interacting hydroxy group show overproportionally short half-lives. Microsecond MD simulations show that this double hydrogen bonded interaction hinders tiotropium from moving into the exit channel by reducing the frequency of tyrosine-lid opening movements. Taken together, our data show that the interaction with N508^6.52 is indeed an essential prerequisite for the development of slowly dissociating muscarinic receptor inverse agonists. This hypothesis is corroborated by the a posteriori observation that the only highly conserved substructure of all long-acting antimuscarinic agents currently in clinical development or already on the market is the hydroxy group.

But the extracellular loops also get into the act. The m2 subtype's nearby loop seems to be more flexible than the one in m3, and there's a lysine in the m3 that probably contributes some electrostatic repulsion to the charged tiotropium as it tries to back out of the protein. That's another effect that's hard to take advantage of, since the charged region of the molecule is a key for binding down in the active site, and messing with it would probably not pay dividends.

But there are some good take-aways from this paper. The authors note that the X-ray structure, while valuable, seems to have largely confirmed the data generated by mutagenesis (as well it should). So if you're willing to do lots of point mutations, on both your ligand and your protein, you can (in theory) work some of these fine details out. Molecular dynamics simulations would seem to be of help here, too, at least in theory. I'd be interested to hear if people can corroborate that with real-world experience.

Comments (20) + TrackBacks (0) | Category: Drug Assays | In Silico | Pharmacokinetics | The Central Nervous System

September 25, 2013

Sugammadex's Problems: Is the Merck/Schering-Plough Deal the Worst?

Posted by Derek

That didn't take long. Just a few days after Roger Perlmutter at Merck had praised the team that developed Bridion (sugammadex), the FDA turned it down for the second time. The FDA seems to be worried about hypersensitivity reactions to the drug - those were the grounds on which it rejected the compound in 2008. Merck ran another study to address this, but the agency apparently is now concerned about how that trial was run. What we know, according to FiercePharma, is that they "needed to assess an inspection of a clinical trial site conducting the hypersensitivity study". Frustratingly for Merck, their application was approved in the EU back in that 2008 submission period.
[Image: sugammadex encapsulating rocuronium, from the Wikipedia article]
It's an odd compound, and it had a nomination in the "Ugliest Drug Candidate" competition I had here a while back. That's because it works by a very unusual mechanism. It's there to reverse the effects of rocuronium, a neuromuscular blocking agent used in anaesthesia. Sugammadex is a cyclodextrin derivative, a big cyclic polysaccharide of the sort that have been used to encapsulate many compounds in their central cavities. That's the mechanism behind the odor-controlling Febreze spray (interestingly, I've read that when that product was introduced, its original formulation failed in the market because it had no scent of its own, and consumers weren't ready for something with no smell that nonetheless decreased other odors). The illustration is from the Wikipedia article on sugammadex, and it shows very well how it's designed to bind rocuronium tightly so that the drug can no longer act at the acetylcholine receptor. Hats off to the Organon folks in Scotland who thought of this - pity that all of them must be long gone, isn't it?

You see, this is one of the drugs from Schering-Plough that Merck took up when they bought the company, but it was one of the compounds from Organon that Schering-Plough took up when they bought them. (How much patent life can this thing have left by now?) By the way, does anyone still remember the ridiculous setup by which Schering-Plough was supposed to be taking over Merck? Did all that maneuvering accomplish anything at all in the end? At any rate, Merck really doesn't seem to have gotten a lot out of the deal, and this latest rejection doesn't make it look any better. Not all of those problems were (or could have been) evident at the time, but enough of them were to make a person wonder. I'm willing to nominate it as "Most Pointless Big Pharma Merger", and would be glad to hear the case for other contenders.

Comments (29) + TrackBacks (0) | Category: Business and Markets | Clinical Trials | Pharmacokinetics | Regulatory Affairs | Toxicology

August 22, 2013

Too Many Metrics

Posted by Derek

Here's a new paper from Michael Shultz of Novartis, who is trying to cut through the mass of metrics for new compounds. I cannot resist quoting his opening paragraph, but I do not have a spare two hours to add all the links:

Approximately 15 years ago Lipinski et al. published their seminal work linking molecular properties with oral absorption [1]. Since this ‘Big Bang’ of physical property analysis, the universe of parameters, rules and optimization metrics has been expanding at an ever increasing rate (Figure 1) [2]. Relationships with molecular weight (MW), lipophilicity [3,4], ionization state [5], pKa, molecular volume and total polar surface area have been examined [6]. Aromatic rings [7,8], oxygen atoms, nitrogen atoms, sp3 carbon atoms [9], chiral atoms [9], non-hydrogen atoms, aromatic versus non-hydrogen atoms [10], aromatic atoms minus sp3 carbon atoms [6,11], hydrogen bond donors, hydrogen bond acceptors and rotatable bonds [12] have been counted and correlated [13]. In addition to the rules of five came the rules of 4/400 [14] and 3/75 [15]. Medicinal chemists can choose from composite parameters (or efficiency indices) such as ligand efficiency (LE) [16], group efficiency (GE), lipophilic efficiency/lipophilic ligand efficiency (LipE [17]/LLE) [18], ligand lipophilicity index (LLEAT) [19], ligand efficiency dependent lipophilicity (LELP), fit quality scaled ligand efficiency (LE_scale) [20], percentage efficiency index (PEI) [21], size independent ligand efficiency (SILE), binding efficiency index (BEI) or surface binding efficiency index (SEI) [22] and composite parameters are even now being used in combination [23]. Efficiency of binding kinetics has recently been introduced [24]. A new trend of anthropomorphizing molecular optimization has occurred as molecular ‘addictions’ and ‘obesity’ have been identified [25]. To help medicinal chemists there are guideposts [21], rules of thumb [14,26], a property forecast index [27], graphical representations of properties [28] such as efficiency maps, atlases [29], ChemGPS [30], traffic lights [31], radar plots [32], Craig plots [33], flower plots [34], egg plots [35], time series plots [36], oral bioavailability graphs [37], face diagrams [28], spider diagrams [38], the golden triangle [39] and the golden ratio [40].

He must have enjoyed writing that one, if not tracking down all the references. This paper is valuable right from the start just for having gathered all this into one place! But as you read on, you find that he's not too happy with many of these metrics - and since there's no way that they can all be equally correct, or equally useful, he sets himself the task of figuring out which ones we can discard. The last reference in the quoted section below is to the famous "Can a biologist fix a radio?" paper:

While individual composite parameters have been developed to address specific relationships between properties and structural features (e.g. solubility and aromatic ring count) the benefit may be outweighed by the contradictions that arise from utilizing several indices at once or the complexity of adopting and abandoning various metrics depending on the stage of molecular optimization. The average medicinal chemist can be overwhelmed by the ‘analysis fatigue’ that this plethora of new and contradictory tools, rules and visualizations now provide, especially when combined with the increasing number of safety, off-target, physicochemical property and ADME data acquired during optimization efforts. Decision making is impeded when evaluating information that is wrong or excessive and thus should be limited to the absolute minimum and most relevant available.

As Lazebnik described, sometimes the more facts we learn, the less we understand.

And he discards quite a few. All the equations that involve taking the log of potency and dividing by the heavy atom count (HAC), etc., are playing rather loose with the math:

To be valid, LE must remain constant for each heavy atom that changes potency 10-fold. This is not the case, as a 15 HAC compound with a pIC50 of 3 does not have the same LE as a 16 HAC compound with a pIC50 of 4 (ΔpIC50 = 1, ΔHAC = 1, ΔLE = 0.07). A 10-fold change in potency per heavy atom does not result in constant LE as defined by Hopkins, nor will it result in constant SILE, FQ or LLEAT values. These metrics do not mathematically normalize size or potency because they violate the quotient rule of logarithms. To obey this rule and be a valid mathematical function, HAC would be subtracted from pIC50 and rendered independent of size and reference potency.
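You can check that arithmetic for yourself with the usual approximation LE = 1.37 × pIC50 / HAC (1.37 kcal/mol being roughly the free-energy value of one log unit of potency at room temperature):

```python
# The arithmetic behind the quoted example, using the common approximation
# LE = 1.37 * pIC50 / HAC (about 1.37 kcal/mol per log unit of potency at ~300 K).
def ligand_efficiency(pIC50: float, hac: int) -> float:
    return 1.37 * pIC50 / hac

le_a = ligand_efficiency(3.0, 15)   # 15 heavy atoms, pIC50 = 3
le_b = ligand_efficiency(4.0, 16)   # add one heavy atom, gain one log of potency
print(f"LE = {le_a:.2f} vs {le_b:.2f}, delta = {le_b - le_a:.2f}")
# -> 0.27 vs 0.34, delta ~0.07: a "10-fold per heavy atom" change does not
#    hold LE constant, which is exactly Shultz's complaint.
```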

Note that he's not recommending that subtraction (HAC from pIC50) as a guideline, either. Another conceptual problem with plain heavy atom counting is that it treats all atoms the same, but that's clearly an oversimplification. But dividing by some form of molecular weight is an oversimplification, too: a nitrogen differs from an oxygen by a lot more than that 1 mass unit. (This topic came up here a little while back). But oversimplified or not - heck, mathematically valid or not - the question is whether these things help out enough when used as metrics in the real world. And Shultz would argue that they don't. Keeping LE the same (or even raising it) is supposed to be the sign of a successful optimization, but in practice, LE usually degrades. His take on this is that "Since lower ligand efficiency is indicative of both higher and lower probabilities of success (two mutually exclusive states) LE can be invalidated by not correlating with successful optimization."

I think that's too much of a leap - because successful drug programs have had their LE go down during the process, that doesn't mean that this was a necessary condition, or that they should have been aiming for that. Perhaps things would have been even better if they hadn't gone down (although I realize that arguing from things that didn't happen doesn't have much logical force). Try looking at it this way: a large number of successful drug programs have had someone high up in management trying to kill them along the way, as have (obviously) most of the unsuccessful ones. That would mean that upper management decisions to kill a program are also indicative of both higher and lower probabilities of success, and can thus be invalidated, too. Actually, he might be on to something there.

Shultz, though, finds that he's not able to invalidate LipE (or LLE), variously known as ligand-lipophilicity efficiency or lipophilic ligand efficiency. That's pIC50 - logP, which at least follows the way that logarithms of quotients are supposed to work. And it also has been shown to improve during known drug optimization campaigns. The paper has a thought experiment on some hypothetical compounds, as well as some data from a tankyrase inhibitor series, that seem to show LipE behaving more rationally than other metrics (which sometimes start pointing in opposite directions).

I found the chart below to be quite interesting. It uses the cLogP data from Paul Leeson and Brian Springthorpe's original LLE paper (linked in the above paragraph) to show how much the potency needs to change, when you swap a hydrogen in your molecule for one of the groups shown, if you're going to maintain a constant LipE value. So while hydrophobic groups tend to make things more potent, this puts a number on it. A t-butyl, for example, should make things about 50-fold more potent if it's going to pull its weight as a ball of grease. (Note that we're not talking about effects on PK and tox here, just sheer potency - if you play this game, though, you'd better be prepared to keep an eye on things downstream).
[Chart: potency change required to hold LipE constant for various substituents]
On the other end of the scale, a methoxy should, in theory, cut your potency roughly in half. If it doesn't, that's a good sign. A morpholine should be three or four times worse, and if it isn't, then it's found something at least marginally useful to do in your compound's binding site. What we're measuring here is the partitioning between your compound wanting to be in solution, and wanting to be in the binding site. More specifically, since logP is in the equation, we're looking at the difference in the partitioning of your compound between octanol and water, versus its partitioning between the target protein and water. I think we can all agree that we'd rather have compounds that bind because they like something about the active site, rather than just fleeing the solution phase.
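If you want to run the same game on other substituents, the arithmetic is simply 10 to the power of the clogP change. The increments below are rough values back-calculated from the fold-changes quoted above, not numbers taken from Leeson and Springthorpe's table:

```python
# Holding LipE = pIC50 - clogP constant: a substituent that changes clogP by
# delta must change potency by 10**delta just to break even. The delta values
# here are rough illustrations back-calculated from the text, not literature data.
delta_clogp = {
    "t-butyl":    1.7,    # needs ~50-fold more potency to pull its weight
    "methoxy":   -0.3,    # allowed to lose about half the potency
    "morpholine": -0.55,  # allowed to be roughly three or four times worse
}

for group, delta in delta_clogp.items():
    fold = 10 ** delta
    print(f"{group:>10s}: delta clogP {delta:+.2f} -> break-even potency change {fold:.2f}x")
```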

So in light of this paper, I'm rethinking my ligand-efficiency metrics. I'm still grappling with how LipE performs down at the fragment end of the molecular weight scale, and would be glad to hear thoughts on that. But Shultz's paper, if it can get us to toss out a lot of the proposed metrics already in the literature, will have done us all a service.

Comments (38) + TrackBacks (0) | Category: Drug Assays | Drug Development | In Silico | Pharmacokinetics

June 10, 2013

Deuterated Drugs: An Obvious Idea?

Posted by Derek

Nature Medicine has an update on the deuterated drug landscape. There are several compounds in the clinic, and the time to the first marketed deuterium-containing drug is surely counting down.

But, as mentioned at the end of that piece, another countdown that also must be ticking away is the one to the first lawsuit. There are several places where one could be fought out. The deuterated-drug landscape was the subject of a vigorous early land rush, and there are surely overlapping claims out there which will have to be sorted out if (when) the money starts to flow from the idea. And there's the whole problem of obviousness, a key patent-killer. The tricky thing is, standards of what is obvious to one skilled in the art change over time. They have to change; the art changes. (I'll risk some more gritted teeth among the readership by breaking into Latin again: Tempora mutantur, nos et mutamur in illis.)

We've already seen this with respect to single enantiomers - it's now considered obvious to resolve a racemic mixture, and to expect that the two isomers will have different activities as pharmaceuticals. At what point will it be considered obvious that deuteration can improve the pharmacokinetics? If that does ever happen, it'll take longer, because deuteration is not as simple a process as resolution of a racemate. It can be difficult (and, well, non-obvious) to figure out where to put the deuteriums for maximum effect, and how many need to be added. Adding them is not always so easy, either, which brings up questions of enablement and reduction to practice. You need to teach toward the compounds you want to claim, and for deuteration, that's going to mean getting pretty specific.

There's another consideration that I hadn't been aware of until this weekend. I had the chance to talk with a patent attorney at a social gathering (not everyone's idea of a big Saturday night, admittedly, but I enjoyed the whole affair). He was explaining to me a consequence of the Supreme Court's recent ruling on obviousness, the 2007 KSR v. Teleflex decision. Apparently, one of the major effects of that ruling was the idea that if there are a limited number of known options for an inventor to choose from, that can take the whole thing into the realm of the obvious. The actual language is that when ". . .there is a design need or market pressure to solve a problem and there are a finite number of identified, predictable solutions, a person of ordinary skill has good reason to pursue the known options within his or her technical grasp. . .the fact that a combination was obvious to try might show that it was obvious under § 103". You can see the PTO itself trying to come to grips with KSR here, and it seems to be very heavily cited indeed by examiners (and in subsequent court cases).

Naturally, as with legal matters, the big question becomes exactly what a limited number of options might mean. How many, exactly, is that? In the case of a racemate, you have two (only two, always two), and it's certainly reasonable to expect them to be different in vivo. So that would come under the KSR principle, I'd say, and it's not just me. But what if there are a limited number of places that a deuterium can be added to a molecule? At what point does deuterating them become, well, just one of those things that a person skilled in the art would know to try?

Expect a court case on this eventually, when some serious money starts to be made in the area. This is going to be fought out case by case, and it's going to take quite a while.

Comments (29) + TrackBacks (0) | Category: Patents and IP | Pharmacokinetics

May 7, 2013

An Update on Deuterium Drugs

Posted by Derek

In case you're wondering how the deuterated-drugs idea is coming along, the answer seems to be "just fine", at least for Concert Pharmaceuticals. They've announced their third collaboration inside of a year, this time with Celgene.

And they've got their own compound in development, CTP-499, in Phase II for diabetic nephropathy. That's a deutero analog of HDX (1-((S)-5-hydroxyhexyl)-3,7-dimethylxanthine), which is an active metabolite of the known xanthine drug pentoxifylline (which has also been investigated in diabetic kidney disease). You'd assume that deuteration makes this metabolite hang around longer before it's cleared, which is just the sort of profile shift that Concert is targeting.

Long-term, the deuteration idea has now diffused out into the general drug discovery world, and there will be no more easy pickings for it (well, at least not so many, depending on how competently patents are drafted). But if Concert can make a success out of what they already have going, they'll be set for a longer run than most startups.

Comments (15) + TrackBacks (0) | Category: Pharmacokinetics

April 24, 2013

Watching PARP1 Inhibitors Fail To Work, Cell By Cell

Posted by Derek

Here's something that's been sort of a dream of medicinal chemists and pharmacologists, and now can begin to be realized: single-cell pharmacokinetics. For those outside the field, you should know that we spend a lot of time on our drug candidates, evaluating whether they're actually getting to where we want them to. And there's a lot to unpack in that statement: the compound (if it's an oral dose) has to get out of the gut and into the bloodstream, survive the versatile shredding machine of the liver (which is where all the blood from the gut goes first), and get out into the general circulation.

But all destinations are not equal. Tissues with greater blood flow are always going to see more of any compound, for starters. Compounds can (and often do) stick to various blood components preferentially (albumin, red blood cells themselves, etc.), and ride around that way, which can be beneficial, problematic, or a complete non-issue, depending on how the med-chem gods feel about you that week. The brain is famously protected from the riff-raff in the blood supply, so if you want to get into the CNS, you have more to think about. If your compound is rather greasy, it may find other things it likes to stick to rather than hang around in solution anywhere.

And we haven't even talked about the cellular level yet. Is your target on the outside of the cells, or do you have to get in? If you do, you might find your compounds being pumped right back out. There are ongoing nasty arguments about compounds being pumped in in the first place, too, as opposed to just soaking through the membranes. The inside of a cell is a strange place, too, once you're there. The various organelles and structures all have their own affinities for different sorts of compounds, and if you need to get into the mitochondria or the nucleus, you've got another membrane barrier to cross.
[Image: imaging of the fluorescently labeled PARP1 inhibitor]
At this point, things really start to get fuzzy. It's only been in recent years that it's been possible to follow the traffic of individual species inside a cell, and it's still not trivial, by any means. Some of the techniques used to do it (fluorescent tags of various kinds) also can disturb the very systems you're trying to study. This latest paper uses such a fluorescent label, so you have to keep that in mind, but it's still quite impressive. The authors took a poly(ADP-ribose) polymerase 1 (PARP1) inhibitor (part of a class that has had all kinds of trouble in the clinic, despite a lot of biological rationale), attached a fluorescent tag, and watched in real time as it coursed through the vasculature of a tumor (on a time scale of seconds), soaked out into the interstitial space (minutes), and was taken up into the cells themselves (within an hour). Looking more deeply, they could see the compound accumulating in the nucleus (where PARP1 is located), so all indications are that it really does reach its target, and in sufficient amounts to have an effect.

But since it doesn't, there must be something about PARP1 and tumor biology that we're not quite grasping. Inhibiting DNA repair by this mechanism doesn't seem to be the death blow that we'd hoped for, but we now know that that's the place to figure out the failure of these inhibitors. Blaming some problems of delivery and distribution won't cut it.

Comments (24) + TrackBacks (0) | Category: Cancer | Pharmacokinetics

April 12, 2013

Nano-Drugs: Peaked, Or Maybe Past

Posted by Derek

Nano-everything has been the rule for several years now, to judge from press releases and poster abstracts. But here's an article in Nature Reviews Drug Discovery that's wondering what, exactly, "nanomedicine" has offered so far:

. . .Indeed, by some quantitative measures, the field is flourishing; over the past decade there has been an explosive growth in associated publications, patents, clinical trials and industry activity. For example, a search of the worldwide patent literature using 'nanoparticle and drug' resulted in over 30,000 hits. . .

New biomedical technologies have often undergone a similar life cycle. Initially, exciting pioneering studies result in a huge surge of enthusiasm in academia and in the commercial arena. Then, some of the problems and limitations inherent in the technology emerge, the initial enthusiasm is deflated, and many players leave the field. A few enthusiasts persist and eventually the technology finds its appropriate place in research as well as in clinical and commercial applications. It seems possible that nanomedicine is now verging on the phase of disillusionment.

That's exactly the cycle, and what's never clear is how steep the peaks and valleys are. Some of those deflationary cycles go so deep as to take out the entire field (which may or may not be rediscovered years later). That's not going to happen with nanoparticle drug delivery, but it's certainly not going to make everyone rich overnight. As the article goes on to detail, these formulations are expensive to make (and have tricky quality control issues), and they're not magic bullets for drug delivery across membranes, either. So far, the record of the ones that have made it to market is mixed:

. . .Addressing these challenges would be strongly justified if major benefits were to accrue to patients. But is this happening? Although we do not know the potential benefits of the nanomedicines currently under development, we can examine the early-generation nanoparticle drugs that entered the clinic in the 1990s and 2000s, such as the liposomal agents Doxil and Ambisome and the protein–drug nanocomplex Abraxane. These agents are far more costly than their parent drugs (doxorubicin, amphotericin B and paclitaxel, respectively). Furthermore, these nanomedicines made their mark in the clinic primarily by reducing toxicity rather than improving efficacy. . .

The big question is whether these toxicity reductions warrant the increased prices, and the answer isn't always obvious. The current generation of nanoformulations are different beasts, in many cases, and it's too early to say how they'll work out in the real world. But if you Google the words "drug nanoparticles revolution", you get page after page of stuff, and clearly not all of it is going to perform as hoped. Funding seems to be cresting (or to have crested) for this sort of thing for now, and I think that the whole field will have to prove itself some more before it climbs back up again.

Comments (16) + TrackBacks (0) | Category: Pharmacokinetics

March 13, 2013

Getting Down to Protein-Protein Compounds

Posted by Derek

Late last year I wrote about a paper that suggested that some "stapled peptides" might not work as well as advertised. I've been meaning to link to this C&E News article on the whole controversy - it's a fine overview of the area.

And that also gives me a chance to mention this review in Nature Chemistry (free full access). It's an excellent look at the entire topic of going after alpha-helix protein-protein interactions with small molecules. Articles like this really give you an appreciation for a good literature review - this information is scattered across the literature, and the authors here (from Leeds) have really done everyone interested in this topic a favor by collecting all of it and putting it into context.

As they say, you really have two choices if you're going after this sort of protein-protein interaction (well, three, if you count chucking the whole business and going to truck-driving school, but that option is not specific to this field). You can make something that's helical itself, so as to present the side chains in what you hope will be the correct orientation, or you can go after some completely different structure that just happens to arrange these groups into the right spots (but has no helical architecture itself).

Neither of these is going to lead to attractive molecules. The authors address this problem near the end of the paper, saying that we may be facing a choice here: make potent inhibitors of protein-protein interactions, or stay within Lipinski-guideline property space. Doing both at the same time just may not be possible. On the evidence so far, I think they're right. How we're going to get such things into cells, though, is a real problem (note this entry last fall on macrocyclic compounds, where the same concern naturally comes up). Since we don't seem to know much about why some compounds make it into cells and some don't, perhaps the way forward (for now) is to find a platform where as many big PPI candidates as possible can be evaluated quickly for activity (both in the relevant protein assay and then in cells). If we can't be smart enough, or not yet, maybe we can go after the problem with brute force.

With enough examples of success, we might be able to get a handle on what's happening. This means, though, that we'll have to generate a lot of complex structures quickly and in great variety, and if that's not a synthetic organic chemistry problem, I'd like to know what is. This is another example of a theme I come back to - that there are many issues in drug discovery that can only be answered by cutting-edge organic chemistry. We should be attacking these and making a case for how valuable the chemical component is, rather than letting ourselves be pigeonholed as a bunch of folks who run Suzuki couplings all day long and who might as well be outsourced to Fiji.

Comments (10) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics

January 31, 2013

In Case You Were Wondering What We Thought About the Liver

Posted by Derek

[Image: the liver-as-shredder graphic from the MedChemComm paper]
Courtesy of the vital site that is TOC ROFL, I wanted to highlight this graphic from this paper in MedChemComm. I always pictured the liver as sort of a sawmill or shredding machine for our drug candidates, with the hepatic portal vein being the conveyer belt hooked up on the front end. But I have to admit, this is a pretty vivid representation.

Update: See Arr Oh has a few issues - rightly so - with the molecule being munched on. . .

Comments (19) + TrackBacks (0) | Category: Pharmacokinetics

January 23, 2013

Eating A Whole Bunch of Random Compounds

Posted by Derek

Reader Andy Breuninger, from completely outside the biopharma business, sends along what I think is an interesting question, and one that bears on a number of issues:

A question has been bugging me that I hope you might answer.

My understanding is that a lot of your work comes down to taking a seed molecule and exploring a range of derived molecules using various metrics and tests to estimate how likely they are to be useful drugs.

My question is this: if you took a normal seed molecule and a standard set of modifications, generated a set of derived molecules at random, and ate a reasonable dose of each, what would happen? Would 99% be horribly toxic? Would 99% have no effect? Would their effects be roughly the same or would one give you the hives, another nausea, and a third make your big toe hurt?

His impression of drug discovery is pretty accurate. It very often is just that: taking one or more lead compounds and running variations on them, trying to optimize potency, specificity, blood levels/absorption/clearance, toxicology, and so on. So, what do most of these compounds do in vivo?

My first thought is "Depends on where you start". There are several issues: (1) We tend to have a defined target in mind when we pick a lead compound, or (if it's a phenotypic assay that got us there), we have a defined activity that we've already seen. So things are biased right from the start; we're already looking at a higher chance of biological activity than you'd have by randomly picking something out of a catalog or drawing something on a board.

And the sort of target can make a big difference. There are an awful lot of kinase enzymes, for example, and compounds tend to cross-react with them, at least within nearby families, unless you take a lot of care to keep that from happening. Compounds for the G-protein coupled biogenic amine receptors tend to do that, too. On the other hand, you have enzymes like the cytochromes and binding sites like the aryl hydrocarbon receptor - these things have evolved to recognize all sorts of structurally disparate stuff. So against the right (or wrong!) sort of targets, you could expect to see a wide range of potential side activities, even before hitting the random ones.

(2) Some structural classes have a lot more biological activity than others. A lot of small-molecule drugs, for example, have some sort of basic amine in them. That's an important recognition element for naturally occurring substances, and we've found similar patterns in our own compounds. So something without nitrogens at all, I'd say, has a lower chance of being active in a living organism. (Barry Sharpless seems to agree with this). That's not to say that there aren't plenty of CHO compounds that can do you harm, just that there are proportionally more CHON ones that can.

Past that rough distinction, there are pharmacophores that tend to hit a lot, sometimes to the point that they're better avoided. Others are just the starting points for a lot of interesting and active compounds - piperazines and imidazoles are two cores that come to mind. I'd be willing to bet that a thousand random piperazines would hit more things than a thousand random morpholines (other things being roughly equal, like molecular weight and polarity), and either of them would hit a lot more than a thousand random cyclohexanes.

(3) Properties can make a big difference. The Lipinski Rule-of-Five criteria come in for a lot of bashing around here, but if I were forced to eat a thousand random compounds that fit those cutoffs, versus having the option to eat a thousand random ones that didn't, I sure know which ones I'd dig my spoon into. (A quick sketch of those cutoffs as a filter follows this list.)

And finally, (4): the dose makes the poison. If you go up enough in dose, it's safe to say that you're going to see an in vivo response to almost anything, including plenty of stuff at the supermarket. Similarly, I could almost certainly eat a microgram of any compound we have in our company's files with no ill effect, although I am not motivated to put that idea to the test. Same goes for the time that you're exposed. A lot of compounds are tolerated for single-dose tox but fail at two weeks. Compounds that make it through two weeks don't always make it to six months, and so on.

How closely you look makes the poison, too. We find that out all the time when we do animal studies - a compound that seems to cause no overt effects might be seen, on necropsy, to have affected some internal organs. And one that doesn't seem to have any visible signs on the tissues can still show effects in a full histopathology workup. The same goes for blood work and other analyses; the more you look, the more you'll see. If you get down to gene-chip analysis, looking at expression levels of thousands of proteins, then you'd find that most things at the supermarket would light up. Broccoli, horseradish, grapefruit, garlic and any number of other things would kick a full expression-profiling assay all over the place.
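
To put some numbers on point (3): the Rule-of-Five cutoffs are just molecular weight, calculated logP, and hydrogen-bond donor/acceptor counts. Here's a minimal sketch of them as a filter, assuming the open-source RDKit toolkit is available; passing it says nothing about safety, of course - it's only the crude property screen being discussed above.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five_violations(smiles):
    """Count Lipinski Rule-of-Five violations for a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("could not parse SMILES: %s" % smiles)
    violations = 0
    if Descriptors.MolWt(mol) > 500:       # molecular weight cutoff
        violations += 1
    if Descriptors.MolLogP(mol) > 5:       # calculated logP cutoff
        violations += 1
    if Lipinski.NumHDonors(mol) > 5:       # H-bond donors
        violations += 1
    if Lipinski.NumHAcceptors(mol) > 10:   # H-bond acceptors
        violations += 1
    return violations

# Example: caffeine sails through with no violations.
print(rule_of_five_violations("Cn1cnc2c1c(=O)n(C)c(=O)n2C"))  # 0
```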

So, back to the question at hand. My thinking is that if you took a typical lead compound and dosed it at a reasonable level, along with a large set of analogs, then you'd probably find that any of them with overt effects would have a similar profile (for good or bad) to whatever the most active compound was, just less of it. The others wouldn't be as potent at the target, or wouldn't reach the same blood levels. The chances of finding some noticeable but completely different activity would be lower, but very definitely non-zero, and would be wildly variable depending on the compound class. These effects might well cluster into the usual sorts of reactions that the body has to foreign substances - nausea, dizziness, headache, and the like. Overall, odds are that most of the compounds wouldn't show much, not being potent enough at any given target, or not reaching high enough blood levels to show something, but that's also highly variable. And if you looked closely enough, you'd probably find that they all did something, at some level.

Just in my own experience, I've seen one compound out of a series of dopamine receptor ligands suddenly turn up as a vasodilator, noticeable because of the "Rudolph the Red-Nosed Rodent" effect (red ears and tail, too). I've also seen compound series where they started crossing the blood-brain barrier more effectively at some point, which led to a sharp demarcation in the tolerability studies. And I've seen many cases, when we've started looking at broader counterscreens, where the change of one particular functional group completely knocked a compound out of (or into) activity in some side assay. So you can never be sure. . .

Comments (22) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharma 101 | Pharmacokinetics | Toxicology

December 17, 2012

Stapled Peptides Take a Torpedo

Email This Entry

Posted by Derek

I wrote here about "stapled peptides", which are small modified helical proteins. They've had their helices stabilized by good ol' organic synthesis, with artificial molecular bridging between the loops. There are several ways to do this, but they all seem to be directed towards the same end.

That end is something that acts like the original protein at its binding site, but acts more like a small molecule in absorption, metabolism, and distribution. Bridging those two worlds is a very worthwhile goal indeed. We know of hordes of useful proteins, ranging from small hormones to large growth factors, that would be useful drugs if we could dose them without their being cleared quickly (or not making it into the bloodstream in the first place). Oral dosing is the hardest thing to arrange. The gut is a very hostile place for proteins - there's a lot of very highly developed machinery in there devoted to ripping everything apart. Your intestines will not distinguish the life-saving protein ligand you just took from the protein in a burrito, and will act accordingly. And even if you give things intravenously, as is done with the protein drugs that have actually made it to clinical use (insulin, EPO, etc.), getting their half-lives up to standard can be a real challenge.

So the field of chemically modified peptides and proteins is a big one, because the stakes are high. Finding small molecules that modulate protein-protein interactions is quite painful; if we could just skip that part, we'd be having a better time of it in this industry. There's an entire company (Aileron, just down the road from me) working on this idea, and many others besides. So, how's it going?

Well, this new paper will cause you to wonder about that. It's from groups in Australia and at Genentech (Note: edited for proper credit here), and they get right down to it in the first paragraph:

Stabilized helical peptides are designed to mimic an α-helical structure through a constraint imposed by covalently linking two residues on the same helical face (e.g., residue i with i + 4). “Stapling” the peptide into a preformed helix might be expected to lower the energy barrier for binding by reducing entropic costs, with a concomitant increase in binding affinity. Additionally, stabilizing the peptide may reduce degradation by proteases and, in the case of hydrocarbon linkages, reportedly enhance transport into cells, thereby improving bioavailability and their potential as therapeutic agents. The findings we present here for the stapled BH3 peptide (BimSAHB), however, do not support these claims, particularly in regards to affinity and cell permeability.

They go on to detail their lack of cellular assay success with the reported stapled peptide, and suggest that this is due to lack of cell permeability. And since the non-stapled peptide control was just as effective on artificially permeabilized cells, they did more studies to try to figure out what the point of the whole business is. A detailed binding study showed that the stapled peptide had lower affinity for its targets, with slower on-rates and faster off-rates. X-ray crystallography suggested that modifying the peptide disrupted several important interactions.

Update: After reading the comments so far, I want to emphasize that this paper, as far as I can see, is using the exact same stapled peptide as was used in the previous work. So this isn't just a case of a new system behaving differently; this seems to be the same system not behaving the way that it was reported to.

The entire "staple a peptide to make it a better version of itself" idea comes in for some criticism, too:

Our findings recapitulate earlier observations that stapling of peptides to enforce helicity does not necessarily impart enhanced binding affinity for target proteins and support the notion that interactions between the staple and target protein may be required for high affinity interactions in some circumstances.19 Thus, the design of stapled peptides should consider how the staple might interact with both the target and the rest of the peptide, and particularly in the latter case whether its introduction might disrupt otherwise stabilizing interactions.

That would be more in line with my own intuition, for what it's worth, which is that making such changes to a peptide helix would turn it into another molecule entirely, rather than (necessarily) making it into an enhanced version of what it was before. Unfortunately, at least in this case, this new molecule doesn't seem to have any advantages over the original, at least in the hands of the Genentech group. This is, as they say, very much in contrast to the earlier reports. How to resolve the discrepancies? And how to factor in that Roche has a deal with Aileron for stapled-peptide technology, and this very article is (partly) from Genentech, now a part of Roche? A great deal of dust has just been stirred up; watching it settle will be interesting. . .

Comments (29) + TrackBacks (0) | Category: Cancer | Chemical Biology | Pharmacokinetics

October 23, 2012

Improving Half-Life

Email This Entry

Posted by Derek

There was a question in the comments from a reader who's picking up med-chem, and I thought it was worth answering out here. I've been meaning to shore up the "Pharma 101" category, and this is a good opportunity. So how, in a case like that compound in the previous post, do you increase a compound's half-life?

The first thing to do is try to figure out why it's so short. That's almost certainly due to the compound being metabolized and excreted - once in a while, you'll find a compound that quietly partitions into some tissue and hides out, but for the most part, a disappearing compound is getting chewed up and spit out. For one that's being injected like this, you'd want to look in the blood for metabolites, and in the urine for those and the parent compound, and try to see how much you can account for. No point in checking feces or the bile contents - if this thing were dosed orally, though, you'd definitely not ignore those possibilities.
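
As a reference point for "why is it so short": in the standard one-compartment picture, half-life is set jointly by clearance and volume of distribution, so a short half-life usually means high clearance, a small volume of distribution, or both. A minimal sketch of that arithmetic (illustrative numbers only, nothing to do with the compound in question):

```python
import math

def half_life_hours(clearance_ml_min_kg, vd_l_kg):
    """Terminal half-life from clearance and volume of distribution
    (one-compartment approximation): t1/2 = ln(2) * Vd / CL."""
    cl_l_hr_kg = clearance_ml_min_kg * 60.0 / 1000.0   # mL/min/kg -> L/hr/kg
    return math.log(2) * vd_l_kg / cl_l_hr_kg

# A high-clearance, low-Vd compound disappears fast:
print(half_life_hours(clearance_ml_min_kg=50, vd_l_kg=0.5))   # about 0.12 h
# Cut the clearance tenfold and you buy roughly tenfold more half-life:
print(half_life_hours(clearance_ml_min_kg=5, vd_l_kg=0.5))    # about 1.2 h
```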

Looking for metabolites is something of a black art. There are plenty of standard things to check, like the addition of multiples of 16 (for oxidations). Examination of the structure can give you clues as well. I'd consider what pieces I'd see after cleavage of each of those amide bonds, for example, and look for those (and their oxidation products). The bromine and iodine will help you track things down in the mass spec, for sure. That phenol over on the right-hand side is a candidate for glucuronidation (or some other secondary metabolite), either of the parent or some piece thereof, so you'd want to look for those. Same thing could happen to some of the free acids after cleavage of the amides. And I have no idea what that difluorophosphonate does, but I'd be rooting through the PK literature to find out what such things have done in the past.
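
For the mass-spec bookkeeping, a first pass often amounts to adding a table of common biotransformation mass shifts to the parent mass and seeing which observed peaks line up. Here's a minimal sketch of that idea; the shift values are the standard monoisotopic ones, while the parent mass and peak list below are invented purely for illustration:

```python
# Common phase I / phase II mass shifts (monoisotopic, Da)
COMMON_SHIFTS = {
    "hydroxylation / oxidation (+O)": +15.9949,
    "di-oxidation (+2O)":             +31.9898,
    "demethylation (-CH2)":           -14.0157,
    "glucuronidation (+C6H8O6)":      +176.0321,
    "sulfation (+SO3)":               +79.9568,
    "hydrogenation (+H2)":            +2.0157,
}

def flag_candidate_metabolites(parent_mass, observed_masses, tol=0.01):
    """Return (observed_mass, transformation) pairs within tolerance.
    Masses are neutral monoisotopic values, already corrected for adducts."""
    hits = []
    for obs in observed_masses:
        for name, shift in COMMON_SHIFTS.items():
            if abs(obs - (parent_mass + shift)) <= tol:
                hits.append((obs, name))
    return hits

# Hypothetical parent and LC-MS peak list, for illustration only:
print(flag_candidate_metabolites(451.102, [467.097, 627.134, 380.080]))
```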

If you can establish some major metabolic routes, then you can think about hardening the structure. What if some of those amides are N-methylated, for example? Can you do that without killing the binding? Would putting another atom on the other side of the phenol affect its conjugation? There are all sorts of tricks, mostly involving steric hindrance and/or changing electron density around some hot spot.

Update: a commenter notes that I've left out prodrugs, and that's quite right. A prodrug is a sort of deliberate metabolism. You put in a group that gets slowly cleaved off, liberating the active compound - esters are a favorite strategy of this sort. Much of the time, a prodrug is put on to improve the solubility and/or absorption of a compound (that is, something polar and soluble grafted onto a brick), but they can certainly influence half-life, too.

The other major strategy is formulation. If you really can't shore up your structure, or if that isn't enough, then you can think about some formulation that'll deliver your compound differently. Would some sort of slow-release help? These things are trickier with injectables than they are with oral medications, from what experience I've had, but there are still things that can be done.

So that's a short answer - there are, of course, a lot of details involved, and a lot of tricks that have been developed over the years. But that's one way to start.

Comments (16) + TrackBacks (0) | Category: Pharma 101 | Pharmacokinetics

September 18, 2012

Going After the Big Cyclic Ones

Email This Entry

Posted by Derek

I wrote last year about macrocyclic compounds and their potential as drugs. Now BioCentury has a review of the companies working in this area, and there are more of them than I thought. Ensemble and Aileron are two that come to mind (if you count "stapled peptides" as macrocycles, and I think they should). But there are also Bicycle, Encycle, Lanthio, Oncodesign, Pepscan, PeptiDream, Polyphor, Protagonist, and Tranzyme. These companies have a lot of different approaches. Many of them (but not all) are using cyclic peptides, but there are different ways of linking these, different sorts of amino acids you can use in them, and so on. And the non-peptidic approaches have an even wider variety. So I've no doubt that there's room in this area for all these companies - but I also have no doubt that not all these approaches are going to work equally well. And we're just barely getting to the outer fringes of sorting that out:

While much of the excitement over macrocycles is due to their potential to disrupt intracellular protein-protein interactions, every currently disclosed lead program in the space targets an extracellular protein. This reality reflects the challenge of developing a potent and cell-penetrant macrocyclic compound.

Tranzyme and Polyphor are the only companies with macrocyclic compounds in the clinic. Polyphor’s lead compound is POL6326, a conformationally constrained peptide that antagonizes CXC chemokine receptor 4 (CXCR4; NPY3R). It is in Phase II testing to treat multiple myeloma (MM) using autologous transplantation of hematopoietic stem cells.

Tranzyme’s lead compound is TZP-102, an orally administered ghrelin receptor agonist in Phase IIb testing to treat diabetic gastroparesis.

Two weeks ago, Aileron announced it hopes to start clinical development of its lead internally developed program in 2013. The compound, ALRN-5281, targets the growth hormone-releasing hormone (GHRH) receptor.

Early days, then. It's understandable that the first attempts in this area will come via extracellular-acting, iv-administered agents - those are the lowest bars to clear for a new technology. But if this area is going to live up to its potential, it'll have to go much further along than that. We're going to have to learn a lot more about cellular permeability, which is a very large side effect (a "positive externality", as the economists say) of pushing the frontiers back like this: you figure these things out because you have to.

Comments (9) + TrackBacks (0) | Category: Drug Development | Pharmacokinetics

July 19, 2012

Come Back Thiophene; All Is Forgiven

Email This Entry

Posted by Derek

A couple of commenters took exception to my words yesterday about thiophene not being a "real" heterocycle. And I have to say, on reflection, that they're right. When I think about it, I have seen an example myself, in a project some years ago, where thiophene-for-phenyl was not a silent switch. If I recall correctly, the thiophene was surprisingly more potent, and that seems to be the direction that other people have seen as well. Anyone know of an example where a thiophene kills the activity compared to a phenyl?

That said, the great majority of the times I've seen matched pairs of compounds with this change, there's been no real difference in activity. I haven't seen as many PK comparisons, but the ones I can think of have been pretty close. That's not always the case, though: Plavix (clopidogrel) is the canonical example of a thiophene that gets metabolically unzipped (scroll down on that page to "Pharmacokinetics and metabolism" to see the scheme). You're not going to see a phenyl ring do that, of course - it'll get oxidized to the phenol, likely as not, but that'll get glucuronidated or something and sluiced out the kidneys, taking everything else with it. But note also that depending on things like CYP2C19 to produce your active drug for you is not without risks: people vary in their enzyme profiles, and you might find that your blood levels in a real patient population are rather jumpier than you'd hoped for.

So I'll take back my comments: thiophene really is (or at least can be) a heterocycle all its own, and not just a phenyl with eye makeup. But one of the conclusions of that GSK paper was that it's not such a great heterocycle for drug development, in the end.

Comments (16) + TrackBacks (0) | Category: Life in the Drug Labs | Pharmacokinetics

June 13, 2012

Live By The Bricks, Die By The Bricks

Email This Entry

Posted by Derek

I wanted to highlight a couple of recent examples from the literature to show what happens (all too often) when you start to optimize med-chem compounds. The earlier phases of a project tend to drive on potency and selectivity, and the usual way to get these things is to add more stuff to your structures. Then as you start to produce compounds that make it past those important cutoffs, your focus turns more to pharmacokinetics and metabolism, and sometimes you find you've made your life rather difficult. It's an old trap, and a well-known one, but that doesn't stop people from sticking a leg into it.

Take a look at these two structures from ACS Chemical Biology. The starting structure is a pretty generic-looking kinase inhibitor, and as the graphic to its left shows, it does indeed hit a whole slew of kinases. These authors extended the structure out to another loop of their desired target, c-Src, and as you can see, they now have a much more selective compound.
kinase%20inhibitor.png
But at such a price! Four more aromatic rings, including the dread biphenyl, and only one sp3 carbon in the lot. The compound now tips the scales at MW 555, and looks about as soluble as the Chrysler building. To be fair, this is an academic group, which means that they're presumably after a tool compound. That's a phrase that's used to excuse a lot of sins, but in this case they do have cellular assay data, which means that despite this compound's properties, it's managing to do something. Update: see this comment from the author on this very point. Be warned, though, if you're in drug discovery and you follow this strategy. Adding four flat rings and running up the molecular weight might work for you, but most of the time it will only lead to trouble - pharmacokinetics, metabolic clearance, toxicity, formulation.

My second example is from a drug discovery group (Janssen). They report work on a series of gamma-secretase modulators (GSMs) for Alzheimer's. You can tell from the paper that they had quite a wild ride with these things - for one, the activity in their mouse model didn't seem to correlate at all with the concentration of the compounds in the brain. Looking at those structures, though, you have to think that trouble is lurking, and so it is.
secretase.png

"In all chemical classes, the high potency was accompanied by high lipophilicity (in general, cLogP >5) and a TPSA [topological polar surface area] below 75 Å, explaining the good brain penetration. However, the majority of compounds also suffered from hERG binding with IC50s below 1 μM, CyP inhibition and low solubility, particularly at pH = 7.4 (data not shown). These unfavorable ADME properties can likely be attributed to the combination of high lipophilicity and low TPSA.

That they can. By the time they got to that compound 44, some of these problems had been solved (hERG, CyP). But it's still a very hard-to-dose compound (they seem to have gone with a pretty aggressive suspension formulation) and it's still a greasy brick, despite its impressive in vivo activity. And that's my point. Working this way exposes you to one thing after another. Making greasy bricks often leads to potent in vitro assay numbers, but they're harder to get going in vivo. And if you get them to work in the animals, you often face PK and metabolic problems. And if you manage to work your way around those, you run a much higher risk of nonspecific toxicity. So guess what happened here? You have to go to the very end of the paper to find out:

As many of the GSMs described to date, the series detailed in this paper, including 44a, is suffering from suboptimal physicochemical properties: low solubility, high lipophilicity, and high aromaticity. For 44a, this has translated into signs of liver toxicity after dosing in dog at 20 mg/kg. Further optimization of the drug-like properties of this series is ongoing.

Back to the drawing board, in other words. I wish them luck, but I wonder how much of this structure is going to have to be ripped up and redone in order to get something cleaner?

Comments (39) + TrackBacks (0) | Category: Alzheimer's Disease | Cancer | Drug Development | Pharmacokinetics | Toxicology

May 24, 2012

An Oral Insulin Pill?

Email This Entry

Posted by Derek

Bloomberg has an article on Novo Nordisk and their huge ongoing effort to come up with an orally available form of insulin. That's been a dream for a long time now, but it's always been thought to be very close to impossible. The reasons for this are well known: your gut will treat a big protein like insulin pretty much like it treats a hamburger. It'll get digested, chopped into its constituent amino acids, and absorbed as non-medicinally-active bits which are used as raw material once inside the body. That's what digestion is. The gut wall specifically guards against letting large biomolecules through intact.

So you're up against a lot of defenses when you try to make something like oral insulin. Modifying the protein itself to make it more permeable and stable will be a big part of it, and formulating the pill to escape the worst of the gut environments will be another. Even then, you have to wonder about patient-to-patient variability in digestion, intestinal flora, and so on. The dosing is probably going to have to be pretty strict with respect to meals (and the content of those meals).

But insulin dosing is always going to be strict, because there's a narrow window to work in. That's one of the factors that's helped to sink so many other alternative-dosing schemes for it, most famously Pfizer's Exubera. The body's response to insulin is brittle in the extreme. If you take twice as much antihistamine as you should, you may feel funny. If you take twice as much insulin as you should, you're going to be on the floor, and you may stay there.

So I salute Novo Nordisk for trying this. The rewards will be huge if they get it to work, but it's a long way from working just yet.

Comments (32) + TrackBacks (0) | Category: Diabetes and Obesity | Drug Development | Pharmacokinetics

May 16, 2012

Antidepressant Drugs and Cell Membranes

Email This Entry

Posted by Derek

How much do we really know about what small drug molecules do when they get into cells? Everyone involved in this sort of research wonders about this question, especially when it comes to toxicology. There's a new paper out in PLoS One that will cause you to think even harder.

The researchers (from Princeton) looked at the effects of the antidepressant sertraline, a serotonin reuptake inhibitor. They did a careful study in yeast cells on its effects, and that may have some of you raising your eyebrows already. That's because yeast doesn't even have a serotonin transporter. In a perfect pharmacological world, sertraline would do nothing at all in this system.

We don't live in that world. The group found that the drug does enter yeast cells, mostly by diffusion, with a bit of acceleration due to proton motive force and some reverse transport by efflux pumps. (This is worth considering in light of those discussions we were having here the other day about transport into cells). At equilibrium, most (85 to 90%) of the sertraline that makes it into a yeast cell is stuck to various membranes, mostly ones involved in vesicle formation, either through electrostatic forces or buried in the lipid bilayer. It's not setting off any receptors - there aren't any - so what happens when it's just hanging around in there?

More than you'd think, apparently. There's enough drug in there to make some of the membranes curve abnormally, which triggers a local autophagic response. (The paper has electron micrographs of funny-looking Golgi membranes and other organelles). This apparently accounts for the odd fact, noticed several years ago, that some serotonin reuptake inhibitors have antifungal activity. This probably applies to the whole class of cationic amphiphilic/amphipathic drug structures.

The big question is what happens in mammalian cells at normal doses of such compounds. These may well not be enough to cause membrane trouble, but there's already evidence to the contrary. A second big question is: does this effect account for some of the actual neurological effects of these drugs? And a third one is, how many other compounds are doing something similar? The more you look, the more you find. . .

Comments (25) + TrackBacks (0) | Category: Drug Assays | Pharmacokinetics | The Central Nervous System | Toxicology

April 27, 2012

How Do Drugs Get Into Cells? A Vicious Debate.

Email This Entry

Posted by Derek

So how do drug molecules (and others) get into cells, anyway? There are two broad answers: they just sort of slide in through the membranes on their own (passive diffusion), or they're taken up by pores and proteins built for bringing things in (active transport). I've always been taught (and believed) that both processes can be operating in most situations. If the properties of your drug molecule stray too far out of the usual range, for example, your cell activity tends to drop, presumably because it's no longer diffusing past the cell membranes. There are other situations where you can prove that you're hitching a ride on active transport proteins, by administering a known inhibitor of one of these systems to cells and watching your compound suddenly become inactive, or by simply overloading and saturating the transporter.
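
The saturation experiment works because the two mechanisms scale differently with concentration: passive flux grows roughly linearly with the concentration gradient, while a carrier follows Michaelis-Menten kinetics and tops out at its Vmax. A toy sketch, with made-up numbers, just to show the shape of the argument:

```python
def uptake_rates(conc_uM, perm=0.02, vmax=5.0, km=10.0):
    """Toy model: the passive term is linear in concentration,
    the carrier term follows Michaelis-Menten and saturates."""
    passive = perm * conc_uM
    carrier = vmax * conc_uM / (km + conc_uM)
    return passive, carrier

for c in (1, 10, 100, 1000):
    p, t = uptake_rates(c)
    print(f"{c:>5} uM   passive {p:8.2f}   carrier {t:5.2f}")
# At low concentration the carrier dominates; push the dose up and the
# carrier term flat-lines near Vmax while the passive term keeps climbing.
```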

There's another opinion, though, that's been advanced by Paul Dobson and Douglas Kell at Manchester, and co-workers. Their take is that carrier-mediated transport is the norm, and that passive diffusion is hardly important at all. This has been received with varying degrees of belief. Some people seem to find it a compelling idea, while others regard it as eccentric at best. The case was made a few years ago in Nature Reviews Drug Discovery, and again more recently in Drug Discovery Today:

All cells necessarily contain tens, if not hundreds, of carriers for nutrients and intermediary metabolites, and the human genome codes for more than 1000 carriers of various kinds. Here, we illustrate using a typical literature example the widespread but erroneous nature of the assumption that the ‘background’ or ‘passive’ permeability to drugs occurs in the absence of carriers. Comparison of the rate of drug transport in natural versus artificial membranes shows discrepancies in absolute magnitudes of 100-fold or more, with the carrier-containing cells showing the greater permeability. Expression profiling data show exactly which carriers are expressed in which tissues. The recognition that drugs necessarily require carriers for uptake into cells provides many opportunities for improving the effectiveness of the drug discovery process.

That's one of those death-or-glory statements: if it's right, a lot of us have been thinking about these things the wrong way, and missing out on some very important things about drug discovery as well. But is it? There's a rebuttal paper out in Drug Discovery Today that makes the case for the defense. It's by a long list of pharmacokinetics and pharmacology folks from industry and academia, and has the air of "Let's get this sorted out once and for all" about it:

Evidence supporting the action of passive diffusion and carrier-mediated (CM) transport in drug bioavailability and disposition is discussed to refute the recently proposed theory that drug transport is CM-only and that new transporters will be discovered that possess transport characteristics ascribed to passive diffusion. Misconceptions and faulty speculations are addressed to provide reliable guidance on choosing appropriate tools for drug design and optimization.

Fighting words! More of those occur in the body of the manuscript, phrases like "scientifically unsound", "potentially misleading", and "based on speculation rather than experimental evidence". Here's a rundown of the arguments, but if you don't read the paper, you'll miss the background noise of teeth being ground together.

Kell and Dobson et al. believe that cell membranes have more protein in them, and less lipid, than is commonly thought, which helps make their case for lots of protein transport/not a lot of lipid diffusion. But this paper says that their figures are incorrect and have been misinterpreted. Another K-D assertion is that artificial lipid membranes tend to have many transient aqueous pores in them, which make them look more permeable than they really are. This paper goes to some length to refute this, citing a good deal of prior art with examples of things which should have then crossed such membranes (but don't), and also finds fault with the literature that K-D used to back up their own proposal.

This latest paper then goes on to show many examples of non-saturable passive diffusion, as opposed to active transport, which can always be overloaded. Another big argument is over the agreement between different cell layer models of permeability. Two of the big ones are Caco-2 cells and MDCK cells, but (as all working medicinal chemists know) the permeability values from these two don't always agree, either with each other or with the situation in living systems. Kell and Dobson adduce this as showing the differences between the various transporters in these assays, but this rebuttal points out that there are a lot of experimental differences between literature Caco-2 and MDCK assays that can kick the numbers around. Their take is that the two assays actually agree pretty well, all things considered, and that if transporters were the end of the story, the numbers would be still farther apart.

The blood-brain barrier is a big point of contention between these two camps. This latest paper cites a large pile of literature showing that sheer physical properties (molecular weight, logP) account for most successful approaches to getting compounds into the brain, consistent with passive diffusion, while examples of using active transport are much more scarce. That leads into one of the biggest K-D points, which seems to be one of the ones that drives the existing pharmacokinetics community wildest: the assertion that thousands of transport proteins remain poorly characterized, and that these will come to be seen as the dominant players compared to passive mechanisms. The counterargument is that most of these, as far as we can tell to date, are selective for much smaller and more water-soluble substances than typical drug molecules (all the way from metal ions to things like glycerol and urea), and are unlikely to be important for most pharmaceuticals.

Relying on as-yet-uncharacterized transporters to save one's argument is a habit that really gets on the nerves of the Kell-Dobson critics as well - this paper calls it "pure speculation without scientific basis or evidence", which is about as nasty as we get in the technical literature. I invite interested readers to read both sides of the argument and make up their own minds. As for me, I fall about 80% toward the critics' side. I think that there are probably important transporters that are messing with our drug concentrations and that we haven't yet appreciated, but I just can't imagine that that's the whole story, nor that there's no such thing as passive diffusion. Thoughts?

Comments (37) + TrackBacks (0) | Category: Drug Assays | Pharma 101 | Pharmacokinetics

April 4, 2012

The Artificial Intelligence Economy?

Email This Entry

Posted by Derek

Now here's something that might be about to remake the economy, or (on the other robotic hand) it might not be ready to just yet. And it might be able to help us out in drug R&D, or it might turn out to be mostly beside the point. What the heck am I talking about, you ask? The so-called "Artificial Intelligence Economy". As Adam Ozimek says, things are looking a little more futuristic lately.

He's talking about things like driverless cars and quadrotors, and Tyler Cowen adds the examples of things like Apple's Siri and IBM's Watson, as part of a wider point about American exports:

First, artificial intelligence and computing power are the future, or even the present, for much of manufacturing. It’s not just the robots; look at the hundreds of computers and software-driven devices embedded in a new car. Factory floors these days are nearly empty of people because software-driven machines are doing most of the work. The factory has been reinvented as a quiet place. There is now a joke that “a modern textile mill employs only a man and a dog—the man to feed the dog, and the dog to keep the man away from the machines.”

The next steps in the artificial intelligence revolution, as manifested most publicly through systems like Deep Blue, Watson and Siri, will revolutionize production in one sector after another. Computing power solves more problems each year, including manufacturing problems.

Two MIT professors have written a book called Race Against the Machine about all this, and it appears to be sort of a response to Cowen's earlier book The Great Stagnation. (Here's an article of theirs in The Atlantic making their case).

One of the export-economy points that the book (and Cowen) bring up is that automation makes a country's wages (and labor costs in general) less of a factor in exports, once you get past the capital expenditure. And as the size of that expenditure comes down, it becomes easier to make that leap. One thing that means, of course, is that less-skilled workers find it harder to fit in. Here's another Atlantic article, from the print magazine, which looked at an auto-parts manufacturer with a factory in South Carolina (the whole thing is well worth reading):

Before the rise of computer-run machines, factories needed people at every step of production, from the most routine to the most complex. The Gildemeister (machine), for example, automatically performs a series of operations that previously would have required several machines—each with its own operator. It’s relatively easy to train a newcomer to run a simple, single-step machine. Newcomers with no training could start out working the simplest and then gradually learn others. Eventually, with that on-the-job training, some workers could become higher-paid supervisors, overseeing the entire operation. This kind of knowledge could be acquired only on the job; few people went to school to learn how to work in a factory.
Today, the Gildemeisters and their ilk eliminate the need for many of those machines and, therefore, the workers who ran them. Skilled workers now are required only to do what computers can’t do (at least not yet): use their human judgment.

But as that article shows, more than half the workers in that particular factory are, in fact, rather unskilled, and they make a lot more than their Chinese counterparts do. What keeps them employed? That calculation on what it would take to replace them with a machine. The article focuses on one of those workers in particular, named Maddie:

It feels cruel to point out all the Level-2 concepts Maddie doesn’t know, although Maddie is quite open about these shortcomings. She doesn’t know the computer-programming language that runs the machines she operates; in fact, she was surprised to learn they are run by a specialized computer language. She doesn’t know trigonometry or calculus, and she’s never studied the properties of cutting tools or metals. She doesn’t know how to maintain a tolerance of 0.25 microns, or what tolerance means in this context, or what a micron is.

Tony explains that Maddie has a job for two reasons. First, when it comes to making fuel injectors, the company saves money and minimizes product damage by having both the precision and non-precision work done in the same place. Even if Mexican or Chinese workers could do Maddie’s job more cheaply, shipping fragile, half-finished parts to another country for processing would make no sense. Second, Maddie is cheaper than a machine. It would be easy to buy a robotic arm that could take injector bodies and caps from a tray and place them precisely in a laser welder. Yet Standard would have to invest about $100,000 on the arm and a conveyance machine to bring parts to the welder and send them on to the next station. As is common in factories, Standard invests only in machinery that will earn back its cost within two years. For Tony, it’s simple: Maddie makes less in two years than the machine would cost, so her job is safe—for now. If the robotic machines become a little cheaper, or if demand for fuel injectors goes up and Standard starts running three shifts, then investing in those robots might make sense.

At this point, some similarities to the drug discovery business will be occurring to readers of this blog, along with some differences. The automation angle isn't as important, or not yet. While pharma most definitely has a manufacturing component (and how), the research end of the business doesn't resemble it very much, despite numerous attempts by earnest consultants and managers to make it so. From an auto-parts standpoint, there's little or no standardization at all in drug R&D. Every new drug is like a completely new part that no one's ever built before; we're not turning out fuel injectors or alternators. Everyone knows how a car works. Making a fundamental change in that plan is a monumental challenge, so the auto-parts business is mostly about making small variations on known components to the standards of a given customer. But in pharma - discovery pharma, not the generic companies - we're wrenching new stuff right out of thin air, or trying to.

So you'd think that we wouldn't be feeling the low-wage competitive pressure so much, but as the last ten years have shown, we certainly are. Outsourcing has come up many a time around here, and the very fact that it exists shows that not all of drug research is quite as bespoke as we might think. (Remember, the first wave of outsourcing, which is still very much a part of the business, was the move to send the routine methyl-ethyl-butyl-futile analoging out somewhere cheaper). And this takes us, eventually, to the Pfizer-style split between drug designers (high-wage folks over here) and the drug synthesizers (low-wage folks over there). Unfortunately, I think that you have to go the full reductio ad absurdum route to get that far, but Pfizer's going to find out for us if that's an accurate reading.

What these economists are also talking about is, I'd say, the next step beyond Moore's Law: once we have all this processing power, how do we use it? The first wave of computation-driven change happened because of the easy answers to that question: we had a lot of number-crunching that was being done by hand, or very slowly by some route, and we now had machines that could do what we wanted to do more quickly. This newer wave, if wave it is, will be driven more by software taking advantage of the hardware power that we've been able to produce.

The first wave didn't revolutionize drug discovery in the way that some people were hoping for. Sheer brute force computational ability is of limited use in drug discovery, unfortunately, but that's not always going to be the case, especially as we slowly learn how to apply it. If we really are starting to get better at computational pattern recognition and decision-making algorithms, where could that have an impact?

It's important to avoid what I've termed the "Andy Grove fallacy" in thinking about all this. I think that it is a result of applying first-computational-wave thinking too indiscriminately to drug discovery, which means treating it too much like a well-worked-out human-designed engineering process. Which it certainly isn't. But this second-wave stuff might be more useful.

I can think of a few areas: in early drug discovery, we could use help teasing patterns out of large piles of structure-activity relationship data. I know that there are (and have been) several attempts at doing this, but it's going to be interesting to see if we can do it better. I would love to be able to dump a big pile of structures and assay data points into a program and have it say the equivalent of "Hey, it looks like an electron-withdrawing group in the piperidine series might be really good, because of its conformational similarity to the initial lead series, but no one's ever gotten back around to making one of those because everyone got side-tracked by the potency of the chiral amides".
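
A very crude version of that pattern-spotting can be sketched already: group the SAR table by series and substituent, and flag combinations that look potent but have hardly been explored. This toy example (invented column names and data, nowhere near the "chiral amides" level of insight) just shows the flavor of what such software would have to do far better:

```python
import pandas as pd

# Hypothetical SAR table: one row per compound
sar = pd.DataFrame({
    "series":  ["piperidine", "piperidine", "piperidine", "amide", "amide"],
    "r_group": ["CF3", "OMe", "CF3", "Me", "Et"],
    "pIC50":   [7.8, 6.1, 8.0, 7.2, 7.4],
})

summary = (sar.groupby(["series", "r_group"])["pIC50"]
              .agg(["mean", "count"])
              .sort_values("mean", ascending=False))

# Potent combinations with only one or two examples are the
# "nobody ever got back around to it" candidates.
print(summary[summary["count"] <= 2])
```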

Software that chews through stacks of PK and metabolic stability data would be worth having, too, because there sure is a lot of it. There are correlations in there that we really need to know about, that could have direct relevance to clinical trials, but I worry that we're still missing some of them. And clinical trial data itself is the most obvious place for software that can dig through huge piles of numbers, because those are the biggest we've got. From my perspective, though, it's almost too late for insights at that point; you've already been spending the big money just to get the numbers themselves. But insights into human toxicology from all that clinical data, that stuff could be gold. I worry that it's been like the concentration of gold in seawater, though: really there, but not practical to extract. Could we change that?

All this makes me actually a bit hopeful about experiments like this one that I described here recently. Our ignorance about medicine and human biochemistry is truly spectacular, and we need all the help we can get in understanding it. There have to be a lot of important things out there that we just don't understand, or haven't even realized the existence of. That lack of knowledge is what gives me hope, actually. If we'd already learned what there is to know about discovering drugs, and were already doing the best job that could be done, well, we'd be in a hell of a fix, wouldn't we? But we don't know much, we're not doing it as well as we could, and that provides us with a possible way out of the fix we're in.

So I want to see as much progress as possible in the current pattern-recognition and data-correlation driven artificial intelligence field. We discovery scientists are not going to automate ourselves out of business so quickly as factory workers, because our work is still so hypothesis-driven and hard to define. (For a dissenting view, with relevance to this whole discussion, see here). It's the expense of applying the scientific method to human health that's squeezing us all, instead, and if there's some help available in that department, then let's have it as soon as possible.

Comments (32) + TrackBacks (0) | Category: Drug Assays | Drug Development | Drug Industry History | In Silico | Pharmacokinetics | Toxicology

May 10, 2010

Unlovely Polyphenols

Email This Entry

Posted by Derek

Here's a new paper from the folks at the Burnham Institute and UCSD on a new target for vaccinia virus. They're going after a virulence factor (N1L) through computational screening, which is a challenge, since this is a protein-protein interaction.

They pulled out a number of structures, which have some modest activity in cell infection assays. In addition, they showed through calorimetry that the compounds do appear to be affecting the target protein, specifically its equilibrium between monomeric and oligomeric forms. But the structures of their best hits. . .well, here's the table. You can ignore compounds 6 and 8; they show up as cytotoxic. But the whole list is pretty ghastly, at least to my eyes.

These sorts of highly aromatic polyphenol structures have two long traditions in medicinal chemistry: showing activity in assays, for the first part, and not being realizable as actual drugs, for the second. There's no doubt that they can do a lot of things; it's just that getting them to do them in a real-world situation is not trivial. Part of the problem is specificity (and associated toxicity) and part of it is pharmacokinetics. As you'd imagine, these compounds can have rather funky clearance behavior, what with all those phenols.

So I'd regard these as proof-of-concept compounds that validate N1L as a target. I think that we'll need to wait for someone to format up an assay for high-throughput (non-virtual) screening to see if something more tractable comes up. Either that, or rework the virtual screens on the basis that we've seen enough polyphenols come up on this target already. . .

Note: readers of the paper will note that our old friend resveratrol turns up as an active compound as well. It's very much in the polyphenol tradition; make of that what you will.

Comments (25) + TrackBacks (0) | Category: In Silico | Infectious Diseases | Pharmacokinetics

April 6, 2010

A Brief and Not At All Intemperate Evaluation of the Current Literature

Email This Entry

Posted by Derek

In keeping with my Modest Literature Proposal from earlier this year, I would like to briefly point out a Journal of Medicinal Chemistry paper on potential Alzheimer's therapies. Whose lead compound has a nine-carbon alkyl chain in the middle of it. And weighs 491. And has two quaternary nitrogens. Which structural features will, in all likelihood, lead to said compound demonstrating roughly this amount of blood-brain barrier penetration, assuming it reaches sufficient blood levels to get that far. That is all.

Comments (22) + TrackBacks (0) | Category: Alzheimer's Disease | Pharmacokinetics | The Scientific Literature

April 1, 2010

What Do Nanoparticles Really Look Like?

Email This Entry

Posted by Derek

We're all going to be hearing a lot about nanoparticles in the next few years (some may feel as if they've already heard quite enough, but there's nothing to be done about that). The recent report of preliminary siRNA results using them as a delivery system will keep things moving along with even more interest. So it's worth checking out this new paper, which illustrates how we're going to have to think about these things.

The authors show that it's not necessarily the carefully applied coat proteins of these nanoparticles that are the first thing a cell notices. Rather, it's the second sphere of endogenous proteins that end up associated with the particle, which apparently can be rather specific and persistent. The authors make their case with admirable understatement:

The idea that the cell sees the material surface itself must now be re-examined. In some specific cases the cell receptor may have a higher preference for the bare particle surface, but the time scale for corona unbinding illustrated here would still typically be expected to exceed that over which other processes (such as nonspecific uptake) have occurred. Thus, for most cases it is more likely that the biologically relevant unit is not the particle, but a nano-object of specified size, shape, and protein corona structure. The biological consequences of this may not be simple.

Update: fixed this post by finally adding the link to the paper!

Comments (4) + TrackBacks (0) | Category: Biological News | Pharmacokinetics

March 25, 2010

Nanoparticles and RNA: Now In Humans

Email This Entry

Posted by Derek

In recent years, readers of the top-tier journals have been bombarded with papers on nanotechnology as a possible means of drug delivery. At the same time, there's been a tremendous amount of time and money put into RNA-derived therapies, trying to realize the promise of RNA interference for human therapies. Now we have what I believe is the first human data combining both approaches.

Nature has a paper from CalTech, UCLA, and several other groups with the first data on a human trial of siRNA delivered through targeted nanoparticles. This is only the second time siRNA has been tried systemically on humans at all. Most of the previous clinical work has involved direct injection of various RNA therapies into the eye (which is a much less hostile environment than the bloodstream), but in 2007, a single Gleevec-resistant leukaemia patient was dosed in a nontargeted fashion.

In this study, metastatic melanoma patients, a population that is understandably often willing to put themselves out at the edge of clinical research, were injected with engineered nanoparticles from Calando Pharmaceuticals, containing siRNA against the ribonucleotide reductase M2 (RRM2) target, which is known to be involved in malignancy. The outside of the particles contained a protein ligand to target the transferrin receptor, an active transport system known to be upregulated in tumor cells. And this was to be the passport to deliver the RNA.

A highly engineered system like this addresses several problems at once: how do you keep the RNA you're dosing from being degraded in vivo? (Wrap it up in a polymer - actually, two different ones in spherical layers). How do you deliver it selectively to the tissue of interest? (Coat the outside with something that tumor cells are more likely to recognize). How do you get the RNA into the cells once it's arrived? (Make that recognition protein something that gets actively imported across the cell membrane, dragging everything else along with it). This system had been tried out in models all the way up to monkeys, and in each case the nanoparticles could be seen inside the targeted cells.

And that was the case here. The authors report biopsies from three patients, pre- and post-dosing, that show uptake into the tumor cells (and not into the surrounding tissue) in two of the three cases. What's more, they show that a tissue sample has decreased amounts of both the targeted messenger RNA and the subsequent RRM2 protein. Messenger RNA fragments showed that this reduction really does seem to be taking place through the desired siRNA pathway (there's been a lot of argument over this point in the eye therapy clinical trials).

It should be noted, though, that this was only shown for one of the patients, in which the pre- and post-dosing samples were collected ten days apart. In the other responding patient, the two samples were separated by many months (making comparison difficult), and the patient that showed no evidence of nanoparticle uptake also showed, as you'd figure, no differences in their RRM2. Why Patient A didn't take up the nanoparticles is as yet unknown, and since we only have these three patients' biopsies, we don't know how widespread this problem is. In the end, the really solid evidence is again down to a single human.

But that brings up another big question: is this therapy doing the patients any good? Unfortunately, the trial results themselves are not out yet, so we don't know. That two-out-of-three uptake rate, although a pretty small sample, could well be a concern. The only between-the-lines inference I can get is this: the best data in this paper is from patient C, who was the only one to do two cycles of nanoparticle therapy. Patient A (who did not show uptake) and patient B (who did) had only one cycle of treatment, and there's probably a very good reason why. These people are, of course, very sick indeed, so any improvement will be an advance. But I very much look forward to seeing the numbers.

Comments (8) + TrackBacks (0) | Category: Biological News | Cancer | Clinical Trials | Pharmacokinetics

March 18, 2010

Make Your Compound Go Away

Email This Entry

Posted by Derek

I'm not sure that the term will catch on, but this new paper proposes "antedrug" to describe a compound that's deliberately designed to be cleaved quickly to something inactive. I see where they're coming from - reverse of "prodrug" - but in spoken English it's too close to "anti-drug". Hasn't someone come up with this concept before? Perhaps they didn't bother to name it. . .

Update: Someone at AZ sends along this earlier reference to "antedrug".

Comments (23) + TrackBacks (0) | Category: Pharmacokinetics

March 2, 2010

Why You Don't Want to Make Death-Star-Sized Drugs

Email This Entry

Posted by Derek

I was just talking about greasy compounds the other day, and reasons to avoid them. Right on cue, there's a review article in Expert Opinion on Drug Discovery on lipophilicity. It has some nice data in it, and I wanted to share a bit of it here. It's worth noting that you can make your compounds too polar, as well as too greasy. Check these out - the med-chem readers will find them interesting, and who knows, others might, too:
MW350%20graph%20jpeg.jpg
MW500%20graph%20jpeg.jpg
So, what are these graphs? They show how well compounds cross the membranes of Caco-2 cells, a standard assay for permeability. These cells (derived from human colon tissue) have various active-transport pumps going (in both directions), and you can grow them in a monolayer, expose one side to a solution of drug substance, and see how much compound appears on the other side and how quickly. (Of course, good old passive diffusion is operating, too - a lot of compounds cross membranes by just soaking on through them).
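
For reference, the number that usually comes out of this assay is an apparent permeability (Papp): the rate at which compound appears on the receiver side, normalized by membrane area and the starting donor concentration. A minimal sketch of that arithmetic, with illustrative numbers:

```python
def papp_cm_per_s(dq_dt_nmol_per_s, area_cm2, donor_conc_uM):
    """Apparent permeability: Papp = (dQ/dt) / (A * C0).
    dQ/dt in nmol/s, area in cm^2, donor concentration in uM
    (1 uM = 1 nmol/mL = 1 nmol/cm^3)."""
    c0_nmol_per_cm3 = donor_conc_uM
    return dq_dt_nmol_per_s / (area_cm2 * c0_nmol_per_cm3)

# Illustrative: 1e-4 nmol/s across 1 cm^2 from a 10 uM donor well
papp = papp_cm_per_s(0.0001, 1.0, 10.0)
print(f"{papp:.1e} cm/s  ({papp * 1e7:.0f} nm/s)")  # 1.0e-05 cm/s (100 nm/s)
```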

Now, I have problems with extrapolating Caco-2 data too vigorously to the real world - if you have five drug candidates from the same series and want to rank order them, I'd suggest getting real animal data rather than rely on the cell assay. The array of active transport systems (and their intrinsic activity) may well not match up closely enough to help you - as usual, cultured cell lines don't necessarily match reality. But as a broad measure of whether a large set of compounds has a reasonable chance of getting through cell membranes, the assay's not so bad.

First, we have a bunch of compounds with molecular weights between 350 and 400 (a very desirable space to occupy). The Y axis is the partitioning between the two sides of the cells, and the X axis is LogD, a standard measure of compound greasiness. That thin blue line is the cutoff for 100 nm/sec of compound transport, so the green compounds above it travel across the membrane well, and the red ones below it don't cross so readily. You'll note that as you go to the left (more and more polar, as measured by LogD), the proportion of green compounds gets smaller and smaller. They'd rather hang out in the water than dive through any cell membranes, thanks.

So if you want a 50% chance of hitting that 100 nm/sec transport level, then you don't want to go much more polar than a LogD of 2. But that's for compounds in the 350-400 weight range - how about the big heavyweights? Those are shown in the second graph, for compounds greater than 500. Note that the distribution has scrunched disturbingly. Now almost everything is lousy, and if you want that 50% chance of good penetration, you're going to have to get up to a logD of at least 4.5.
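
Read crudely, those two graphs boil down to a rule of thumb like the sketch below - just a restatement of the 50/50 cutoffs above, with the in-between region filled in as a guess rather than anything taken from the paper:

```python
def decent_permeability_odds(mol_weight, logd):
    """Rough read of the graphs above: the LogD needed for ~50/50 odds
    of beating 100 nm/s in Caco-2 climbs steeply with molecular weight."""
    if mol_weight <= 400:
        return logd >= 2.0   # the 350-400 MW band
    if mol_weight > 500:
        return logd >= 4.5   # the >500 MW band - and now other problems start
    return logd >= 3.0       # in between: pure interpolation, a guess

print(decent_permeability_odds(380, 1.5))  # False: small but too polar
print(decent_permeability_odds(550, 3.0))  # False: big and not greasy enough
```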

That's not too good, because you're always fighting a two-front war here. If you make your compounds that greasy (or more) to try to improve their membrane-crossing behavior, you're opening yourself up (as I said the other day) to more metabolic clearance and more nonspecific tox, as your sticky compounds glop onto all sorts of things in vivo. (They'll be fun to formulate, too). Meanwhile, if you dip down too far into that really-polar left-hand side, crossing your fingers for membrane crossing, you can slide into the land of renal clearance, as the kidneys vacuum out your water-soluble wonder drug and give your customers very expensive urine.

But in general, you have more room to maneuver in the lower molecular weight range. The humungous compounds tend to not get through membranes at reasonable LogD values. And if you try to fix that by moving to higher LogD, they tend to get chewed up or do unexpectedly nasty things in tox. Stay low and stay happy.

Comments (24) + TrackBacks (0) | Category: Drug Assays | Pharma 101 | Pharmacokinetics

November 28, 2009

Recommended Books For Medicinal Chemists, Part One

Email This Entry

Posted by Derek

I asked recently for suggestions on the best books on med-chem topics, and a lot of good ideas came in via the comments and e-mail. Going over the list, the most recommended seem to be the following:

For general medicinal chemistry, you have Bob Rydzewski's Real World Drug Discovery: A Chemist's Guide to Biotech and Pharmaceutical Research. Many votes also were cast for Camille Wermuth's The Practice of Medicinal Chemistry. For getting up to speed, several readers recommend Graham Patrick's An Introduction to Medicinal Chemistry. And an older text that has some fans is Richard Silverman's The Organic Chemistry of Drug Design and Drug Action.

Process chemistry is its own world with its own issues. Recommended texts here are Practical Process Research & Development by Neal Anderson and Process Development: Fine Chemicals from Grams to Kilograms by Stan Lee (no, not that Stan Lee) and Graham Robinson.

Case histories of successful past projects are found in Drugs: From Discovery to Approval by Rick Ng and also in Walter Sneader's Drug Discovery: A History.

Another book that focuses on a particular (important) area of drug discovery is Robert Copeland's Evaluation of Enzyme Inhibitors in Drug Discovery.

For chemists who want to brush up on their biology, readers recommend Terrence Kenakin's A Pharmacology Primer, Third Edition: Theory, Application and Methods and Molecular Biology in Medicinal Chemistry by Nogrady and Weaver.

Overall, one of the most highly recommended books across the board comes from the PK end of things: Drug-like Properties: Concepts, Structure Design and Methods: from ADME to Toxicity Optimization by Kerns and Di. For getting up to speed in this area, there's Pharmacokinetics Made Easy by Donald Birkett.

In a related field, the standard desk reference for toxicology seems to be Casarett & Doull's Toxicology: The Basic Science of Poisons. Since all of us make a fair number of poisons (as we eventually discover), it's worth a look.

There's a first set - more recommendations will come in a following post (and feel free to nominate more worthy candidates if you have 'em).

Comments (21) + TrackBacks (0) | Category: Book Recommendations | Drug Development | Life in the Drug Labs | Pharmacokinetics | The Scientific Literature | Toxicology

June 15, 2009

Ugliness Defined

Email This Entry

Posted by Derek

Yesterday's post on so-called "ugly" molecules seems to have touched a few nerves. Perhaps I should explain my terms, since ugliness is surely in the eye of the beholder. I'm not talking about particular functional groups as much as I'm talking about the whole package.

First off, a molecule that does what it's supposed to do in vivo is (by my definition) not truly ugly. The whole point of our job as medicinal chemists is to make active compounds - preferably with only the activity that we want - and if that's been accomplished there can be no arguing. Of course, "accomplished" has different meanings at different stages of development. Very roughly, the mileposts (for those of us in discovery research) are:

1. Hitting the target in vitro.
2. Showing selectivity in vitro.
3. Showing blood levels in vivo.
4. Showing activity in vivo.
5. No tox liabilities in vivo.

And these all have their gradations. My point is that if you've made it through these, at least to a reasonable extent, your molecule has already distinguished itself from the herd. The problem is that a lot of structures will fly through the first couple of levels (the in vitro ones), but have properties that will make it much harder for them to get the rest of the way. High molecular weight, notable lack of polarity (high logP), and notable lack of solubility are three of the most important warning signs, and those are what (to me) make an ugly molecule, not some particular functional group.

My belief is that, other things being equal, you should guard against making things that have trouble in these areas. You may well find yourself being forced (by the trends of your project) into one or more of them; that happens all the time, unfortunately. But you shouldn't go there if you don't have to. It's also true that there are molecules that have made it all the way through, that are out there on the market and still have these liabilities. But that shouldn't be taken as a sign that you should go the same route.

Ars longa, vita brevis. There's only so much time and so much money for a given project, and your time is best spent working in the space that has the best chance of delivering a drug. A 650 molecular weight compound with five trifluoromethyl groups is not inhabiting that space. It's not impossible that such a compound will make it, but I think we can all agree that its chances are lower compared to something smaller and less greasy. If the only thing you can get to work is a whopper like that, well, good luck to all concerned. But we have to depend on luck too much already in this business, and there's no reason to bring in more.

Comments (13) + TrackBacks (0) | Category: Drug Development | Life in the Drug Labs | Pharmacokinetics

June 9, 2009

Instant Med-Chem Wisdom

Email This Entry

Posted by Derek

I didn't note it here when it came out last year, but I wanted to recommend this paper to all the readers who are medicinal chemists. It's an effort by M. Paul Gleeson of GSK to generalize some rules from huge piles of oral dosing data in the company's files. It's all boiled down to a set of charts, for different classes of compounds (neutral, acidic, basic, and zwitterionic), and you can see the effects of changing molecular weight and/or polarity on things like bioavailability, potential for hERG problems, clearance, etc.

There are no major surprises in the charts. But it's very useful to have all these "rules of thumb" in one spot, and to have them backed up by plenty of data. For experienced medicinal chemists, it's a distillation of everything that we should have been learning. And for those starting out, it's a way to get a fast understanding of what matters when you're making new structures. Check it out!

Update: for a much more sceptical take, see here.

Comments (4) + TrackBacks (0) | Category: Life in the Drug Labs | Pharmacokinetics

June 2, 2009

A Deuterium Deal

Email This Entry

Posted by Derek

Well, there's someone who certainly believes in the deuterated-drug idea! GlaxoSmithKline has announced today that they've signed a deal with Concert Pharmaceuticals to develop these. There's a $35 million payment upfront, which I'm sure will be welcome in this climate, and various milestone and royalty arrangements from there on out. I know that the press story says that it's a "potential billion dollar deal", but you have to make a useless number of assumptions to arrive at that figure. Let's just say that the amount will be somewhere between that billion-dollar figure and. . .well, the $35 million that Glaxo's just put up.

Where things will eventually land inside that rather wide range is impossible to say. No one's taken such a compound all the way through development, and every one of them is going to be different. (Deuterium might be a good idea, but it ain't magic.) It looks like the first compound up for evaluation will be an HIV protease inhibitor, CTP-518, which is a deuterated version of someone's existing compound - Concert has filed patent applications on deuterated versions of both darunavir (WO2009055006) and atazanavir (WO2008156632). The hope is that CTP-518 will have an improved enough metabolic profile to eliminate the need to add ritonavir into the drug cocktail.

The company is also providing deuterated versions of three of GSK's own pipeline compounds for evaluation, which is interesting, since that's the sort of thing that Glaxo could do itself. In fact, that's one of the key points to the whole deuterated-compound idea: the window of opportunity. Deuteration isn't difficult chemistry, and the applications for it in improving PK and tox profiles are pretty obvious (see below). It's a good bet that drug company patent applications will henceforth include claims (and exemplified compounds) to make sure that deuterated versions of drug candidates can't be poached away by someone else. This strategy has a limited shelf life, but it's long enough to be potentially very profitable indeed.

One more note about that word "obvious". Now that people are raising all kinds of money and interest with the idea, sure, it looks obvious. And I'm sure that it's a thought that many people have had before - and then said "Nah, that's too funny-sounding. Might not work. And besides, you might not be able to patent it. And besides, if it were that good an idea, someone else would have already done it. There must be a good reason why no one's done it, you know". Getting up the nerve to try these things, that's the hard part. Roger Tung and Concert (and the other players in this field) deserve congratulations for not being afraid of the obvious.

Comments (25) + TrackBacks (0) | Category: Business and Markets | Drug Development | Infectious Diseases | Pharmacokinetics | Who Discovers and Why

February 17, 2009

Heavy Atoms, Heavy Profits?

Email This Entry

Posted by Derek

Carbon 12, nitrogen 14 – for that matter, hydrogen 1. Everyone who’s had to study even a bit of chemistry has had to learn the molecular weights of the elements, figure molecular weights from formulas, and so on. But these numbers aren’t quite as round and even as they look, and the consequences of that are sometimes surprising. And at the moment, at least three companies are trying to turn the whole idea into a huge amount of money.

My scientific audience will have guessed immediately that I’m talking about isotopes (although some of them may well be wondering where the pile of money comes into it). For those who don’t make a living at this sort of thing and have put such topics out of their minds, it’s the number of protons in an atom’s nucleus (the atomic number) that determines what sort of element it is. Carbon, for example, always has six protons. But there are neutrons in there, too, and those can vary a bit. Six protons and six neutrons gives you a nucleus of carbon-12, which is the most common. But one out of every hundred or so carbon atoms has seven neutrons instead of six: C-13. That’s a perfectly stable isotope of carbon, and is much beloved by chemists for its behavior in NMR experiments. If you push that neutron count too far, though, you get unstable radioactive nuclei. That’s where the famous carbon-14 comes into the picture (six protons, eight neutrons). You can have carbon-11, too, although it’s pretty hot stuff. Hydrogen, for its part, has the usual one-proton nucleus in its most common form, a one-proton-one-neutron stable form called deuterium, and a radioactive form with two neutrons called tritium (found in isotope labs and the innards of hydrogen bombs).

Radioactive isotopes have a long history in medicine and biochemistry, both as therapeutic agents (for cancer) and as tracers. But what about stable isotopes? Until recent years, not as much. But modern mass spectrometry machines are so good at what they do that they’ve eaten into a lot of the applications that used to be reserved for radioactive isotopes – more on that in another blog post; there are some ingenious tricks there. And those three companies I mentioned are trying to take advantage of yet another property, known as the kinetic isotope effect.

Imagine a bond between a hydrogen and a carbon as being between two metal balls, one of them twelve times as heavy as the other, connected by a spring. This is about as simplistic a picture of a carbon-hydrogen bond as you could possibly come up with, but for this purpose that model works disconcertingly well. Imagine then replacing the smaller ball with one that weighs twice as much as the original one; that’s a replacement of hydrogen with deuterium. Now, how will the behavior of that springy system change?

Well, that’s sophomore physics, weights and springs, and that’ll tell you that it’s now harder to twang the second system around. We see that exact effect in chemistry. A carbon-deuterium bond breaks about six or seven times slower than a carbon-hydrogen bond under room-temperature conditions. So where exactly is the big money in this effect?
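
Before we get to the money, here's where that six-or-sevenfold figure comes from, as a back-of-the-envelope sketch. It assumes a textbook C-H stretch near 2900 cm-1 and the crudest possible treatment (the full zero-point energy difference is lost on the way to the transition state) - an illustration of the ball-and-spring argument, not a calculation on any real molecule:

```python
import math

h  = 6.626e-34    # Planck constant, J*s
c  = 2.998e10     # speed of light, cm/s
kB = 1.381e-23    # Boltzmann constant, J/K
T  = 298.0        # room temperature, K

nu_CH = 2900.0    # typical C-H stretch in wavenumbers (assumed value)

# Reduced masses (amu) for the C-H and C-D ball-and-spring pairs
mu_CH = 12.0 * 1.0 / (12.0 + 1.0)
mu_CD = 12.0 * 2.0 / (12.0 + 2.0)

# Same spring, heavier ball: the frequency scales as 1/sqrt(reduced mass)
nu_CD = nu_CH * math.sqrt(mu_CH / mu_CD)

# Zero-point energy difference per molecule, assumed fully lost at the
# transition state - the simplest possible picture of the isotope effect
delta_zpe = 0.5 * h * c * (nu_CH - nu_CD)

kie = math.exp(delta_zpe / (kB * T))
print(f"C-D stretch ~{nu_CD:.0f} cm^-1, predicted kH/kD ~{kie:.1f}")
# comes out around 6-7, right where the observed primary isotope effect sits
```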

Consider what happens to a drug when it’s ingested. Through the gut wall it goes, into the hepatic portal vein, and directly into that vast shredder we know as the liver. Various enzymes go to work tearing your unrecognized drug structure apart, the better to sluice it out through the kidneys as quickly as possible. And there’s the opportunity: a great many of those enzymatic reactions involve breaking carbon-hydrogen bonds. What if they were deuteriums instead?

That's what Auspex, Protia, and Concert Pharmaceuticals are all working on. They're taking existing drugs, whose metabolic fates are known, and battening their structures down with deuterium atoms in hopes of improving their half-lives and general behavior. And thus far, the idea seems to be working out. Auspex announced last fall that they'd seen good results (PDF) in the clinic with a deuterated version of venlafaxine (brand name Effexor, a well-known antidepressant). Concert, for their part, has announced that they've improved the antibiotic linezolid, sold as Zyvox. Protia - well, as far as I can see, Protia has been very quietly filing patents on deuterated versions of every big-selling drug that they can think of. What they're doing in the lab seems to still be under wraps.

Is this going to work? Good question. To a first approximation, you'd think it probably would, particularly for drugs whose main liabilities are poor pharmacokinetics (or side effects driven by a particular metabolite). But there are complications. For one thing, deuterium is not completely innocuous in vivo. I strongly doubt that the dosages of deuterated pharmaceuticals could present any kind of problem, but if you load up a higher organism with exchangeable deuterium, trouble ensues. For humans, it would seem that you could, in theory, go a week or so on a few liters a day of straight deuterated water before you'd have to worry, which is nonetheless an experiment that I would strongly discourage. So the amount of deuterium picked up through metabolism of a prescription drug should have no effect - but there's always the possibility that the FDA, in its risk-averse mode, might make you jump through some extra hoops to prove that.

Another (much more real) risk is that the whole strategy will burn itself out. Clearly, the existing startups are working off the fact that no one has traditionally bothered to claim deuterated versions of their patented compounds. That is surely already changing, and if something hits the market it'll change big-time, reminiscent of Sepracor's old business model of grabbing unclaimed metabolites and enantiomers. And, of course, the three companies in this space are surely already throwing elbows into each other's IP space.

But there's still a window of opportunity, and these folks are going for it. Isotope effects could end up being rather more immediately valuable than anyone ever knew. . .

Comments (32) + TrackBacks (0) | Category: Drug Development | Pharmacokinetics

February 11, 2009

A Med-Chem Book Recommendation

Email This Entry

Posted by Derek

As per the comments to the last post, this book, Drug-like Properties: Concepts, Structure Design and Methods: from ADME to Toxicity Optimization, looks like a very nice overview of these issues for the practicing medicinal chemist. From what I've seen of it, there's a lot of you-need-to-know-this information for people getting up to speed, and it also looks to have collected a lot of more advanced topics into one convenient place. If this is your thing, give it a look.

Comments (6) + TrackBacks (0) | Category: Book Recommendations | Drug Development | Pharmacokinetics | Toxicology

December 1, 2008

Prodrugs: How the Pros Do It?

Email This Entry

Posted by Derek

I’m going to write about a question that actually came up among several of us at the train station this morning. I’m on a route that takes a lot of people into Cambridge, so we have a good proportion of pharma/biotech people on board. And today we got to talking about prodrugs: like ’em or hate ’em?

For those not in the business, a prodrug is a masked form of an active drug, designed to be activated once it’s dosed. That’s generally done by allowing the normal metabolic processes of the body to clip some group off, revealing the real drug. Various esters are the most common prodrugs, since that’s about the easiest group to have fall apart on you. (Enalapril / enalaprilat is a classic example, and aspirin is an even more classic one).

And esters illustrate another point about prodrugs: no one develops them unless they have to, as far as I’m concerned. After all, if your compound works fine in its native form, why get fancy? No, I think you turn to the prodrug strategy when there’s something wrong. Maybe the active form of the drug isn’t well absorbed from the gut, or has too short a half-life in the blood, or doesn’t distribute to the right organs. The differences in these properties between carboxylic acids and their esters can be particularly dramatic.

There are other ways to do it. Some compounds are oxidized by liver enzymes to turn into their active forms, for example. But all of these ideas suffer from several complications, which is why I’ve always regarded them as acts of desperation. For one thing, all these metabolic pathways vary a good deal between species. That’s a problem for any drug development effort, of course, but you’ve doubled those headaches (at least) by working with a prodrug. Now you have to wonder, when you finally get to humans, if the conversion of the initial compound will take place to the same extent, as well as about the clearance of the active drug (and, for that matter, the non-productive clearance of the prodrug molecule itself). For a development group, taking on a prodrug can be like taking on two drugs at the same time.

There have been all sorts of ingenious ideas along these lines over the years. It’s been my impression that delivery methods of this sort have been more popular among academic medicinal chemistry groups than they have in industry, to be honest. There are all sorts of schemes for targeting active substances to particular organs, or for getting them into hard-to-reach areas like the brain through use of exotic prodrug groups. Most of these don’t survive exposure to the real world, but I can’t turn up my nose at them, either, because these are all things that we would like to be able to do in this business. If weird ideas don’t get tried, we’ll never find out if any of them actually work.

And there have been some real successes in the prodrug field, and it’s always an idea that comes up whenever a lead compound series shows some undesirable absorption or excretion. I’ve broached the topic a few times myself on past projects. But every time, we’ve been able to solve the problem by less drastic means – a new formulation, a salt form, or by just plain old going to a different compound in the end. If you can do it by some combination of those, I'd say you're probably better off in the end. (For those who are taking the plunge, you can probably learn about as much as can be learned from the literature here). Here's an even more recent review.

Comments (12) + TrackBacks (0) | Category: Drug Development | Pharmacokinetics

October 17, 2008

Down The Chute in Phase III

Email This Entry

Posted by Derek

Here's a good article over at the In Vivo Blog on this year's crop of expensive Phase III failures. They've mostly been biotech drugs (vaccines and the like), but it's a problem everywhere. As In Vivo's Chris Morrison puts it:

Look, drugs fail. That happens because drug development is very difficult. Even Phase III drugs fail, probably more than they used to, thanks to stiffer endpoints and attempts to tackle trickier diseases. Lilly Research Laboratory president Steve Paul lamented at our recent PSA meeting that Phase III is "still pretty lousy," in terms of attrition rates -- around 50%. And not always for the reasons you'd expect. "You shouldn't be losing Phase III molecules for lack of efficacy," he said, but it's happening throughout the industry.

Ah, but efficacy has come up in the world as a reason for failure. Failures due to pharmacokinetics have been going down over the years as we do a better job in the preclinical phase (and as we come up with more formulation options). Tox failures are probably running at their usual horrifying levels; I don't think that those have changed, because we don't understand toxicology much better (or worse) than we ever did.

But as we push into new mechanisms, we're pushing into territory that we don't understand very well. And many of these things don't work the way that we think that they do. And since we don't have good animal models - see yesterday's post - we're only going to find out about these things later on in the clinic. Phase II is where you'd expect a lot of these things to happen, but it's possible to cherry-pick things in that stage to get good enough numbers to continue. So on you go to Phase III, where you spend the serious money to find out that you've been wrong the whole time.

So we get efficacy failures (and we've been getting them for some time - see this piece from 2004). And we're getting them in Phase III because we're now smart and resourceful enough to worm our way through Phase II too often. The cure? To understand more biology. That's not a short-term fix - but it's the only one that's sure to work. . .

Comments (16) + TrackBacks (0) | Category: Clinical Trials | Drug Development | Drug Industry History | Pharmacokinetics | Toxicology

August 29, 2008

Sticky Containers, Vanishing Drugs

Email This Entry

Posted by Derek

There’s no end to the variables that can kick your data around in drug discovery. If you concentrate completely on all the things that could go wrong, though, you’ll be too terrified to run any useful experiments. You have to push on, but stay alert. It’s like medical practice: most of the time you don’t have to worry about most of the possibilities, but you need to recognize the odd ones when they show up.

One particular effect gets rediscovered from time to time, and I’ve just recently had to take it into account myself: the material that your vials and wells are made out of. That’s generally not a consideration for organic chemists, since we work mostly in glass, and on comparatively large scale. There are some cases where glass (specifically the free OH groups on its surface) can mess up really sensitive compounds, but in drug discovery we try not to work with things that are that temperamental.

But when you move to the chemistry/biology interface, things change. Material effects are pretty well-known among pharmacokinetics people, for example, although not all medicinal chemists are aware of them. The reason is that PK samples (blood, plasma, tissue) tend to have very small amounts of the desired analyte in them, inside a sea of proteins and other gunk. If you’re going down to nanograms (or less) of the substance of interest, it doesn’t take much to mess up your data.

And as it turns out, different sorts of plastics will bind various compounds to widely varying degrees. Taxol (OK, taxotere) is a notorious example, sticking to the sides of various containers like crazy. And you never know when you're going to run into one of those yourself. I know of a drug discovery project whose PK numbers were driving everyone crazy (weirdly variable, and mostly suggesting physically impossible levels of drug clearance) until they figured out that this was the problem. If you took a stock solution of the compound and ran it through a couple of dilutions while standing in the standard plastic vials, nothing was left. Wash the suckers out with methanol, though, and voila.
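
A toy calculation shows why this only bites at PK-level concentrations. The numbers are invented - imagine each plastic vial quietly soaking up a few nanograms of compound before anything stays in solution - but the scale of the effect is the whole point:

```python
# Toy model: each transfer vial adsorbs a fixed few nanograms of compound
# (an invented capacity, purely for illustration) before the rest is diluted onward.
def serial_dilution(start_ng, dilution_factor=10, steps=3, loss_per_vial_ng=5.0):
    amount = start_ng
    for _ in range(steps):
        amount = max(0.0, amount - loss_per_vial_ng)  # the walls take their cut
        amount /= dilution_factor                     # then on to the next vial
    return amount

print(serial_dilution(100_000))  # 100 ug stock: ~99 ng survives vs. the 100 ng expected
print(serial_dilution(100))      # 100 ng PK-level sample: 0.0 ng - "nothing was left"
```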

Here's a paper which suggests that polystyrene can be a real offender, and from past experience I can tell you to look out for polypropylene, especially the cheap stuff. You won't notice anything until you get way down there to the tiny amounts - but if that's where you're working, you'd better keep it in mind.

Comments (24) + TrackBacks (0) | Category: Drug Development | Pharmacokinetics

March 25, 2008

Getting To Lyrica

Email This Entry

Posted by Derek

There’s an interesting article in Angewandte Chemie by Richard Silverman of Northwestern, on the discovery of Lyrica (pregabalin). It’s a rare example of a compound that came right out of academia to become a drug, but the rest of its story is both unusual and (in an odd way) typical.

The drug is a very close analog of the neurotransmitter GABA. Silverman’s lab made a series of compounds in the 1980s to try to inhibit the aminotransferase enzyme (GABA-AT) that breaks GABA down in the brain, as a means of increasing its levels to prevent epileptic seizures. They gradually realized, though, that their compounds were also hitting another enzyme, glutamic acid decarboxylase (GAD), which actually synthesizes GABA. Shutting down the neurotransmitter’s breakdown was a good idea, but shutting down its production at the same time clearly wasn’t going to work out.

So in 1988 a visiting Polish post-doc (Ryszard Andruszkiewicz) made a series of 3-alkyl GABA and glutamate analogs as another crack at a selective compound. None of them were particularly good inhibitors – in fact, most of them were substrates for GABA-AT, although not very good ones. But (most weirdly) they actually turned out to activate GAD, which would also work just fine to raise GABA levels. Northwestern shopped the compounds around because of this profile, and Parke-Davis took them up on it. One enantiomer of the 3-isobutyl GABA analog turned out to be a star performer in the company’s rodent assay for seizure prevention, and attempts to find an even better compound were fruitless. The next few years were spent on toxicity testing and optimizing the synthetic route.

The IND paperwork to go into humans was filed in 1995, and clinical trials continued until 2003. The FDA approved the drug in 2004, and no, that’s not an unusual timeline for drug development, especially for a CNS compound. And there you’d think the story ends – basic science from the university is translated into a big-selling drug, with the unusual feature of an actual compound from the academic labs going all the way. Since I’ve spent a good amount of time here claiming that Big Pharma doesn’t just rip off NIH-funded research, you’d think that this would be a good counterexample.

But, as Silverman makes clear, there’s a lot more to the story. As it turned out, the drug’s efficacy had nothing to do with its GABA-AT substrate behavior. But further investigation showed that it’s not even correlated with its activation of the other enzyme, GAD. None of the reasons behind the compound’s sale to Parke-Davis held up, except the biggest one: it worked well in the company’s animal models.

The biologists at P-D eventually figured out what was going on, up to a point. The compound also binds to a particular site on voltage-gated calcium channels. That turns out to block the release of glutamate, whose actions would be opposed to those of GABA. So they ended up in the same place (potentiation of GABA effects) but through a mechanism that no one suspected until after the compound had been recommended for human trials! There were more lucky surprises: Lyrica has excellent blood levels and penetration into the brain, while none of the other analogs came close. As it happened, and as the Parke-Davis folks figured out, the compound was taken up by active transport into the brain (via the System L transporter), which also helps account for its activity.

And Silverman goes on to show that while the compound was originally designed as a GABA analog, it doesn’t even perform that function. It has no binding to any GABA receptor, and doesn’t affect GABA levels in any way. As far as I can see, a really thorough, careful pharmacological analysis before going into animals would probably have killed the compound before it was even tested, which goes to show how easy it is to overthink a black-box area like CNS.

So on one level, this is indeed an academic compound that went to industry and became a drug. But looked at from another perspective, it was an extremely lucky shot indeed, for several unrelated reasons, and the underlying biology was only worked out once the compound went into industrial development. And from any angle, it’s an object lesson in how little we know, and how many surprises are waiting for us. (Silverman himself, among other things, is still in there pitching, looking for a good inhibitor of GABA aminotransferase. One such drug, a compound going back to 1977 called vigabatrin, has made it to market for epilepsy in a few countries, but has never been approved in the US because of retinal toxicity).

Comments (24) + TrackBacks (0) | Category: Academia (vs. Industry) | Drug Development | Pharmacokinetics | The Central Nervous System

January 29, 2008

The Animal Testing Hierarchy

Email This Entry

Posted by Derek

I've had some questions about animal models and testing, so I thought I'd go over the general picture. As far as I can tell, my experience has been pretty representative.

There are plenty of animal models used in my line of work, but some of them you see more than others. Mice and rats are, of course, the front line. I’ve always been glad to have a reliable mouse model, personally, because that means the smallest amount of compound is used to get an in vivo readout. Rats burn up more hard-won material. That's not just because they're uglier, since we don’t dose based on per cent ugly, but rather because they're much larger and heavier. The worst were some elderly rodents I came across years ago that were being groomed for a possible Alzheimer’s assay – you don’t see many old rats in the normal course of things, but I can tell you that they do not age gracefully. They were big, they were mean, and they were, well, as ratty as an animal can get. (They were useless for Alzheimer's, too, which must have been their final revenge).

You can’t get away from the rats, though, because they’re the usual species for toxicity testing. So if your pharmacokinetics are bad in the rat, you’re looking at trouble later on – the whole point of tox screens is to run the compound at much higher than usual blood levels, which in the worst cases you may not be able to reach. Every toxicologist I’ve known has groaned, though, when asked if there isn’t some other species that can be used – just this time! – for tox evaluation. They’d much rather not do that, since they have such a baseline of data for the rat, and I can’t blame them. Toxicology is an inexact enough science already.

It’s been a while since I’ve personally seen the rodents at all, though, not that I miss them. The trend over the years has been for animal facilities to become more and more separated from the other parts of a research site – separate electronic access, etc. That’s partly for security, because of people like this, and partly because the fewer disturbances among the critters, the better the data. One bozo flipping on the wrong set of lights at the wrong time can ruin a huge amount of effort. The people authorized to work in the animal labs have enough on their hands keeping order – I recall a run of assay data that had an asterisk put next to it when it was realized that a male mouse had somehow been introduced into an all-female area. This proved disruptive, as you’d imagine, although he seemed to weather it OK.

Beyond the mouse and rat, things branch out. That’s often where the mechanistic models stop, though – there aren’t as many disease models in the larger animals, although I know that some cardiovascular disease studies are (or have been) run in pigs, the smallest pigs that could be found. And I was once in on an osteoporosis compound that went into macaque monkeys for efficacy. More commonly, the larger animals are used for pharmacokinetics: blood levels, distribution, half-life, etc. The next step for most compounds after the rat is blood levels in dogs – that’s if there’s a next step at all, because the huge majority of compounds don’t get anywhere near a dog.

That’s a big step in terms of the seriousness of the model, because we don’t use dogs lightly. If you’re getting dog PK, you have a compound that you’re seriously considering could be a drug. Similarly, when a compound is finally picked to go on toward human trials, it first goes through a more thorough rat tox screen (several weeks), then goes into two-week dog tox, which is probably the most severe test most drug candidates face. The old (and cold-hearted) saying is that “drugs kill dogs and dogs kill drugs”. I’ve only rarely seen the former happen (twice, I think, in 19 years), but I’ve seen the second half of that saying come true over and over. Dogs are quite sensitive – their cardiovascular systems, especially – and if you have trouble there, you’re very likely done. There’s always monkey data – but monkey blood levels are precious, and a monkey tox screen is extremely rare these days. I’ve never seen one, at any rate. And if you have trouble in the dog, how do you justify going into monkeys at all? No, if you get through dog tox, you're probably going into man, and if you don't, you almost certainly aren't.

Comments (8) + TrackBacks (0) | Category: Animal Testing | Drug Assays | Drug Development | Pharmacokinetics | Toxicology

January 18, 2008

Eat It, Breathe It, Soak in It?

Email This Entry

Posted by Derek

After Pfizer’s Exubera inhaled-insulin product died so horribly in the market last year, the other companies working in the same space had to be worried. Lilly and Alkermes have had a long-running program, as has a smaller company called Mannkind. But recently, another contender, Novo Nordisk, has announced that they and partner Aradigm have decided to cut their losses. The In Vivo Blog has an excellent roundup.

According to Novo’s CEO, they (like Pfizer) were focusing on prandial insulin because that was basically the only thing they could get to work through inhalation. Now that they’ve seen how well that went over, they’ve decided to spend the money on different proteins (basal insulin, glucagon-like-peptide 1 analogs, etc.) They have a GLP-1 analog in Phase III, but apparently are heading toward the clinic with a second-generation one that can work by the inhaled route.

I wish them luck. We really need new routes of administration for drugs, and every seemingly good candidate has some real problems. There’s a limit to how much compound you can administer transdermally through a patch, for example, and a limit to how quickly it can be administered. Long, slow, continuous delivery is fine, but no one’s going to be marketing an epinephrine patch for anaphylactic shock any time soon. Similarly, you can probably forget about antibiotic-sized total doses, too, because nobody’s skin has enough surface area. (I know, I know, on some people you might think it would work – but if you weigh a lot, you probably need more antibiotic to start with on a mg/kilo basis, and meanwhile your surface area goes up as a square while your volume goes up as a cube, and it’s a losing battle).
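
For anyone who wants that square-cube arithmetic spelled out, here's a quick illustration (idealized scaling only, no real anatomy implied):

```python
# Scale a body up by a linear factor s: skin area grows as s**2, mass as s**3,
# so the patch area available per kilogram actually shrinks as the patient grows.
for s in (1.0, 1.25, 1.5):
    area, mass = s ** 2, s ** 3
    print(f"linear scale {s:.2f}: {area:.2f}x skin, {mass:.2f}x mass, "
          f"{area / mass:.2f}x skin per kg")
# at 1.5x the linear size: 2.25x the skin but 3.38x the mass - only 0.67x skin per kg
```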

No, unless we find some way to make the skin crazily permeable, it’s never going to be a great delivery system. And crazily permeable is just what the skin is not, for good reason. That’s why pulmonary delivery makes sense, to a first approximation. The lungs have huge surface area, just like the small intestine does for oral dosing, because both those organs live to absorb things from the environment (as opposed to the skin). The lungs absorb a gas, unfortunately, as opposed to the small molecules absorbed by the intestines, but a gas is just a special subset of small molecule.

But there’s the downside of the idea. While an oral drug is piggybacking on machinery that’s doing what it’s supposed to be doing, lung delivery is making the organ do something it’s not. (Thus the idea of dosing peptides by this route, since the lungs aren’t a soup of proteolytic enzymes, and pulmonary circulation does not feed your compounds right into the sawmill of the liver). While the intestine absorbs all kinds of stuff, the lungs are there to absorb only one gas and excrete only one. And that primary function of oxygen / carbon dioxide transfer is rather vital, so if you’re going to horn in on it, you’d better be sure that you’re not going to degrade things.

That's always been the worry with inhalation dosing. We can get around the acute problem of choking the patients, but the chronic problem of potential lung damage never goes away. Lung function varies quite a bit, too, even under normal conditions. That variation is both patient-to-patient and from time to time – how do you take your inhaled medicine when you have a chest cold, or if you pull a muscle? (And that's another reason why it's sort of a grim cosmic joke that insulin turns out to be the big test for peptide drug delivery through the lungs, since its safe dosing window can be so narrow).

I’ll go into the ups and downs of other potential administration routes in another post. Most of them involve sharp objects, though, so they take on a certain similarity, and have the same only-if-I-have-to reputation.

Comments (3) + TrackBacks (0) | Category: Diabetes and Obesity | Drug Development | Pharmacokinetics

August 28, 2007

Like Clockwork

Email This Entry

Posted by Derek

There are a lot of drug development issues that people outside the field (and beginning medicinal chemists) don't think about. A significant one that sounds trivial is how often your wonder drug is going to be taken.

Once a day is the standard, and it's generally what we shoot for unless there's some reason to associate the drug with meals, sleep/wake cycles, or the like. People can remember to take something once a day - well, they remember it better than most of the other dosing schedules, anyway. That's why you actually want your compounds to be metabolized and cleared - everything has to be ready for the next dose tomorrow.

If your compound has a long half-life in the body after dosing, you'll step on the tail end of the last dose and you can see gradual accumulation of the drug in plasma or other tissues. And that's almost always a bad thing, because eventually every drug in the world is going to do something that you don't want. All you have to do is get the concentration up too high for too long (and figuring out what's too high and what's too long is the one-sentence job description of a toxicologist). If you stairstep your way up with accumulating doses, you'll get there in the end.
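
For the quantitatively minded, that stairstep comes straight out of the half-life. Here's a quick sketch using the standard accumulation-ratio formula for repeated dosing at a fixed interval - simple single-exponential assumptions, nothing specific to any real compound:

```python
# Steady-state accumulation ratio for dosing every tau hours with half-life t_half:
# R = 1 / (1 - 2**(-tau / t_half)). R near 1 means each dose starts nearly fresh;
# a large R means you're stacking new doses on the tail of the old ones.
def accumulation_ratio(t_half_h, tau_h=24.0):
    return 1.0 / (1.0 - 2.0 ** (-tau_h / t_half_h))

for t_half in (6, 12, 24, 72):
    print(f"t1/2 = {t_half:>2} h, dosed once a day -> "
          f"accumulation ratio {accumulation_ratio(t_half):.2f}")
# 6 h -> 1.07, 12 h -> 1.33, 24 h -> 2.00, 72 h -> 4.85
```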

Ah, you might say, then just take the drug every other day. Simple! Sorry. Every other day (or every three, or four) is a complete nightmare for patient compliance. People lose track, and doctors know it. You'd better have a really compelling reason to go ahead with a weird regimen like that, and if you do, someone's going to seize the chance to come into your market with a once-a-day as soon as they can find one. (The exceptions to this are drugs given in a clinic, like many courses of chemotherapy - but in those cases, someone else is keeping track).

How about more often than once a day (q.d., in the Latin lingo)? Well, twice a day (b.i.d.) can work if it's morning/night. Three times a day can go with meals, presumably, but people are going to get tired of seeing your pills. More than three times a day? There'd better be a reason, and it had better be good.

So don't be scared as you watch your compounds disappear after giving them to the animals. You want that. Just not too quickly, and not too slowly, either.

Comments (19) + TrackBacks (0) | Category: Drug Development | Pharma 101 | Pharmacokinetics

April 10, 2007

Sulfur, Your Pal. Mostly.

Email This Entry

Posted by Derek

I had a question the other day in my e-mail about various sulfur-containing functional groups in drugs. My answers, condensed, were as follows:

Sulfides: will always be under suspicion for oxidation in vivo. If that's your main mode of metabolism and clearance, though, then the problem can be manageable. Still, many people avoid them to not have to deal with the whole issue, and I can't blame them. I do the same. Since the reagents needed to prepare them tend to reek, it's a handy bias to have.

Sulfoxides: I spent quite a while on an old project turning out a whole line of these. I'm not sure if I'd do that again, though. Sulfoxides are interestingly polar, but they're also frustratingly chiral. If you need a specific right-hand or left-hand sulfoxide (and I did!), there are numerous not-always-appealing ways to get them. The other worry about them is that they can get either oxidized (up to the sulfone) or reduced back down to the sulfide. A good example of this problem is in the -prazole proton pump inhibitor drugs, which are probably the most prominent sulfoxides on the market. Some of them (like omeprazole) get oxidized, and others (like rabeprazole) get reduced. I've even heard of a chiral sulfoxide going in vivo and coming back out in the urine as the other enantiomer, via reduction and chiral oxidation. Many people prefer to avoid the whole issue - and after my experiences, I can't say I blame them here, either.

Sulfone: finally, a metabolically stable one. Sulfones have a reputation as rock-solid functional groups, at least when there aren't active hydrogens next to them. Of course, sometimes the compounds are also stable rocks that don't like to dissolve, but we have that problem with everything. I haven't come across anyone with an unkind word for sulfones.

Sulfonamides: If you're an experienced medicinal chemist, boy, have you cranked out some sulfonamides in your time. They're just so easy to make, and you can get so much structural variation out of them. But secondary ones (with a free NH) can get you into trouble in vivo, since they're so acidic. Acidic compounds can behave weirdly when they try to cross out of the gut or into cells, and have a reputation for hanging around in the blood forever. My bias has always been to go with sulfonamides that have fully substituted nitrogens, and I say let 'em rip.

So, those are my biases. Readers are invited to unload their buried feelings about sulfur functionality in the comments.

Comments (12) + TrackBacks (0) | Category: Life in the Drug Labs | Pharmacokinetics

December 6, 2006

Bigger And Greasier

Email This Entry

Posted by Derek

Several people have remarked on how large and greasy a molecule torcetrapib is, and speculated about whether that could have been one of its problems. Now, I have as much dislike of large and greasy molecules as any good medicinal chemist, but somehow I don't think that was the problem here.

For the non-medicinal-chemists, the reason we're suspicious of those things is that the body is suspicious of them, too. There aren't all that many non-peptidic, non-carbohydrate, non-lipid, non-nucleic acid molecules in the body to start with - those categories take care of an awful lot of what's available, and they're all handled by their own special systems. A drug molecule is an interloper right from the start, and living organisms have several mechanisms designed to seek out and destroy anything that isn't on the guest list.

An early line of defense is the gut wall. Molecules that are too large or too hydrophobic won't even get taken up well. The digestive system spends most of its time breaking everything down into small polar building blocks and handing them over to the portal circulation, there to be scrutinized by the liver before heading out into the general circulation. So anything that isn't a small polar building block had better be ready to explain itself. There are dedicated systems that handle absorption of fatty acids and cholesterol, and odds are that they're not going to recognize your greaseball molecule. It's going to have to diffuse in on its own, which puts difficult to define, but nonetheless real limits on its size and polarity.

Then there's that darn liver. It's full of metabolizing enzymes, many of which are basically high-capacity shredding machines with binding sites that are especially excellent for nonpolar molecules. That first-pass metabolism right out of the gut is a real killer, and many good drug candidates don't survive it. For many (most?) others, destruction by liver enzymes is still the main route of clearance.

Finally, hydrophobic drug molecules can end up in places you don't want. The dominant solvent of the body is water, of course, albeit water with a lot of gunk in it. But even at their thickest, biological fluids are a lot more aqueous than not, especially when compared to the kinds of solvents we tend to make our molecules in. A hydrophobic molecule will stick to all sorts of things (like the greasier exposed parts of proteins) rather than wander around in solution, and this can lead to unpredictable behavior (and difficulty getting to the real target).

That last paragraph is the one that could be relevant to torcetrapib's failure. The others had already been looked at, or the drug wouldn't have made it as far as it did. But the problem is that for a target like CETP, a greasy molecule may be the only thing that'll work. After all, if you're trying to mess up a system for moving cholesteryl esters around, your molecule may have to adopt a when-in-Rome level of polarity. The body may be largely polar, but some of the local environments aren't. The challenge is getting to them.

Comments (19) + TrackBacks (0) | Category: Cardiovascular Disease | Drug Development | Pharmacokinetics | Toxicology

August 16, 2004

Clay Lies Still, But Blood's A Rover

Email This Entry

Posted by Derek

When a drug makes it into the bloodstream (which is no sure thing, on my side of the business), it doesn't just float around by itself. Blood itself is full of all kinds of stuff, and there are many things in it that can interact with drug molecules.

For one thing, compounds can actually wander in and out of red blood cells. This usually isn't a big deal, but once in a while a compound will find a binding site in there, which had flippin' well better not be on the hemoglobin protein. Depending on the on- and off-rates, this can either add a welcome time-release feature to the dosing or it can be a real pain. I haven't heard as much about interactions with white cells, but since they're a much smaller fraction of the total blood it's not something we'd be likely to notice.

More commonly, drugs stick to some sort of plasma protein. The most common one is serum albumin, and another big player is alpha-1 acid glycoprotein, or AGP. Albumin's found in large amounts and has several distinct binding sites. Acidic drugs are well known to hold on to it. As far as I'm aware, no one's absolutely sure what it's there for, but it must be pretty important. The multiple binding sites make it seem like it could be some sort of concentration buffer for several different substances, but which ones? (I've never heard of an albumin knockout mouse - I assume that it would be lethal.)

The same comments about good and bad effects apply. A lot of effort has gone into schemes to predict plasma protein behavior, with success that I can charitably describe as "limited." The real test is to expose your compounds to fresh blood and see if you can get them back out. Some degree of protein binding is welcome, and you can go on up to 99% without seeing any odd effects. But at 99-and-some-nines you can start to assume that something is wrong, and that the interaction is too tight for everyone's good.
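
The arithmetic behind that "99-and-some-nines" worry fits in a few lines - plain percentages, with the usual caveat that it's the free fraction that does the pharmacology:

```python
# Each extra nine of plasma protein binding cuts the free (active) fraction tenfold.
for percent_bound in (90.0, 99.0, 99.9, 99.99):
    free_fraction = 1.0 - percent_bound / 100.0
    print(f"{percent_bound}% bound -> {free_fraction:.4%} free")
# 99% bound still leaves 1% of the drug free; 99.9% leaves 0.1% - a tenfold drop
```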

But when you're doing your blood assay, you had better make sure to try it with all the species that you're going to be dosing in. There's a kinase inhibitor from a few years back called UCN-01 that provides a cautionary tale. It was dosed up to high levels in rats and dogs, wasn't bad, passed its toxicology tests, and went into human trials. They started out at one-tenth the maximum tolerated rat dose in the Phase I volunteers, which should be a good margin. But when they got the blood samples worked up, everyone just about fell out of their chairs.

There was at least ten times as much drug circulating around as they'd expected, because it was all stuck to AGP and it just wasn't coming off. A single dose of the drug had a half-life in humans of about 45 days, which must be some sort of record. Well, you might think, what's the problem? A once-a-month drug, right? But it doesn't work like that: the compound was so tightly bound that it would never reach the tumor cells that it was supposed to treat. All it was doing was just riding around in the blood. And the clinical program really dodged one from the safety perspective, too, because as they escalated the dose they would have eventually saturated all the binding that the AGP had to offer. Then the next higher dose would have dumped a huge overage of free drug into the blood, and all at once. Not what you're looking for.
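
That dose-escalation cliff is easy to see with a one-site saturable binding model. The binding capacity and Kd below are invented numbers, chosen only to show the shape of the problem - they are not UCN-01's or AGP's actual parameters:

```python
import math

# Free drug concentration when a single class of binding sites saturates:
# total = free + bmax * free / (kd + free), solved as a quadratic in free.
# bmax (binding capacity) and kd (affinity) are made-up illustration values.
def free_conc(total, bmax=50.0, kd=0.01):
    b = kd + bmax - total
    return (-b + math.sqrt(b * b + 4.0 * kd * total)) / 2.0

for total in (10, 30, 50, 60, 80):
    free = free_conc(total)
    print(f"total {total:>3} -> free {free:8.3f} ({100.0 * free / total:5.1f}% free)")
# Below the ~50-unit binding capacity almost nothing is free; push the dose past it
# and nearly every additional unit shows up as free drug, all at once.
```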

The compound is still being investigated, but it's having a rough time of it. It's been in numerous Phase I trials, with all sorts of dosing schedules. A look through the literature shows that the compound is mainly being used as a tool in cell assays, where there's no human AGP to complicate things. With so many kinase inhibitors out there being developed, it's going to be hard to find a place for one with such weird behavior.

Comments (5) + TrackBacks (0) | Category: Cancer | Pharmacokinetics

May 25, 2004

Down the Hatch

Email This Entry

Posted by Derek

We have a lot of received wisdom in the drug business, rules of thumb and things that everybody knows. One of the things that we all know is that the gut wall isn't much fun for our drugs to get across sometimes. That's inconvenient, since most people would prefer to swallow their medicine rather than take part in the more strenuous dosage forms.

Go around asking random medicinal chemists about oral absorption of drugs, and you'll get more things that everyone knows. There will be lots of talk about solubility and allied topics like particle size, salt forms, formulations and so on. Some of this is valid (I'd vote for particle size), but some of it is hooey. For example, I'm not convinced that solubility has much to do with oral dosing (once you get past the powdered-glass stage, naturally.) I've had wonderfully soluble drug candidates that went nowhere, and I've had brick dust that showed reasonable blood levels. I'm just barely willing to admit that there's a trend (in a really wide data set), but I'm not willing to admit that it's a very useful trend. But solubility can be measured (over and over!), so there's a constituency for it.

You'll also get a lot of stuff about P-glycoprotein, and the necessity of doing some sort of cellular assay to see if your compound is affected by it. That's a protein I've spoken about from time to time, which sits in the cell membrane and pumps a variety of compounds from one side to the other. Now, Pgp is a real thing, both in the gut and in the brain. But there are a lot more transporter proteins out there than most of us realize, hundreds and hundreds of the damn things, and we don't have much of a handle on them. I think that they're a big opportunity for drug development in the coming years, assuming we start to get a clue.

People get excited about Pgp because it was one of the first ones characterized, and because it does seem to explain the failure of a few drugs. There's a cellular assay, using the famous Caco-2 colon cells that express the protein, which is supposed to give you some idea of Pgp's effect on the membrane permeability of your compounds. Unfortunately, I'm not convinced that it gives you much more than a reading of how they behave in the Caco-2 assay, which probably isn't worth knowing for its own sake, to put it kindly. But folks are so desperate to know why their drugs don't get absorbed well (and how they can avoid wasting any more of their working lives on such) that they'll seize on any technique that offers hope.

You'll also hear about metabolism of drugs by enzymes in the gut wall, but as far as I can see, that's an overrated fear. (There was a review article on this a few years back from a group at Merck, and that's what they concluded.) People like this explanation because it makes some sense. We all know about liver enzymes ripping our compounds to bits, and here they are in the gut wall! No wonder our compounds stink! And this is also something you can screen for, so you're not left sitting there alone with the black box. Far better to be able to tell everyone that you think you have a handle on the problem and that you're running assays to get around it, even if it isn't true.

Nope, our understanding of drug absorption still reeks of voodoo vapors, despite many attempts at exorcism. It's annoying and it's disturbing, but it's the state of the art. Anyone that can do better will make a fortune.

Comments (10) + TrackBacks (0) | Category: Drug Development | Pharmacokinetics

April 18, 2004

The March of Folly Leader Board

Email This Entry

Posted by Derek

The first comment to the original March of Folly post below mirrors the e-mail I've received: the people's choice for the technology most-likely-to-be-embarrassing is. . .(rustling of envelope): RNA interference.

There's a good case to be made for that, and it doesn't contradict my oft-stated opinion that RNAi is going to be good for one or more Nobel prizes. The big challenge will be how to divide things up correctly - we may well see some spillover into Chemistry from the Medicine/Physiology category. Believe me, there are several folks who should keep their eyes open for discount fares to Stockholm. This will probably happen in about five years or so, given the usual pace of the Nobel folks.

But industrial enthusiasm for RNAi may well have gotten out of hand in the last year or two. There are a number of small companies frantically trying to take the technique into the clinic; the whole thing reminds everyone of the heyday of antisense therapeutics. Remember antisense DNA? People are still out there trying to make it work, but it's been a lot harder than anyone would have wanted to believe. If you'd been able to show folks the future back in the late 1980s, a bunch of venture capitalists would have had rug-biting fits.

And RNA-based therapies suffer from almost exactly the same problems, and for the same reasons. Delivery of the molecule and its stability once dosed are going to be very tricky. One of the first things being targeted is macular degeneration, because the inside of the eye is a rather tranquil pond, pharmacokinetically speaking, and the cells there are known to take things up rather freely. But once you get out of that best-case situation, well, good luck. With any luck, RNAi might be able to adapt a successful antisense technique - if someone finds one.

Comments (5) + TrackBacks (0) | Category: Drug Development | Drug Industry History | Pharmacokinetics

February 18, 2004

How Drugs Die

Email This Entry

Posted by Derek

Everyone in the industry would like to do something about the failure rate of drugs in clinical trials. It would be far better to have not spent the time and money on these candidates, and the regret just increases as you move further down the process. A Phase I failure is painful; a Phase III failure can affect the future of the whole company.


So why do these drugs fall out? Hugo Kubinyi, in last August's Nature Reviews Drug Discovery, suggests that it's not for the reasons that we think. As he notes, there are two widely cited studies that have suggested that a good 40% of clinical failures are due to poor pharmacokinetics. That area is also known in the trade as ADME, for Absorption, Distribution, Metabolism, and Excretion - the four things that happen to a drug once it's dosed. And we have an awful time predicting all four of them.


Of the four, we have the best handle on metabolism. In the preclinical phase, we expose compounds to preparations from human liver cells, and that gives a useful guide to what's going to happen to them in man. We also expose advanced compounds to human liver tissue itself, which isn't exactly a standard item of commerce, but serves as a more exacting test. Most of the time, these (along with animal studies) keep us from too many surprises about how a compound is going to be broken down. But the other three categories are very close to being black boxes. Dosing in dogs is considered the best model for oral dosing in humans for these, but there are still surprises all the time.


That 40% figure has inspired a lot of hand-wringing, and a lot of expenditure. But Kubinyi says that it's probably wrong. Going back over the data sets, he says that the sample set is skewed by the inclusion of an inappropriately large group of anti-infective compounds with poor properties. If you adjust to a real-world proportion, you get an ADME failure rate of only 7%.


Now, when this paper came out, I think that there was consternation all over the drug industry. (There sure was among some of my co-workers.) The ADME problem has been common knowledge for years now, and it was disturbing to think that it wasn't even there. So disturbing, it seems, that many people have just decided to ignore Kubinyi's contention and carry on as if nothing had happened. There have been big investments in ways to model and predict these properties, and I think that many of these programs have a momentum of their own, which might not be slowed down by mere facts.


The natural question is what Kubinyi thinks might be our real problem. In his adjusted data set, 46% of all failures result from lack of efficacy in Phase II. He admits that some of these (in either approach to the data) might still reflect bad pharmacokinetics, but still maintains that poor PK has made a much smaller contribution than everyone believes. Here's his drug development failure breakdown, which makes his point:


46% drop out from lack of efficacy
17% from animal toxicity (beyond the usual preclinical tox)
16% from adverse events in humans
7% from bad ADME properties
7% from commercial decisions
7% from other miscellaneous reasons

Comments (0) + TrackBacks (0) | Category: Drug Assays | Drug Development | Pharmacokinetics | Toxicology