About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek, email him directly: firstname.lastname@example.org
December 18, 2014
I had an interesting note this morning from a reader who's been asked about writing an introductory textbook on medicinal chemistry. He's been looking over the field, and wondering what to include (and what to leave out). Quite a few current med-chem texts have a fairly robust section on combichem, for example, but many of these seem out of proportion to the technique's current importance.
So what would you deemphasize if you were writing one of these? And what would you give more time to? (We'll take the obligatory references to finding jobs in the field as already having been made!)
Category: Pharma 101
September 22, 2014
Erland Stevens at Davidson is going to be running an online med-chem course on edX, the MOOC platform founded by Harvard and MIT. It starts in October, runs for 8 weeks, can be audited for free, and covers these topics:
(1) The drug approval process (early drugs, clinical trials, IP factors)
(2) Enzymes and receptors (inhibition, Ki, types of ligands, Kd)
(3) Pharmacokinetics (Vd, CL, compartment models)
(4) Metabolism (phase I and II, genetic factors, prodrugs)
(5) Molecular diversity (drug space, combi chem, libraries)
(6) Lead discovery (screening, filtering hits)
(7) Lead optimization (FG replacements, isosteres, peptidomimetics)
(8) Important drug classes (selected examples)
So if you know someone who would like to have a better understanding of the basics of med-chem and has been looking for an opportunity, this might be the answer. Stevens taught this one in the spring on the same platform, and had 14,000 people sign up at the beginning.
Update: from the comments, there's another med-chem course starting on Coursera shortly, from UCSD: https://www.coursera.org/course/drugdiscovery.
Category: Pharma 101
January 23, 2013
Reader Andy Breuninger, from completely outside the biopharma business, sends along what I think is an interesting question, and one that bears on a number of issues:
A question has been bugging me that I hope you might answer.
My understanding is that a lot of your work comes down to taking a seed molecule and exploring a range of derived molecules using various metrics and tests to estimate how likely they are to be useful drugs.
My question is this: if you took a normal seed molecule and a standard set of modifications, generated a set of derived molecules at random, and ate a reasonable dose of each, what would happen? Would 99% be horribly toxic? Would 99% have no effect? Would their effects be roughly the same or would one give you the hives, another nausea, and a third make your big toe hurt?
His impression of drug discovery is pretty accurate. It very often is just that: taking one or more lead compounds and running variations on them, trying to optimize potency, specificity, blood levels/absorption/clearance, toxicology, and so on. So, what do most of these compounds do in vivo?
My first thought is "Depends on where you start". There are several issues: (1) We tend to have a defined target in mind when we pick a lead compound, or (if it's a phenotypic assay that got us there), we have a defined activity that we've already seen. So things are biased right from the start; we're already looking at a higher chance of biological activity than you'd have by randomly picking something out of a catalog or drawing something on a board.
And the sort of target can make a big difference. There are an awful lot of kinase enzymes, for example, and compounds tend to cross-react with them, at least in the nearby families, unless you take a lot of care to keep that from happening. Compounds for the G-protein coupled biogenic amine receptors tend to do that, too. On the other hand, you have enzymes like the cytochromes and binding sites like the aryl hydrocarbon receptor - these things have evolved to recognize all sorts of structurally disparate stuff. So against the right (or wrong!) sort of targets, you could expect to see a wide range of potential side activities, even before hitting the random ones.
(2) Some structural classes have a lot more biological activity than others. A lot of small-molecule drugs, for example, have some sort of basic amine in them. That's an important recognition element for naturally occurring substances, and we've found similar patterns in our own compounds. So something without nitrogens at all, I'd say, has a lower chance of being active in a living organism. (Barry Sharpless seems to agree with this). That's not to say that there aren't plenty of CHO compounds that can do you harm, just that there are proportionally more CHON ones that can.
Past that rough distinction, there are pharmacophores that tend to hit a lot, sometimes to the point that they're better avoided. Others are just the starting points for a lot of interesting and active compounds - piperazines and imidazoles are two cores that come to mind. I'd be willing to bet that a thousand random piperazines would hit more things than a thousand random morpholines (other things being roughly equal, like molecular weight and polarity), and either of them would hit a lot more than a thousand random cyclohexanes.
(3) Properties can make a big difference. The Lipinski Rule-of-Five criteria come in for a lot of bashing around here, but if I were forced to eat a thousand random compounds that fit those cutoffs, versus having the option to eat a thousand random ones that didn't, I sure know which ones I'd dig my spoon into.
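Those Rule-of-Five cutoffs are simple enough to write down as a filter. Here's a minimal sketch in Python - the function name and the example property values are made up for illustration, but the four cutoffs (and the "two or more violations" flag) are the standard Lipinski criteria:

```python
def passes_ro5(mw, clogp, h_donors, h_acceptors):
    """Return True if a compound clears Lipinski's rule-of-five filter.

    The classic criteria: molecular weight <= 500, calculated logP <= 5,
    <= 5 hydrogen-bond donors, <= 10 hydrogen-bond acceptors. Lipinski's
    original formulation flags compounds with two or more violations as
    poor oral-absorption risks, so one violation still passes.
    """
    violations = sum([
        mw > 500,
        clogp > 5,
        h_donors > 5,
        h_acceptors > 10,
    ])
    return violations < 2

# Illustrative (invented) property values:
print(passes_ro5(mw=380, clogp=2.8, h_donors=2, h_acceptors=5))   # typical oral drug
print(passes_ro5(mw=820, clogp=6.1, h_donors=5, h_acceptors=13))  # large, greasy outlier
```

In practice you'd compute those descriptors with a cheminformatics toolkit rather than typing them in, but the filter itself really is this simple.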
And finally, (4): the dose makes the poison. If you go up enough in dose, it's safe to say that you're going to see an in vivo response to almost anything, including plenty of stuff at the supermarket. Similarly, I could almost certainly eat a microgram of any compound we have in our company's files with no ill effect, although I am not motivated to put that idea to the test. Same goes for the time that you're exposed. A lot of compounds are tolerated for single-dose tox but fail at two weeks. Compounds that make it through two weeks don't always make it to six months, and so on.
How closely you look makes the poison, too. We find that out all the time when we do animal studies - a compound that seems to cause no overt effects might be seen, on necropsy, to have affected some internal organs. And one that doesn't seem to have any visible signs on the tissues can still show effects in a full histopathology workup. The same goes for blood work and other analyses; the more you look, the more you'll see. If you get down to gene-chip analysis, looking at expression levels of thousands of proteins, then you'd find that most things at the supermarket would light up. Broccoli, horseradish, grapefruit, garlic and any number of other things would kick a full expression-profiling assay all over the place.
So, back to the question at hand. My thinking is that if you took a typical lead compound and dosed it at a reasonable level, along with a large set of analogs, you'd find that any of them with overt effects would probably show a similar profile (for good or bad) to the most active compound's, just less of it. The others wouldn't be as potent at the target, or wouldn't reach the same blood levels. The chances of finding some noticeable but completely different activity would be lower, but very definitely non-zero, and wildly variable depending on the compound class. These effects might well cluster into the usual sorts of reactions that the body has to foreign substances - nausea, dizziness, headache, and the like. Overall, odds are that most of the compounds wouldn't show much, not being potent enough at any given target or not reaching high enough blood levels to show something, but that's also highly variable. And if you looked closely enough, you'd probably find that they all did something, at some level.
Just in my own experience, I've seen one compound out of a series of dopamine receptor ligands suddenly turn up as a vasodilator, noticeable because of the "Rudolph the Red-Nosed Rodent" effect (red ears and tail, too). I've also seen compound series where they started crossing the blood-brain barrier more effectively at some point, which led to a sharp demarcation in the tolerability studies. And I've seen many cases, when we've started looking at broader counterscreens, where the change of one particular functional group completely knocked a compound out of (or into) activity in some side assay. So you can never be sure. . .
Category: Drug Assays | Drug Development | Pharma 101 | Pharmacokinetics | Toxicology
October 23, 2012
There was a question in the comments from a reader who's picking up med-chem, and I thought it was worth answering out here. I've been meaning to shore up the "Pharma 101" category, and this is a good opportunity. So how, in a case like that compound in the previous post, do you increase a compound's half-life?
The first thing to do is try to figure out why it's so short. That's almost certainly due to the compound being metabolized and excreted - once in a while, you'll find a compound that quietly partitions into some tissue and hides out, but for the most part, a disappearing compound is getting chewed up and spit out. For one that's being injected like this, you'd want to look in the blood for metabolites, and in the urine for those and the parent compound, and try to see how much you can account for. No point in checking feces or the bile contents - if this thing were dosed orally, though, you'd definitely not ignore those possibilities.
Looking for metabolites is something of a black art. There are plenty of standard things to check, like the addition of multiples of 16 (for oxidations). Examination of the structure can give you clues as well. I'd consider what pieces I'd see after cleavage of each of those amide bonds, for example, and look for those (and their oxidation products). The bromine and iodine will help you track things down in the mass spec, for sure. That phenol over on the right-hand side is a candidate for glucuronidation (or some other secondary metabolite), either of the parent or some piece thereof, so you'd want to look for those. Same thing could happen to some of the free acids after cleavage of the amides. And I have no idea what that difluorophosphonate does, but I'd be rooting through the PK literature to find out what such things have done in the past.
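That "multiples of 16" business amounts to checking observed MS peaks against the parent mass plus a table of common metabolic shifts. A minimal sketch, in Python - the shift values are the standard monoisotopic ones, but the parent mass, peak list, and function name here are invented for illustration:

```python
# Common phase I / phase II mass shifts (monoisotopic, Da) used when
# hunting for metabolites in LC-MS data.
MASS_SHIFTS = {
    "+O (oxidation/hydroxylation)": 15.9949,
    "+2O (di-oxidation)": 31.9898,
    "-H2 (desaturation)": -2.0157,
    "+glucuronide": 176.0321,
    "+sulfate": 79.9568,
}

def annotate_peaks(parent_mass, observed_masses, tol=0.01):
    """Match observed MS peaks against parent + common metabolic shifts."""
    hits = []
    for m in observed_masses:
        for name, shift in MASS_SHIFTS.items():
            if abs(m - (parent_mass + shift)) <= tol:
                hits.append((m, name))
    return hits

# Hypothetical parent (400.2001 Da) with two made-up metabolite peaks:
print(annotate_peaks(400.2001, [416.1950, 576.2322]))
```

A real metabolite-ID workup adds cleavage fragments (like the amide pieces mentioned above) and their oxidation products to the candidate list, which is where the black art comes in.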
If you can establish some major metabolic routes, then you can think about hardening the structure. What if some of those amides are N-methylated, for example? Can you do that without killing the binding? Would putting another atom on the other side of the phenol affect its conjugation? There are all sorts of tricks, mostly involving steric hindrance and/or changing electron density around some hot spot.
Update: a commenter notes that I've left out prodrugs, and that's quite right. A prodrug is a sort of deliberate metabolism. You put in a group that gets slowly cleaved off, liberating the active compound - esters are a favorite strategy of this sort. Much of the time, a prodrug is put on to improve the solubility and/or absorption of a compound (that is, something polar and soluble grafted onto a brick), but they can certainly influence half-life, too.
The other major strategy is formulation. If you really can't shore up your structure, or if that isn't enough, then you can think about some formulation that'll deliver your compound differently. Would some sort of slow-release help? These things are trickier with injectables than they are with oral medications, from what experience I've had, but there are still things that can be done.
So that's a short answer - there are, of course, a lot of details involved, and a lot of tricks that have been developed over the years. But that's one way to start.
Category: Pharma 101 | Pharmacokinetics
June 8, 2012
I gave my talk at the Drew University Medicinal Chemistry course, and it got me to thinking about when I was there (1990 or 1991), and my early days in medicinal chemistry in general. There are a lot of things that have to be learned when coming out of a synthetic organic chemistry background, and a few that have to be unlearned. I've written about some of these in the past, but I wanted to bring together some specific examples:
1. I had to appreciate just how strange and powerful metabolizing enzymes are. I approached them from the standpoint of an organic chemist, but P450 enzymes can epoxidize benzene, and I don't know any organic chemists who can do that too well. Ripping open piperazine rings, turning cyclohexanes into cyclohexanols - there are a lot of reactions that are common in metabolic clearance that are not, to put it lightly, part of the repertoire of synthetic organic chemistry.
2. I also had to learn a rough version of the Lipinski rules - basically, that physical properties matter, although the degree to which they matter can vary. You can't increase molecular weight or lipophilicity forever without paying for it. Small polar molecules are handled fundamentally differently than big greasy ones in vivo. This was part of learning that there are many, many different potential fates for small molecules when dosed into a living animal.
3. Another key realization, which took a while to sink in, was that biological assays had error bars, and that this was true whether or not error bars were presented on the page or the screen. Enzyme assays were a bit fuzzy compared to the numbers I was used to as a chemist, but cell assays were fuzzier. And whole-animal numbers covered an even wider range. I had to understand that this hierarchy was the general rule, and that there was not a lot to be done about it in most cases (except, importantly, to never forget that it was there).
4. As someone mentioned in the comments here the other day, alluding to an old post of mine, I had to learn that although I'd been hearing for years that time was money, grad school had been poor preparation for just how true that was. I was used to making everything that I could rather than buying it, but I had to reverse that thinking completely, since I was being paid to use my head more than my hands. (That didn't mean that I shouldn't use my hands, far from it - only that I should use my head first whenever feasible).
5. I also had to figure out how to use my time more efficiently. Another bad grad school habit was the working all hours of the day routine, which tended to make things stretch out. Back then, if I didn't get that reaction set up in the afternoon, well, I was coming back that evening, so I could do it then. But if I was going to keep more regular working hours, I had to plan things out better to make the best use of my time.
6. There were several big lessons to be learned about where chemistry fit into the whole drug discovery effort. One was that if I made dirty compounds, only dirty results could be expected from them. As mentioned above, even clean submissions gave alarmingly variable results sometimes; what could be expected from compounds with large and variable impurities from prep to prep? One of my jobs was not to make things harder than they already were.
7. A second big lesson, perhaps the biggest, was that chemistry was (and is) a means to an end in drug discovery. The end, of course, is a compound that's therapeutically useful enough that people are willing to pay money for it. Without one or more of those, you are sunk. It follows, first, that anything that does not bear on the problems of producing them has to be considered secondary - not unimportant, perhaps, but secondary to the biggest issue. Without enough compounds to sell, everything else that might look so pressing will, in fact, go away - as will you.
8. The next corollary is that while synthetic organic chemistry is a very useful way to produce such compounds, it is not necessarily the only way. Biologics are an immediate exception, of course, but there are more subtle ones. One of the trickier lessons a new medicinal chemist has to learn is that the enzymes and receptors, the cells and the rats, none of them are impressed by your chemical skills and your knowledge of the literature. They do not care if the latest compound was made by the most elegant application of the latest synthetic art, or by the nastiest low-yielding grunt reaction. What matters is how good that compound might be as a drug candidate, and the chemistry used to make it usually gets (and should get) in line behind many more important considerations. "Quickly", "easily", and "reproducibly", in this business, roughly elbow aside the more academic chemical virtues of "complexly", "unusually", and "with difficulty".
Category: Academia (vs. Industry) | How To Get a Pharma Job | Pharma 101
April 27, 2012
So how do drug molecules (and others) get into cells, anyway? There are two broad answers: they just sort of slide in through the membranes on their own (passive diffusion), or they're taken up by pores and proteins built for bringing things in (active transport). I've always been taught (and believed) that both processes can be operating in most situations. If the properties of your drug molecule stray too far out of the usual range, for example, your cell activity tends to drop, presumably because it's no longer diffusing past the cell membranes. There are other situations where you can prove that you're hitching a ride on active transport proteins, by administering a known inhibitor of one of these systems to cells and watching your compound suddenly become inactive, or by simply overloading and saturating the transporter.
There's another opinion, though, that's been advanced by Paul Dobson and Douglas Kell at Manchester, and co-workers. Their take is that carrier-mediated transport is the norm, and that passive diffusion is hardly important at all. This has been received with varying degrees of belief. Some people seem to find it a compelling idea, while others regard it as eccentric at best. The case was made a few years ago in Nature Reviews Drug Discovery, and again more recently in Drug Discovery Today:
All cells necessarily contain tens, if not hundreds, of carriers for nutrients and intermediary metabolites, and the human genome codes for more than 1000 carriers of various kinds. Here, we illustrate using a typical literature example the widespread but erroneous nature of the assumption that the ‘background’ or ‘passive’ permeability to drugs occurs in the absence of carriers. Comparison of the rate of drug transport in natural versus artificial membranes shows discrepancies in absolute magnitudes of 100-fold or more, with the carrier-containing cells showing the greater permeability. Expression profiling data show exactly which carriers are expressed in which tissues. The recognition that drugs necessarily require carriers for uptake into cells provides many opportunities for improving the effectiveness of the drug discovery process.
That's one of those death-or-glory statements: if it's right, a lot of us have been thinking about these things the wrong way, and missing out on some very important things about drug discovery as well. But is it? There's a rebuttal paper out in Drug Discovery Today that makes the case for the defense. It's by a long list of pharmacokinetics and pharmacology folks from industry and academia, and has the air of "Let's get this sorted out once and for all" about it:
Evidence supporting the action of passive diffusion and carrier-mediated (CM) transport in drug bioavailability and disposition is discussed to refute the recently proposed theory that drug transport is CM-only and that new transporters will be discovered that possess transport characteristics ascribed to passive diffusion. Misconceptions and faulty speculations are addressed to provide reliable guidance on choosing appropriate tools for drug design and optimization.
Fighting words! More of those occur in the body of the manuscript, phrases like "scientifically unsound", "potentially misleading", and "based on speculation rather than experimental evidence". Here's a rundown of the arguments, but if you don't read the paper, you'll miss the background noise of teeth being ground together.
Kell and Dobson et al. believe that cell membranes have more protein in them, and less lipid, than is commonly thought, which helps make their case for lots of protein transport and not much lipid diffusion. But this paper says that their figures are incorrect and have been misinterpreted. Another K-D assertion is that artificial lipid membranes tend to have many transient aqueous pores in them, which make them look more permeable than they really are. This paper goes to some length to refute this, citing a good deal of prior art with examples of things that should then have crossed such membranes (but don't), and it also finds fault with the literature that K-D used to back up their own proposal.
This latest paper then goes on to show many examples of non-saturable passive diffusion, as opposed to active transport, which can always be overloaded. Another big argument is over the agreement between different cell-layer models of permeability. Two of the big ones are Caco-2 cells and MDCK cells, but (as all working medicinal chemists know) the permeability values between these two don't always agree, either with each other or with the situation in living systems. Kell and Dobson adduce this as showing the differences between the various transporters in these assays, but this rebuttal points out that there are a lot of experimental differences between literature Caco-2 and MDCK assays that can kick the numbers around. Their take is that the two assays actually agree pretty well, all things considered, and that if transporters were the end of the story, the numbers would be still farther apart.
The blood-brain barrier is a big point of contention between these two camps. This latest paper cites a large pile of literature showing that sheer physical properties (molecular weight, logP) account for most successful approaches to getting compounds into the brain, consistent with passive diffusion, while examples of using active transport are much more scarce. That leads into one of the biggest K-D points, which seems to be one of the ones that drives the existing pharmacokinetics community wildest: the assertion that thousands of transport proteins remain poorly characterized, and that these will come to be seen as the dominant players compared to passive mechanisms. The counterargument is that most of these, as far as we can tell to date, are selective for much smaller and more water-soluble substances than typical drug molecules (all the way from metal ions to things like glycerol and urea), and are unlikely to be important for most pharmaceuticals.
Relying on as-yet-uncharacterized transporters to save one's argument is a habit that really gets on the nerves of the Kell-Dobson critics as well - this paper calls it "pure speculation without scientific basis or evidence", which is about as nasty as we get in the technical literature. I invite interested readers to read both sides of the argument and make up their own minds. As for me, I fall about 80% toward the critics' side. I think that there are probably important transporters that are messing with our drug concentrations and that we haven't yet appreciated, but I just can't imagine that that's the whole story, nor that there's no such thing as passive diffusion. Thoughts?
Category: Drug Assays | Pharma 101 | Pharmacokinetics
March 2, 2010
I was just talking about greasy compounds the other day, and reasons to avoid them. Right on cue, there's a review article in Expert Opinion on Drug Discovery on lipophilicity. It has some nice data in it, and I wanted to share a bit of it here. It's worth noting that you can make your compounds too polar, as well as too greasy. Check these out - the med-chem readers will find them interesting, and who knows, others might, too:
So, what are these graphs? They show how well compounds cross the membranes of Caco-2 cells, a standard assay for permeability. These cells (derived from human colon tissue) have various active-transport pumps going (in both directions), and you can grow them in a monolayer, expose one side to a solution of drug substance, and see how much compound appears on the other side and how quickly. (Of course, good old passive diffusion is operating too - a lot of compounds cross membranes by just soaking on through them).
Now, I have problems with extrapolating Caco-2 data too vigorously to the real world - if you have five drug candidates from the same series and want to rank order them, I'd suggest getting real animal data rather than relying on the cell assay. The array of active transport systems (and their intrinsic activity) may well not match up closely enough to help you - as usual, cultured cell lines don't necessarily match reality. But as a broad measure of whether a large set of compounds has a reasonable chance of getting through cell membranes, the assay's not so bad.
First, we have a bunch of compounds with molecular weights between 350 and 400 (a very desirable space to occupy). The Y axis is the partitioning between the two sides of the cells, and the X axis is LogD, a standard measure of compound greasiness. That thin blue line is the cutoff for 100 nm/sec of compound transport, so the green compounds above it travel across the membrane well, and the red ones below it don't cross so readily. You'll note that as you go to the left (more and more polar, as measured by LogD), the proportion of green compounds gets smaller and smaller. They'd rather hang out in the water than dive through any cell membranes, thanks.
So if you want a 50% chance of hitting that 100 nm/sec transport level, then you don't want to go much more polar than a LogD of 2. But that's for compounds in the 350-400 weight range - how about the big heavyweights? Those are shown in the second graph, for compounds greater than 500. Note that the distribution has scrunched disturbingly. Now almost everything is lousy, and if you want that 50% chance of good penetration, you're going to have to get up to a logD of at least 4.5.
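Those two cutoffs can be boiled down into a crude rule-of-thumb function. This is only a sketch of the numbers quoted above - the thresholds (LogD ~2 for the 350-400 weight range, ~4.5 above 500) are eyeballed from the plots under discussion, not authoritative values, and the interpolation for intermediate weights is my own invention:

```python
def likely_permeable(mol_weight, logd):
    """Rough 50/50 permeability call from the Caco-2 cutoffs quoted above.

    Below MW 400, a LogD of about 2 gives even odds of crossing; above
    MW 500, you need about 4.5. The 400-500 range is bridged here by a
    crude linear interpolation (an assumption, not data from the review).
    """
    if mol_weight <= 400:
        return logd >= 2.0
    if mol_weight > 500:
        return logd >= 4.5
    # linear ramp from 2.0 (at MW 400) up to 4.5 (at MW 500)
    cutoff = 2.0 + (mol_weight - 400) / 100.0 * 2.5
    return logd >= cutoff

print(likely_permeable(380, 2.5))   # mid-weight compound, reasonable LogD
print(likely_permeable(550, 3.0))   # heavy compound at the same greasiness
```

The point of writing it out is just to make the "scrunching" vivid: the same LogD that looks fine at MW 380 is well short of the mark at MW 550.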
That's not too good, because you're always fighting a two-front war here. If you make your compounds that greasy (or more) to try to improve their membrane-crossing behavior, you're opening yourself up (as I said the other day) to more metabolic clearance and more nonspecific tox, as your sticky compounds glop onto all sorts of things in vivo. (They'll be fun to formulate, too). Meanwhile, if you dip down too far into that really-polar left-hand side, crossing your fingers for membrane crossing, you can slide into the land of renal clearance, as the kidneys vacuum out your water-soluble wonder drug and give your customers very expensive urine.
But in general, you have more room to maneuver in the lower molecular weight range. The humungous compounds tend to not get through membranes at reasonable LogD values. And if you try to fix that by moving to higher LogD, they tend to get chewed up or do unexpectedly nasty things in tox. Stay low and stay happy.
Category: Drug Assays | Pharma 101 | Pharmacokinetics
November 19, 2009
I get regular requests to recommend books on various aspects of medicinal chemistry and drug development. And while I have a few things on my list, I'm sure that I'm missing many more. So I wanted to throw this out to the readership: what do you think are the best places to turn? This way I can be more sure of pointing people in the right directions.
I'm interested in hearing about things in several categories - best introductions and overviews of the field (for people just starting out), as well as the best one-stop references for specific aspects of drug discovery (PK, toxicology, formulations, prodrugs, animal models, patent issues, etc.)
Feel free to add your suggestions in the comments, or e-mail them to me. I'll assemble the highest-recommended volumes into a master list and post that. Just in time for the holidays, y'know. . .
Category: Life in the Drug Labs | Pharma 101
August 13, 2009
Why do we test drugs on animals, anyway? This question showed up in the comments section from a lay reader. It's definitely a fair thing to ask, and you'd expect that we in the business would have a good answer. So here it is: because for all we know about biochemistry, about physiology and about biology in general, living systems are still far too complex for us to model. We're more ignorant than we seem to be. The only way we can find out what will happen if we give a new compound to a living creature is to give it to some of them and watch carefully.
That sounds primitive, and I suppose it is. We don't do it in a primitive manner, though. We watch with all the tools of our trade - remote-control physiological radio transmitters, motion-sensing software hooked up to video cameras, sensitive mass spectrometry analysis of blood, of urine, and whatever else, painstaking microscopic inspection of tissue samples, whatever we can bring to bear. But in the end, it all comes down to dosing animals and waiting to see what happens. That principle hasn't changed in decades, just the technology we use to do it.
No isolated enzymes can yet serve as a model for what can happen in a single real cell. And no culture of cells can recapitulate what goes on in a real organism. The signaling, the feedback loops, the interconnectedness of these systems are (so far) too much for us to handle. We keep discovering new pathways all the time, things that no model would have included because we didn't even know that they were there. The end is not yet in sight, occasional newspaper headlines to the contrary.
We do use all those things as filters before a compound even sees its first rodent. In a target-driven approach, which is the great majority of the industry, if a compound doesn't work on an isolated protein, it doesn't go on to the cell assay. If it doesn't work on the cells, it doesn't go on to animals. (And if it kills cells, it most certainly doesn't go on to the animals, unless it's some blunderbuss oncology agent of the old school). The great majority of compounds made in this business have never been given to so much as one mouse, and never will.
So what are we looking for when we finally do dose animals? We're waiting to see if the compound has the effect we're hoping for, first off. Does it lower blood pressure, slow or stop the growth of tumors, or cure viral infections? Doing these things requires having sick animals, of course. But we also give the drug candidates to healthy ones, at higher doses and for longer periods of time, in order to see what else the compounds might do that we don't expect. Most of those effects are bad - I'd casually estimate 99% of the time, anyway - and many of them will stop a drug candidate from ever being developed. The more severe the toxic effect, the greater the chance that it's based on some fundamental mechanism that will be common to all animals. In some cases we can identify what's causing the trouble, once we've seen it, and once in a great while we can use that information to argue that we can keep going, that humans wouldn't be at the same risk. But this is very rare - we generally don't know enough to make a persuasive case. If your compound kills mice or kills rats, your compound is dead, too.
I've lost count of the number of compounds I've worked on that have been pulled due to toxicity concerns; suffice it to say that it's a very common thing. Every time it's been something different, and it's often not for any of the reasons I feared beforehand. I've often said here that if you don't hold your breath when your drug candidate goes into its first two-week tox testing, then you haven't been doing this stuff long enough.
Here's the problem: giving new chemicals to animals to see if they get sick (and making animals sick so that we can see if they get better) are not things that are directly compatible with trying to keep animals from suffering. Ideally, we would want to do neither of those things. Fortunately, several factors all line up in the same direction to keep things moving toward that.
For one thing, animal testing is quite expensive. Only human testing is costlier. In this case, ethical concerns and capitalist principles manage to line up very well indeed. Doing assays in vitro is almost invariably faster and cheaper, so whenever we can confidently replace a direct animal observation with an assay on a dish, plate, or chip, we do. All that equipment I mentioned above has also cut down on the number of animals needed, and that trend is expected to continue as our measurements become more sensitive.
So things are lined up in the right direction. Any company that found a reliable way to eliminate any significant part of its animal testing would immediately find itself in a better competitive position.
And for the existing tests, it's also fortunate that unhappy animals give poor data. We want to observe them under the most normal conditions possible, not with stress hormones running through their systems, and a great deal of time and trouble (and money) goes toward that end. (In this case, it's scientific principles that line up with ethical ones). Diseased animals are clearly going to be in worse shape than normal ones, but in these situations, too, we try to minimize all the other factors so we're getting as clear a read as possible on changes in the disease itself.
So that's my answer: we use animals because we have (as yet) no alternative. And our animal assays prove that to us over and over by surprising us with things we didn't know, and that we would have had no other opportunity to learn. We'd very much like to be able to do things differently, since "differently" would surely mean "faster and more cheaply". None of us enjoy it when our compounds sicken healthy animals, or have no effect on sick ones. The wasted time and effort alone are enough to make any drug discoverer feel that way. There are billions of dollars waiting to be picked up by anyone who finds a better way.
+ TrackBacks (0) | Category: Animal Testing | Pharma 101
February 6, 2009
I did something in the lab the other day that I hadn’t done in several years: run some preparative TLC plates. I had some small reactions that needed to be cleaned up, and the HPLC systems were all in use, so I thought “Why not?” (I wrote here about the decline of analytical TLC in general in some labs, and I think it's fair to say that the larger-scale prep version has seen an even steeper drop in use over the years).
Prep TLC, for those of you not in the business, is a pretty simple technique. You take a square glass plate that’s been coated with a dry layer of ground silica, a white powder that for this application is about the fineness of flour or powdered sugar. You then take your mixture of gunk, dissolve it up in a small volume of solvent, and deposit it in a line across the bottom of the plate, an inch or so up from one side and parallel to it. Then you take a large glass container and add some solvent to the bottom of it, and put your plate in so that the streaked line of material is near the bottom. Here's one running.
The solvent soaks into the layer of silica, and after it gets up an inch or so it hits your line of stuff. As it continues to move up, soaking further and further up the glass plate, the different components of the mixture will be carried along at different rates. The compounds that stick to silica gel (for one reason or another) will lag behind, while the ones that don’t will move out into the lead. After an hour or so, the solvent line will be up near the top of the plate, and your mixture will now be spread out across it into a series of bands. (The TLC page at Wikipedia has some useful images of this). Up at the top, running with the solvent, will be the nonpolar stuff that didn’t have anything to slow it down. Right down near the bottom, not far up from your original streak, will be the most polar stuff, especially any basic amines – silica gel is mildly acidic, so the amines will stick to it very tightly indeed. And in between will be the other components, divided out according to how they balanced out the pull of the silica gel support with the attraction of the solvent moving them along. Sometimes you can see them as colored bands on the silica plate, but more often you shine a UV light on the whole plate to see them. The silica we use has an ingredient that makes it fluoresce green under ultraviolet, and our compounds usually show up as dark blue or purple bands against the green. It’s a color combination known to every working synthetic organic chemist.
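Chemists usually quantify where a band ends up with its Rf value: the distance the compound traveled divided by the distance the solvent front traveled, both measured from the original streak. Here's a minimal sketch of that arithmetic in Python (the band names and distances are made up for illustration, not from any real plate):

```python
# Rf (retention factor) for a TLC band: how far the compound moved
# divided by how far the solvent front moved. Runs from 0 (stuck at
# the origin) to 1 (ran along with the front).

def rf(band_cm: float, solvent_front_cm: float) -> float:
    """Retention factor for a band on a TLC plate."""
    if not 0 <= band_cm <= solvent_front_cm:
        raise ValueError("band must lie between the origin and the solvent front")
    return band_cm / solvent_front_cm

# Hypothetical bands on a plate whose solvent front ran 10 cm:
bands = {"greasy byproduct": 9.2, "desired product": 4.5, "basic amine": 0.6}
for name, dist in bands.items():
    print(f"{name}: Rf = {rf(dist, 10.0):.2f}")
```

A nonpolar compound running with the front has an Rf near 1; a basic amine clinging to the silica near the origin has an Rf near 0.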
You can see that picking different solvents for this process can change things a great deal. A weak solvent (like hexane) will allow almost everything to stick to the silica. (A compound has to be mighty greasy to be swept along by just hexane; I doubt if there’s a drug in the business that you’d be able to clean up that way). A standard mix is some proportion of ethyl acetate mixed with hexane. You can go up to straight ethyl acetate, or even further by mixing in methanol or the like. And if you’re desperate, you can go to most any solvent mixture you like – three-solvent brews, toluene, acetonitrile, acetone, whatever works.
So how do you get the things off? By the lowest-tech method you can imagine. You mark the position of the band (or bands) you want, and then take a metal spatula and scrape the silica there off the plate. You then dump that into a flask and stir it with a strong solvent, then filter off the silica and wash it some more to rinse your compound out.
This used to be much more of an everyday technique, but automated column chromatography (same principle, pumped through a tube) has taken over. But prep TLC still has its appeal. Done with skill, it can provide very clean compounds, with quite good recovery. In fact, its low cost and power have made it a favorite technique at places like WuXi, the outsourcing powerhouse in China. I've had several first-hand descriptions of their prep TLC room, with rows of plates being run, marked, and scraped in assembly-line fashion. It's the sort of thing you'd only do in a cheap-labor market, because of the unavoidable hand work involved, but it is effective.
I don't know where WuXi gets its plates, but if you make your own, it's an even cheaper technique (discounting labor costs, naturally). You take up the silica gel powder in water, make a thick, well-mixed slurry out of it, and spread it across a square of glass, shaking and tapping it to get the air bubbles out. Back when I was doing summer undergraduate work, I poured a number of these things, although it's certainly nothing I've had experience with since the first Reagan administration. For all I know, that's how WuXi does it now. Perhaps they've found a low-cost supplier of their own, but the idea of a cheap supplier for a Chinese outsourcing company is an interesting one all by itself. . .
+ TrackBacks (0) | Category: Life in the Drug Labs | Pharma 101
December 10, 2008
There’s a trick that every medicinal chemist learns very early, and continues to apply every time it’s feasible: take two parts of your compound, and tie them together into a ring.
The reason this works so well may not be immediately obvious if you’re not a medicinal chemist, so let me expand on it a bit. The first thing to know is that this method tends to work either really well or not at all – it’s a “death or glory” move. And that gives you a clue as to what’s going on. The idea is that the rotatable bonds in your molecule are, under normal conditions, doing just that: rotating. Any molecule the size of a normal drug has all kinds of possible shapes and rotational isomers, and room temperature is an energetic enough environment to populate a lot of them.
But there’s only one of them that’s the best for fitting into your drug target, most likely. So what are the odds? As your molecule approaches its binding pocket, there’s a complicated energetic dance going on. Different parts of your drug candidate will start interacting with the target (usually a protein), and that starts to tie down all that floppy rotation. The question is, does the gain resulting from these interactions cancel out the energetic price that has to be paid for them? Is there a pathway that leads to a favorable tight-binding situation, or is your molecule going to approach, flop around a bit, and dance away?
Several things are at work during that shall-we-dance period. The different conformations of your compound vary in energy, depending on how much its parts are starting to bang into each other, and how much you’re asking the bonds to twist around. The closer that desired drug-binding shape is to the shape your molecule wants to be in anyway, the better off you are, from that perspective. So tying back the molecule and making a ring in the structure does one thing immediately: it cuts down on the range of conformations it can take, in the same way that tying a rope between your ankles cuts down on your ability to dance. You’ve handcuffed your molecule, which would probably be cruel if it were sentient, but then, a lot of organic chemistry would be pretty unspeakable if molecules had feelings.
That’s why this method tends to be either a big winner or a big loser. If the preferred binding mode of your compound is close to the shape it takes when you tie it down, then you’ve suddenly zeroed in on just the thing you want, and the binding affinity is going to take a big leap. But if it’s not, well, you’ve now probably made it impossible for the thing to adopt the conformation it needs, and the binding affinity is going to take a big leap over a cliff.
There’s another effect to reducing the flexibility of your compound, and that has to do with entropy. All that favorable-interaction business is one component of the energy involved, namely the enthalpy, but entropy is the other. Loosely speaking, the more disordered a system, the higher its entropy. A floppy molecule, when it binds to a drug target, has to settle down into a much tighter fit, and entropically, that’s unfavorable. Energetically, you’re paying to do that. But if your molecule is already much less flexible, there’s not much of a toll as it fits into the pocket. If loss-of-floppiness is a bad thing, then don’t start out with so much of it.
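To put a very rough number on that entropic toll: a commonly quoted rule of thumb (an estimate that varies a lot from case to case, not a law) charges something like 0.5 kcal/mol of binding free energy for each rotatable bond that has to be frozen on binding. A quick Python sketch, using the standard relation ΔG = RT ln Kd, shows why rigidification can move the needle so much in the best case (the starting ΔG and the number of rotors here are invented for illustration):

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0      # room temperature, K

def kd_from_dg(dg_kcal: float) -> float:
    """Dissociation constant (M) from binding free energy: dG = RT ln(Kd)."""
    return math.exp(dg_kcal / (R * T))

# Suppose a floppy lead binds with dG = -9 kcal/mol. If tying it into
# a ring pre-organizes, say, 4 rotatable bonds, the rule of thumb says
# you could recover roughly 4 * 0.5 = 2 kcal/mol -- provided the ring
# locks in the *right* shape (the "glory" half of death-or-glory).
dg_floppy = -9.0
dg_rigid = dg_floppy - 4 * 0.5
print(f"floppy lead:            Kd ~ {kd_from_dg(dg_floppy):.1e} M")
print(f"rigidified (best case): Kd ~ {kd_from_dg(dg_rigid):.1e} M")
```

Recovering just 2 kcal/mol is nearly a 30-fold improvement in Kd - which is why the payoff, when this trick works, can be so dramatic.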
So, how much do I and my medicinal chemistry colleagues think about this stuff, day to day? A fair amount, but there are parts of it that we probably don’t pay enough attention to. Entropy gets less respect from us than it deserves, I think. It’s easy to imagine molecules bumping into each other, sticking and unsticking, but the more nebulous change-in-disorder part of the equation is just as important. And it doesn’t just apply to our drug molecules – proteins get less disordered as they bind those molecules (or more disordered, in some cases), and those entropic changes can mean a lot, too.
I also mentioned molecules finding a pathway to binding, and that’s something that we don’t think about as much, either. We probably make things all the time that would be potent binders, if they just could get past some energetic hump and wedge themselves into place. But there are no crowbars available; our drug candidates have to be able to work their way in on their own. The can’t-get-there-from-here cases come back from the assays as inactive. The tendency is to imagine these in the binding site already, and to try to think of what could be going wrong in there – when it may be that they’d bind just fine, if only their structures allowed them to come in for a landing.
Picturing this accurately is very hard indeed. We have enough trouble with good representations of static pictures of our molecules bound to their targets, so making a movie of the process is a whole different story. Each frame is on a femtosecond scale – molecules flip around rather quickly – and every frame would have to be computed accurately (drug structure, protein structure, and the energetics of the whole system) for the resulting video clip to make sense. It’s been done, but not all that often, and we’re not good at it.
+ TrackBacks (0) | Category: In Silico | Pharma 101
August 30, 2007
I hope that in decades to come our current drugs will look as crude as I think they will. For all of our knowledge and all our equipment, we still don't have much of an idea of what we're doing around this industry, not compared to the sum of what there is to know.
Most of our drugs (by "most", I mean way over 95%) bind to proteins. And that's fine, as far as it goes, because proteins sure are important things. We love them because many of them have pockets and cavities that fit small molecules, of course, giving us a tremendous leg up. But we haven't figured out how to attack even these reliably, when you consider that there are entire classes that have never been successfully targeted (phosphatases, to pick an outstanding example).
Once you get out of the small-molecule-binding zone, you're out in the wild, wide open prairie of protein-protein interactions. So far, we can't really affect those with small molecules, not worth squat. It's a shame, because the number of potential targets goes up by orders of magnitude when you take these interactions into account - well, assuming that we figure out what these zillions of interactions are actually doing, which is quite another problem in itself. But they're doing something, that's for sure, and we'd love to be able to step in for our own purposes.
But protein-protein interactions are only the beginning. If you want to go upstream and alter protein production at the source, then you're going to be targeting protein-DNA and protein-RNA interactions. The list of known drug-like molecules which can do that is pretty short, and the success rate has been pretty low (more on the reasons for that in another post). And this is another area where only small regions of interaction space have been mapped out and understood, so there's room to work in - if you can find a way to make things work.
Don't stop there, though. We really don't pay enough attention to carbohydrates in all their forms, but they've got some crucial roles, too. Contacts involving complex polysaccharides are key to immune function, and small molecules that can affect them are rare indeed. A whole landscape of inflammation targets is waiting for someone who can get a handle on this stuff. And I haven't even talked about lipids, because frankly, we don't understand a lot of what they're doing. Protein-lipid interactions have been targeted, but can be a hard row to hoe, since the small molecules that work tend to look awfully greasy themselves. But there may also be lipid-lipid interactions that no one has ever noticed, and how you'd target those therapeutically is a real stumper.
There are even more exotic combinations, but you get the idea. When you look at the whole medicinally active universe, it's clear that we've only done successful work in a few small parts of it. An interesting and rewarding time awaits those who can extend those holdings. . .
+ TrackBacks (0) | Category: Drug Development | Pharma 101
August 28, 2007
There are a lot of drug development issues that people outside the field (and beginning medicinal chemists) don't think about. A significant one that sounds trivial is how often your wonder drug is going to be taken.
Once a day is the standard, and it's generally what we shoot for unless there's some reason to associate the drug with meals, sleep/wake cycles, or the like. People can remember to take something once a day - well, they remember it better than most of the other dosing schedules, anyway. That's why you actually want your compounds to be metabolized and cleared - everything has to be ready for the next dose tomorrow.
If your compound has a long half-life in the body after dosing, you'll step on the tail end of the last dose and you can see gradual accumulation of the drug in plasma or other tissues. And that's almost always a bad thing, because eventually every drug in the world is going to do something that you don't want. All you have to do is get the concentration up too high for too long (and figuring out what's too high and what's too long is the one-sentence job description of a toxicologist). If you stairstep your way up with accumulating doses, you'll get there in the end.
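That stairstep has a simple form if you assume plain first-order elimination (a one-compartment sketch, not a real pharmacokinetic workup): with dosing interval τ and half-life t½, the trough concentration at steady state exceeds the first-dose trough by a ratio R = 1/(1 - 2^(-τ/t½)). A couple of lines of Python make the point:

```python
# Steady-state accumulation under repeated dosing, assuming simple
# first-order (exponential) elimination. The half-lives below are
# illustrative numbers, not any particular drug.

def accumulation_ratio(half_life_h: float, interval_h: float) -> float:
    """Steady-state trough relative to first-dose trough: 1/(1 - 2^(-tau/t_half))."""
    return 1.0 / (1.0 - 2.0 ** (-interval_h / half_life_h))

# A 12-hour half-life dosed once a day barely accumulates...
print(f"t1/2 = 12 h, once daily: R = {accumulation_ratio(12, 24):.2f}")
# ...but stretch the half-life to 48 hours and each dose stacks on the last.
print(f"t1/2 = 48 h, once daily: R = {accumulation_ratio(48, 24):.2f}")
```

A 12-hour half-life dosed once a day accumulates only about 1.3-fold; at a 48-hour half-life, you're piling up more than 3-fold over the first dose, and heading toward whatever "too high for too long" turns out to be.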
Ah, you might say, then just take the drug every other day. Simple! Sorry. Every other day (or every three, or four) is a complete nightmare for patient compliance. People lose track, and doctors know it. You'd better have a really compelling reason to go ahead with a weird regimen like that, and if you do, someone's going to seize the chance to come into your market with a once-a-day as soon as they can find one. (The exceptions to this are drugs given in a clinic, like many courses of chemotherapy - but in those cases, someone else is keeping track).
How about more often than once a day (q.d., in the Latin lingo)? Well, twice a day (b.i.d.) can work if it's morning/night. Three times a day can go with meals, presumably, but people are going to get tired of seeing your pills. More than three times a day? There'd better be a reason, and it had better be good.
So don't be scared as you watch your compounds disappear after giving them to the animals. You want that. Just not too quickly, and not too slowly, either.
+ TrackBacks (0) | Category: Drug Development | Pharma 101 | Pharmacokinetics
July 15, 2007
Over at the entertaining culture-blog 2Blowhards, the comments to this post (on people who feel deficient in math ability) include a mention of proteomics, which prompted Michael Blowhard to say:
"'Proteomics' -- even the word is scary. I wonder how people in the field are going to communicate the substance and importance of what they're up to to civilians ... A challenge, I guess."
A challenge that I'm willing to take up! It's not my exact field, of course, but close enough. I'm starting a new category for posts like this, when I (and the readership here, in the comments) try to explain some technical buzzword-laden area in language that intelligent non-scientists can profit from. So. . .proteomics.
The place to start, most likely, is where the word came from. It's a direct steal from "genomics", the study of genomes, which are the total DNA sequences of a species (or individuals of a species). Back a few years ago when the human genome was being sequenced for the first time (all the individual A T C G letters being read off), it became clear that the number of genes that humans carry around was very much on the low side of what most people expected. (The human genome, as we have it today, is a composite - the number of people in the world who have had their complete genomes read can be counted on one hand. That's going to change drastically in the years to come as the process gets cheaper, faster, and more useful).
The reason why people expected more genes relates to what a gene is: a stretch of DNA that's read off (transcribed) and turned into a specific protein. That's DNA's job; it's a set of coded instructions to make proteins. But, as it happens, we have a lot more different proteins than we have genes. Clearly, something more happens downstream of the DNA part of the process.
A lot of things happen, actually. Those first-made proteins get altered in all sorts of ways. The same protein can be folded into different shapes, for starters (we're just now recognizing how important a process this is in some diseases). Proteins can also be clipped into smaller ones by many different routes, and at any stage they'll be decorated with molecular tinsel like sugars and lipids and phosphates. All of those can totally change a protein's function. This gives you some idea of where all that diversity is coming from - and why sequencing the human genome, huge and necessary accomplishment though it was, was nowhere near the end of the story.
Proteins spend their time interacting with other proteins. If you think of a cell in your body as a large irregularly shaped bag, full of intricate (and somewhat squishy) 3-D jigsaw pieces which are constantly sluicing around assembling or sliding past each other, you'll have a pretty reasonable idea of what it's like in there. Any given cell will contain thousands upon thousands of different proteins, many of which are doing multiple jobs depending on the time and place. Proteomics is the attempt to understand which proteins are doing what, when, with whom, and why.
It hardly needs saying, but we're just at the very beginning of that study. We have some tools to track these interactions, and they're far better than anything people had twenty or thirty years ago, but they're still rather crude compared to what we need. Huge signaling networks get uncovered and extended, and are found to touch upon others for reasons that are unclear. All sorts of feedback loops and backup systems are sketched in, and many pathways have been missed (or, alternatively, assigned too much importance) because they only operate under certain special conditions that our assays may overemphasize or skip entirely.
This project is much harder than the deciphering of the genome, and will take much longer. But that's because it's much closer to the real-time workings of a living organism, which means that comprehension, when it comes, will be still more valuable. Really substantial sums are being spent on this stuff, along with serious brainpower and computing resources. Progress will be jerky, irregular, infuriating, and of very great interest indeed.
+ TrackBacks (0) | Category: Pharma 101