About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship during his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
February 29, 2008
I was mentioning chromatography last week, and I’ve been running several columns this week myself. I’m doing them the new-fangled-with-sufficient-funds way, which has been the standard in the drug industry for many years. You buy the columns of silica gel pre-packed and plug whatever size you need into a machine. Then you load your sample on, tell it what solvents you want to use, put in a rack of test tubes, hit the button and go do something else.
You can set the machine to collect all the solvent that comes off, or (if your compound absorbs ultraviolet light) to only collect when something UV-active starts coming out the other end. Twenty minutes or so later, you come back to a rack of fractions and a printed map showing which UV-active peaks are in which tubes. All this is very nice, and would have caused me to faint with desire if I’d seen it when I was in grad school – not that I could have, since plug-and-play systems like this weren’t on the market back then.
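(For the programmers in the audience: the UV-triggered collection logic is simple enough to sketch. Here's a toy version in Python; the threshold, volumes, and tube size are made-up numbers for illustration, not anything from a real instrument's firmware.)

```python
def collect_fractions(uv_trace, threshold=0.05, ml_per_reading=2.0, tube_ml=20.0):
    """Toy model of a flash system's UV-triggered fraction collector.

    uv_trace: absorbance readings, one per fixed volume of eluent.
    Eluent goes to a collection tube whenever the signal is above the
    threshold (switching tubes as each fills) and to waste otherwise.
    Returns a list of (start_reading, end_reading) pairs, one per tube.
    """
    tubes = []
    start, current_ml = None, 0.0
    for i, absorbance in enumerate(uv_trace):
        if absorbance >= threshold:
            if start is None:                  # peak begins: grab a fresh tube
                start, current_ml = i, 0.0
            current_ml += ml_per_reading
            if current_ml >= tube_ml:          # tube full: close it out
                tubes.append((start, i))
                start, current_ml = None, 0.0
        elif start is not None:                # signal dropped: close the tube
            tubes.append((start, i - 1))
            start, current_ml = None, 0.0
    if start is not None:                      # run ended mid-peak
        tubes.append((start, len(uv_trace) - 1))
    return tubes
```

Run on a pretend trace with two peaks, it hands back two tubes and a map of where each one came from, which is essentially the printout the machine gives you.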
The standard way was (and is, in less well-funded environments) to slurry up your silica gel in solvent, pour that into a glass column, push the solvent through with a stream of air or nitrogen pressure from the top (usually holding it down by hand to keep things from getting out of control), loading your mixture, and eluting fractions into test tubes or Erlenmeyer flasks until you’re sure that your stuff is all out. I wouldn’t want to guess how many columns I’ve run by hand like that over my lab career, but it’s been several years since my last one, and I don’t see another one in my future, with any luck. (For those of you who want to see how it's done, and have twenty minutes to spare, the folks at MIT will tell you all about it).
It’s hard to mess up the automated systems, although you should never underestimate the ingenuity of the user base. But the hand-run columns can easily be loused up in all sorts of ways. Perhaps the most spectacular I’ve seen was when the guy across the hall from me in grad school, who I’ll call “Bob”, since that was his name, decided to run a big column using DMSO as the solvent.
Most of my chemistry readership will have just looked at the screen and said “He did what?” DMSO is a mighty odd choice for a chromatography column. It’s a strong, strong solvent, for one thing, and would mostly just be expected to dissolve everything and sweep it right out the other end. And it’s thick and viscous, too, compared to the solvents that reasonable people use, which means that it would be no fun to get it to come out that other end at a reasonable rate.
But that was Bob’s choice, and he was working on a bunch of nasty, insoluble stuff, so DMSO seemed like a good idea to him at the time. But he didn’t run his column as I’ve described above. He was of another school of setting up columns – apostates, if you ask me – which advocated packing the silica gel into the column dry and running solvent through it before loading the sample. (That always seemed to take longer and use up more solvent, as far as I could tell).
It was a particularly ill-suited method for running a big honking DMSO column. DMSO, as you’ve probably never had the chance to notice, has rather exothermic solvation behavior with silica. In non-PhD language, it gets very hot, very quickly, when it wets the dry powder. So when Bob started, against all odds and a lot of common sense, to force a big bolus of DMSO down his dry column, things shortly got out of hand. Next door, I heard a big “POW!” and ran over to see what it was this time.
There was Bob, staring with dismay at the remains of his column, which had cracked and spewed DMSO-soaked silica all over the bench. In retrospect, he’s lucky that it didn’t shrapnel all over the place. As it was, he had an awful mess to clean up. I never got around to asking him just what he was going to do with all the DMSO fractions he would have taken off the column - evaporating that stuff off is no joke, although it's another problem that yields to sufficient funding. But, then, if sufficient funds had been available back then, Bob never would have been running a column in DMSO in the first place. . .
Category: How Not to Do It
February 28, 2008
Science has coverage of a diagnostic test for the APOE gene that’s coming into the market. For about $400, you can find out what form of the protein you have. The problem is, the main thing this test is good for is telling you that you have a greater-than-average chance of developing Alzheimer’s disease, which raises the question of whether it’s good for anything at all.
Most of the people quoted in the article have their doubts, which I share. Since we really don’t have any decent therapies for Alzheimer’s, what’s the good of knowing that you’re at greater risk for it? The only exception I can think of is mentioned by law professor Henry Greely of Stanford: if you’re homozygous for APOE4, you’re about 15 times more likely to develop Alzheimer’s. That gets into the range where you might want to make some long-term plans. Still, yikes – think of getting those results back.
About 2% of the population could potentially open that envelope. A further 25% are heterozygous for the gene, which corresponds to maybe 3 times the usual risk. That combination of a large number of people with a smaller level of risk seems to me to put it in the “not worth it” category. The psychological distress would seem to outweigh any benefit. Personally, as someone who makes his living with his memory and his brain, I’d be horrified, and to no good end. (And I’m a pretty even-keeled person, as my wife, who does the worrying in the family, will testify). It’s to the point that there’s even been a study following up the psychological reaction to the news of the test. It didn’t show anything alarming, apparently, but the sample was from people with a family history of Alzheimer’s.
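For the number-minded, the back-of-the-envelope version of that argument is easy to run. The only figure below that isn't from the article is the baseline lifetime risk, which I've set to an illustrative 5% for non-carriers; the real number depends heavily on age and population.

```python
# Figures from the article: ~2% homozygous at ~15x risk, ~25%
# heterozygous at ~3x. The baseline is an ASSUMED non-carrier
# lifetime risk, picked only to make the percentages concrete.
baseline = 0.05

groups = {
    "APOE4 homozygous":   {"freq": 0.02, "relative_risk": 15},
    "APOE4 heterozygous": {"freq": 0.25, "relative_risk": 3},
}

for name, g in groups.items():
    absolute = baseline * g["relative_risk"]
    print(f"{name}: {g['freq']:.0%} of people, "
          f"~{absolute:.0%} assumed lifetime risk")
```

With those made-up-but-plausible numbers, the homozygous 2% face a genuinely scary figure, while the heterozygous 25% get a bump that's hard to do anything useful with; that's the large-number, small-risk combination in question.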
No, I think that I’d have to be at least twenty years older to consider taking such a test at all, and even then I’d only want to know if I turned out to be homozygous, which I suppose I could be. (My kids, being Arkansas-Iranian hybrids, have a decreased chance of being homozygous for much of anything). I was going to say that I’d also like to know if I turned out to have no APOE4 allele at all, but quickly realized that those stipulations would end up telling me my status no matter what.
Anyway, here’s hoping that in twenty years we have something more useful to offer to people in that position. And here’s hoping that Smart Genetics, the company that has licensed the test and is bringing it to market, handles it responsibly and resists the temptation to sell fear and uncertainty for a profit. But the article’s quote from the company’s CEO, Julian Awad, isn’t encouraging: “We saw there was a big growth” in genetic testing and believed “there was something there for adding value to what people wanted,” he says. I’m still working out what that sentence might actually mean, but I’m not sure I like it. Perhaps it’s just my aversion to business-speak.
Category: Alzheimer's Disease
February 27, 2008
There’s an interesting analysis in the latest PLoS Medicine on the clinical effectiveness of four modern antidepressant drugs: Prozac (fluoxetine), Effexor (venlafaxine), the partially discontinued Serzone (nefazodone), and Paxil (paroxetine). The authors compared all the published placebo-controlled studies on these drugs, and further included all the regulatory filing data. (Update: not so! See below). The result made headlines all over the place yesterday, because one of the things they found was that these drugs hardly seem, compared to placebo, to do anything at all.
Here’s the odd part: that shouldn’t have been such a big surprise. It wasn’t surprising to the authors of the paper – in fact, they started with the belief that this would be the case, because that analysis has been done before. Their interest was in seeing if there was some difference between different populations of depressed patients – is there some group for which the drugs really show efficacy or not?
As it turns out, there is, but perhaps not for the reasons you’d think. The most severely depressed cohort do seem to show a statistically meaningful response, but that seems largely because the placebo group’s response goes down. That’s been the difficulty with antidepressant clinical trials forever: there is a huge placebo response. This isn’t news; people have been studying this effect and trying to figure out what it means (or figure out a way around it) for years.
So, what does this do to the whole popular culture around the SSRI drugs – you know, “Listening to Prozac”, “Prozac Nation”, all that sort of thing? In this case, popular culture probably has it wrong. These drugs are not magical happy pills, but “Placebo Nation” just doesn’t have the same ring to it. The whole subject is too tangled to make for a catchy title.
It makes sense, though, that this is the area of drug discovery where the biggest placebo effect would turn up – you’d have to think that for depressed patients, a big step would have to be the thought that something can actually affect their condition. It’s bound to help for them to believe (correctly) that their moods aren’t necessarily part of the drab fabric of the universe, but depend instead on the (changeable) chemical weather inside their brains. Knowing those things, and the act of taking a medication that is supposed to work, is enough to help between a quarter and a half of depressed patients right there.
The actual mechanism of the placebo effect is a field of great interest and potentially great importance. (See here, here, here, and here). News like this makes a person wonder, though: if large parts of the public become convinced that antidepressant drugs don’t work, will they? And the question remains: do the SSRI drugs do anything at all through their supposed chemical mechanisms? (It's not like we know).

One way to find out would be to run a placebo versus placebo trial. You could blind things at the start, even though everyone was getting the same sugar pills, and you’d presumably see the same response in each group. Then you unblind and cross everyone over, telling people that they’d been in one group and were now headed to the other. Careful work would give you four study arms: (1) people who responded to placebo, and who were then told they’d been taking sugar but were now getting the real drug, (2) people who responded and were told that they were taking a real drug but were now being switched off of it, (3) people who didn’t respond, but were told that this was because they’d been taking sugar, but help was now on the way, and (4) people who didn’t respond, and were told that they’d been getting (apparently ineffective) drug, but were now coming off even that. Fascinating stuff, but we’re going to have to wait for the North Koreans to set it up for us, because no other regulatory agency would let it through.
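For what it's worth, the bookkeeping of that thought experiment is easy to simulate. Here's a sketch in Python with a completely invented response rate; the whole point of the design is that nothing but expectation ever differs between the arms.

```python
import random

def placebo_crossover(n=10000, p_respond=0.35, seed=42):
    """Toy simulation of the placebo-vs-placebo crossover described above.

    Everyone gets sugar pills in both phases, so phase-1 "response" is
    just a coin flip at the assumed placebo-response rate. The four arms
    are defined by that outcome plus the sham story told at crossover:
    "you were on sugar, now you get drug" vs. "you were on drug, now
    you're coming off it". Returns patient counts per arm.
    """
    rng = random.Random(seed)
    arms = {1: 0, 2: 0, 3: 0, 4: 0}
    for _ in range(n):
        responded = rng.random() < p_respond      # phase-1 placebo response
        told_drug_next = rng.random() < 0.5       # which story they're told
        if responded:
            arms[1 if told_drug_next else 2] += 1
        else:
            arms[3 if told_drug_next else 4] += 1
    return arms
```

Phase two would then be the interesting part: under the null hypothesis every arm responds at the same rate again, so any spread between them would be pure expectation effect.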
But from this latest analysis, we can conclude something interesting. The fact that the placebo effect diminishes in the most severely depressed patients, but that the drugs continue to show the same level of efficacy, suggests that they do have some effects of their own. To me, that’s the real news from this study. It reminds me of G. K. Chesterton’s line about journalism being the business of saying “Lord Jones Is Dead” to people who never knew he was alive. In this case, the headlines have been “Antidepressants Don’t Work”, but that should have been the headline years ago. This one should have come in as “Antidepressants Might Actually Do Something”.
Update: A closer look, as suggested in the comments section, shows that the trials included in the meta-analysis were mostly quite short (six weeks or less), when a good deal of evidence would suggest that these drugs take longer to become truly worthwhile. And there is only one study on moderately depressed patients, making it hard to draw conclusions about that group. See the comments page on the article here for more criticisms. So, do antidepressants work or not? You can find an answer that fits, no matter what you need it to be. . .
Category: Clinical Trials | The Central Nervous System
February 26, 2008
In a comment to my post on putting out fires last week, one commenter mentioned the utility of the good old sand bucket, and wondered if there was anything that would go on to set the sand on fire. Thanks to a note from reader Robert L., I can report that there is indeed such a reagent: chlorine trifluoride.
I have not encountered this fine substance myself, but reading up on its properties immediately gives it a spot on my “no way, no how” list. Let's put it this way: during World War II, the Germans were very interested in using it in self-igniting flamethrowers, but found it too nasty to work with. It is apparently about the most vigorous fluorinating agent known, and is much more difficult to handle than fluorine gas. That’s one of those statements you don’t get to hear very often, and it should be enough to make any sensible chemist turn around smartly and head down the hall in the other direction.
The compound is also a stronger oxidizing agent than oxygen itself, which puts it into similarly rare territory. That means that it can potentially go on to “burn” things that you would normally consider already burnt to hell and gone, and a practical consequence of that is that it’ll start roaring reactions with things like bricks and asbestos tile. It’s been used in the semiconductor industry to clean oxides off of surfaces, at which activity it no doubt excels.
There’s a report from the early 1950s (in this PDF) of a one-ton spill of the stuff. It burned its way through a foot of concrete floor and chewed up another meter of sand and gravel beneath, completing a day that I'm sure no one involved ever forgot. That process, I should add, would necessarily have been accompanied by copious amounts of horribly toxic and corrosive by-products: it’s bad enough when your reagent ignites wet sand, but the clouds of hot hydrofluoric acid are your special door prize if you’re foolhardy enough to hang around and watch the fireworks.
I’ll let the late John Clark describe the stuff, since he had first-hand experience in attempts to use it as rocket fuel. From his out-of-print classic Ignition! we have:
“It is, of course, extremely toxic, but that's the least of the problem. It is hypergolic with every known fuel, and so rapidly hypergolic that no ignition delay has ever been measured. It is also hypergolic with such things as cloth, wood, and test engineers, not to mention asbestos, sand, and water – with which it reacts explosively. It can be kept in some of the ordinary structural metals – steel, copper, aluminium, etc. – because of the formation of a thin film of insoluble metal fluoride which protects the bulk of the metal, just as the invisible coat of oxide on aluminium keeps it from burning up in the atmosphere. If, however, this coat is melted or scrubbed off, and has no chance to reform, the operator is confronted with the problem of coping with a metal-fluorine fire. For dealing with this situation, I have always recommended a good pair of running shoes.”
Sound advice, indeed. I'll be lacing mine up if anyone tries to bring the stuff into my lab.
Category: Things I Won't Work With
February 25, 2008
My piece on Merck last week seems to have touched a few nerves, if some of the comments and e-mails I’ve received are any sign. To clarify things: I agree that Merck is still doing some excellent science, as they always have. And they still have a lot of good people there, as they always have. Those aren’t the problems. And they’re still introducing some innovative drugs, arguably more than a lot of other companies, and that’s not the problem, either. These are all admirable things.
And Vioxx, as I said here at the time, was not, in my opinion, necessarily a bad drug. It and the other COX-2 inhibitors have a real place in the pharmacopeia. The problem is that Merck – or, to put the usual face-saving perspective on it, Merck’s marketing department – oversold the stuff. The prospect of an aspirin-sized market was too much for them to resist, so the company pushed Vioxx just about as hard as they possibly could.
Yep, Vioxx was for all kinds of patients, all kinds of pain, all the time – and under those conditions, whatever side effects were there were finally revealed. It’s the company’s bad luck (not to mention the bad luck of their patients) that those effects were as potentially severe as they were. Even so, the increased risk of a heart attack with Vioxx use is extremely small in any absolute sense. For people with severe pain who can’t get relief with other drugs, I think a COX-2 inhibitor is absolutely worth it.
But that’s not what you’d think from reading the newspapers, or from listening to the lawyers. It was expedient to paint the company as a bunch of callous poisoners; Merck’s reputation has been hooked to the back of a pickup truck and pulled through a swamp. (They didn't always do themselves much good during that period, either). And while the good name was bouncing off the tree stumps and scooping up the mud, the company had to spend vast amounts of money to deal with all those lawsuits, which is money that presumably could have been used for something else. (OK, some of that is coming from insurance – but think of how much more they’ll be paying for that coverage now).
Which is what worries me about taranabant. I realize, as several commenters to the previous post pointed out, that it may well differ in selectivity and CB-1 receptor activity from rimonabant. If the compound is an inverse agonist instead of an antagonist at the receptor, that could well be good news. Or, you know, it might not be, since we have no idea of what an inverse agonist will do, either. (More on the difference between those terms in a future post). At any rate, discovering new things about human CNS functions while a bunch of lawyers watch doesn’t sound like a good idea. If Merck does end up going down the Vioxx path again, another run through the swamp will do it no good at all.
Category: Diabetes and Obesity | The Dark Side
February 21, 2008
Courtesy of Steve Ley’s group, here’s a lab trick I’d never come across before. They were trying to purify a nasty mixture of closely related isomers, and found that the best chromatographic separation came from a long, long, run in ether/hexane. I’ve been in that situation myself, but it’s hard to have the patience to run a large column for such a long time, and it’s even harder to evaporate down the ridiculous amounts of solvent that you generate. (Even experienced organic chemists tend to underestimate how long that last part can take).
Ley’s group hit on an interesting solution. They loaded the crude material from a 42-gram reaction onto silica gel, and hooked a water-cooled condenser up to the top of the column. Under the condenser was a one-liter flask of 1:1 ether/pentane, heated to reflux. Those two solvents form an azeotropic mixture (about 1:1) that happens to match up well with the solvent brew needed for the column. This way, fresh solvent was continuously dripping down through the column, which was rigged to elute back into the flask of boiling solvent.
Chemists will recognize this as a variation of the Soxhlet extraction, and a rather ingenious one. To switch fractions, you turn off the heat, pour out the 1-liter flask, and charge it up with fresh pentane and ether. The solvents are so low-boiling that the material coming off the column doesn’t decompose while it’s cooking around in there in between. With one kilo of silica gel, they ran the column at about 80 mL per minute, and cut fractions about every 7 hours. (Told you it was a slow column!). After five days of this, they’d separated out their isomers. That took them out to 19 fractions, which seemed to be enough, but it turned out that washing the column with acetone furnished a pretty good amount of the final (most polar) component (which was presumably coming out very dilute by that point).
They used about 17 liters of solvent, which is a fair amount of rota-vapping, but is nothing compared to the 590 liters that would have been used under normal column conditions. (No one would have been able to put up with that). This idea will probably always have limited application – there are only so many solvents (or solvent mixtures) that can be used, for one thing. And in many cases people will grit their teeth and turn to large-scale HPLC when it’s available. (That’ll use more solvent than this, but less than an old-fashioned column, in most cases). But if someone had thought of this technique back in, say, 1955, it would have been everywhere.
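The arithmetic behind that comparison checks out from the figures above (80 mL/min, roughly five days of elution, nineteen cuts of the one-liter flask):

```python
# Solvent bookkeeping for the recirculating column, using the figures
# quoted in the post. The run length is rounded to five full days.
flow_ml_per_min = 80
run_minutes = 5 * 24 * 60                  # five days, continuously

# A conventional column discards everything that comes off the bottom:
conventional_liters = flow_ml_per_min * run_minutes / 1000

# The Soxhlet-style rig only consumes what you pour off at each cut:
# nineteen charges of the one-liter reservoir flask.
recirculating_liters = 19 * 1

print(f"Conventional: ~{conventional_liters:.0f} L")
print(f"Recirculating: ~{recirculating_liters} L")
```

That works out to roughly 576 L for the conventional case, in line with the ~590 L figure, while the nineteen one-liter charges line up with the ~17 L actually consumed, give or take however the flask was recharged in practice.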
And it could still be especially useful in academic labs, where labor is cheaper than solvent, and worth considering elsewhere. I’m always glad to see something new constructed out of the sort of equipment that’s in the drawers of every lab bench.
Category: Chemical News
February 20, 2008
A recent item from InVivoBlog about Merck brought up some interesting points. They aren’t cheerful ones. The article is largely about Merck’s reputation, which has taken some dents in recent years, to put it lightly. The Vioxx debacle is the main reason for this, but the hits have kept on coming, such as the latest controversy over the release of the disappointing Vytorin study data.
So, although this is a painful question, perhaps it needs to be asked: remember when Merck was above all that stuff? Maybe there should be a “seemed” in that sentence somewhere; that might take some of the sting away. But the company really did have a singular reputation at one time. Depending on your point of view, you could have used words like “insular” or “arrogant” to describe the culture over there, but they were distinctive.
Merck didn’t merge with anyone. They stuck with targets and projects for years and years if they thought something would come out of them. And (until Vioxx) they avoided the sorts of disasters that seemed to hit other companies. That’s gone. Not all gone – they still seem to run on longer timelines over there – but one of the most distinctive things about the company was how it guarded its reputation, and that seems to have slipped down the list. They didn't have to do ad campaigns like this one. The company's trying to convince people, or convince themselves, that things haven't changed, but they're wrong.
The other thing that struck me about the article was about the development of the company’s CB-1 antagonist. That’s the same mechanism as rimonabant, Sanofi-Aventis’s failed wonder drug for obesity. (OK, it’s on the market as Acomplia in several countries, but considering what people had thought it would do, it’s a failure, all right). I question Merck’s judgment in pushing another compound into that area, although these programs do take on a life of their own. And as the In Vivo post points out, Merck’s current reputation of pushing every drug as hard as possible won’t help it when it comes to getting the drug through the FDA.
The biggest problem with rimonabant was the comparison of its side effects to its efficacy. It does seem to help people lose weight, although not to any startling extent, but in a large patient population various psychiatric side effects showed up. Taranabant's side effect profile isn't yet clear. Merck is going to have to tread lightly, but can they? The situation is a bit too much like Vioxx, with a huge, lucrative market out there if you can just expand the patient population. And we can argue about just how bad Vioxx really was, and about its risk/benefit ratio, but that won't change the fact that it was a catastrophe for Merck. The last thing they need is another one. I don't think I would have picked this time to push another CB-1 antagonist forward, but I suppose we don't get to pick that sort of thing. . .
Category: Diabetes and Obesity | Drug Development | Drug Industry History | The Dark Side
February 19, 2008
No post today - I'm taking an extra day to the long weekend, and going in with my kids to see the lizards at the Museum of Science in Boston. Just wanted to let everyone know that nothing is down, and things will be back to their usual levels tomorrow!
Category: Blog Housekeeping
February 15, 2008
Lab fires don’t happen as often as you might think, at least to hear the way organic chemists talk. We all have alarming stories of alarming reactions (often set up by some rather alarming labmates), but these things are harvested over a fairly broad range of experience. It’s a familiar enough topic that I can remember someone sitting down at lunch while we were swapping lab stories and saying “Oh, this conversation. . .”
But happen they do, and it’s always worth taking a couple of minutes to think about what you do in such a situation. That depends on the fire, of course. For starters, a small one burning out of the neck of a flask can be put out quickly just by slapping a beaker over the top of it. Never neglect that possibility, because it’s fast, effective, and (truth be told) if no one saw you do it, no one necessarily has to know that your (minor!) fire even happened.
Larger ones aren’t going to be so easy, but there are some potential ways out of those, too. My wife had a labmate in her molecular biology department who was always setting off blazes with the ethanol she used to wipe things down with. (This person neglected to turn off the Fisher burner used for sterilizing wire loops, etc., before she started sloshing the alcohol around). A fire like that will just burn itself out if you close the hood sash and let it rip for a few seconds, as long as you’re sure that there’s no fuel source (like the wash bottle of ethanol you might have chucked in there in a moment of panic, for example).
Most chemistry hoods, though, have all too many sources of fuel in them, so you probably won’t be able to put out a blaze through benign neglect. If it comes to a fire extinguisher, make sure you already know where the nearest one is, for starters. You'd be surprised how hard it is to find one of the darn things when you really need it. And once you've found it, make sure that you know which kind you’re using. The carbon dioxide ones don’t make the horrible mess that the dry-chem ones do, which is one thing in their favor, although I think in general they’re a bit less effective. You can tell the difference immediately – the carbon dioxide ones have the big nozzle on them, while dry-chem is a short, plain hose. My lab is outfitted with the latter, which makes me wish more than ever that we never have to use them.
And if you happen to have halon extinguishers (are those still around?), make a note of that, because the technique you may have learned for using the other ones won’t work. Instead of coming in and aiming at the base of the fire, with halon you have to stand further back and let the stuff shower down on it. A colleague of mine once blew the contents of a flaming oil bath all over the lab because he hadn’t been trained in that distinction.
The safety people always tell you that if you’ve used up one extinguisher and the fire still isn’t out, to head for the door rather than reach for a second one. That’s probably good advice (although I’ve seen it disregarded), and I’d advise you to take it. Actually, I’d advise you never to have that decision to make at all, but that’s not always up to you. You may be doing nothing but adding sodium sulfate to a bunch of dichloromethane today, but who knows? The guys next door might be gearing up for Trimethylaluminum Fiesta Days. You never can tell.
Category: Life in the Drug Labs
February 14, 2008
I’ve been reading an interesting paper from JACS with the catchy title of “Optimization of Activity-Based Probes for Proteomic Profiling of Histone Deacetylase Complexes”. This is work from Benjamin Cravatt's lab at Scripps, and it says something about me, I suppose, that I found that title of such interest that I immediately printed off a copy to study more closely. Now I’ll see if I can interest anyone who wasn’t already intrigued! First off, some discussion of protein tagging, so if you’re into that stuff already, you may want to skip ahead.
So, let’s say you have a molecule that has some interesting biological effect, but you’re not sure how it works. You have suspicions that it’s binding to some protein and altering its effects (always a good guess), but which protein? Protein folks love fluorescent assays, so if you could hang some fluorescent molecule off one end of yours, perhaps you could start the hunt: expose your cells to the tagged molecule, break them open, look for the proteins that glow. There are complications, though. You’d have to staple the fluorescent part on in a way that didn’t totally mess up that biological activity you care about, which isn’t always easy (or even possible). The fact that most of the good fluorescent tags are rather large and ugly doesn’t help. But there’s more trouble: even if you manage to do that, what’s to keep your molecule from drifting right back off of the protein while you’re cleaning things up for a look at the system? Odds are it will, unless it has a really amazing binding constant, and that’s not the way to bet.
One way around that problem is sticking yet another appendage on to the molecule, a so-called photoaffinity label. These groups turn into highly reactive species on exposure to particular wavelengths of light, ready to form a bond with the first thing they see. If your molecule is carrying one when it’s bound to your mystery protein, shining light on the system will likely cause a permanent bond to form between the two. Then you can do all your purifications and separations, and look at your leisure for which proteins fluoresce.
This is “activity-based protein profiling”, and it’s a hot field. There are a lot of different photoaffinity labels, and a lot of ways to attach them, and likewise with the fluorescent groups. The big problem, as mentioned above, is that it’s very hard to get both of those on your molecule of interest and still keep its biological activity – that’s an awful lot of tinsel to carry around. One slick solution is to use a small placeholder for the big fluorescent part. This, ideally, would be some little group that will hide out innocently during the whole protein-binding and photoaffinity-labeling steps, then react with a suitably decorated fluorescent partner once everything’s in place. This assembles your glowing tag after the fact.
A favorite way to do that step is through an azide-acetylene cycloaddition reaction, the favorite of Barry Sharpless’s “click” reactions. Acetylenes are small and relatively unreactive, and at the end of the process, after you’ve lysed the cells and released all their proteins, you can flood your system with azide-substituted fluorescent reagent. The two groups react irreversibly under mild catalytic conditions to make a triazole ring linker, which is a nearly ideal solution that’s getting a lot of use these days (more on this another day).
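If it helps to see the logic laid out, here's a deliberately cartoonish Python sketch of the workflow; the protein names and the all-or-nothing wash-off behavior are invented for illustration, not taken from the paper.

```python
# Toy model of the tag-then-click workflow sketched above.

def abpp_workflow(proteins, probe_targets, crosslink=True):
    """Return the set of proteins that end up carrying the fluorophore.

    proteins: everything present in the cell
    probe_targets: the subset the probe actually binds
    Without the photoaffinity crosslink, the probe washes off during
    workup and nothing stays labeled; with it, bound proteins keep the
    small acetylene handle, and the azide-fluorophore click step after
    lysis lights them up.
    """
    bound = {p for p in proteins if p in probe_targets}
    labeled = bound if crosslink else set()   # no covalent bond, no label
    return {p + "+fluor" for p in labeled}    # the click step adds the dye
```

Run on a pretend lysate, only the crosslinked binders glow; skip the photoaffinity step and you fish out nothing, which is the whole reason for carrying both appendages around.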
So, now to this paper. What this group did was label a known compound from Ron Breslow's group at Columbia that targets histone deacetylase (HDAC) enzymes: SAHA, now on the market as Vorinostat. There are a lot of different subtypes of HDAC, and they do a lot of important but obscure things that haven’t been worked out yet. It’s a good field to discover protein function in.
When they modified SAHA in just the way described above, with an acetylene and a photoaffinity group, it maintained its activity on the known enzymes, so things looked good. They then exposed it to cell lysate, the whole protein soup, and found that while it did label HDAC enzymes, it seemed to label a lot of other things in the background. That kind of nonspecific activity can kill an assay, but they tried the label out on living cells anyway, just to see what would happen.
Very much to their surprise, that experiment led to much cleaner and more specific labeling of HDACs. The living system was much nicer than the surrogate, which (believe me) is not how things generally go. Some HDACs were labeled much more than others, though, and my first thought on reading that was “Well, yeah, sure, your molecule is a more potent binder to some of them”.
But that wasn’t the case, either. When they profiled their probe molecule’s activity versus a panel of HDAC enzymes, they did indeed find different levels of binding – but those didn’t match up with which ones were labeled more in the cells. (One explanation might be that the photoaffinity label found some of the proteins easier to react with than others, perhaps due to what was nearby in each case when the reactive species formed).
Their next step was to make a series of modified SAHA scaffolds and rig them up with the whole probe apparatus. Exposing these to cell lysate showed that many of them performed fine, labeling HDAC subtypes as they should, and with different selectivities than the original. But when they put these into cells, none of them worked as well as the plain SAHA probe – again, rather to their surprise. (A lot of work went into making and profiling those variations, so I suspect that this wasn’t exactly the result the team had hoped for - my sympathies to Cravatt and especially to his co-author Cleo Salisbury). The paper sums the situation up dryly: "These results demonstrate that in vitro labeling is not necessarily predictive of in situ labeling for activity-based protein profiling probes".
And that matches up perfectly with my own prejudices, so it must be right. I've come to think, over the years, that the way to go is to run your ideas against the most complex system you think that they can stand up to - in fact, maybe one step beyond that, because you may have underestimated them. A strict reductionist might have stopped after the cell lysate experiments in this case - clearly, this probe was too nonspecific, no need to waste time on the real system, eh? But the real system, the living cell, is real in complex ways that we don't understand well at all, and that makes this inference invalid.
The same goes for medicinal chemistry and drug development. If you say "in vitro", I say "whole cells". If you've got it working in cells, I'll call for mice. Then I'll see your mice and raise you some dogs. Get your compounds as close to reality as you can before you pass judgment on them.
Category: Biological News | Drug Assays | Drug Development
February 13, 2008
I’m going to be working up an Arbuzov reaction this morning, which is an odd thing for me to say. That’s because to the best of my recollection, which is pretty good, I’ve only run any of those during one period in my lab career. That was back in grad school, along about 1985, I’d say. I hope this one proves more useful than that one did – I was trying to make some dimethyl diazomethylphosphonate, and the prep was a relentless barrage of No Fun. (The first part of the sequence was identical to this).
I keep a list in my head of songs that I’ve only heard one time (no, I don’t appear to be normal, thanks for asking), and perhaps it’s time for me to assemble a list of reactions that I’ve only run once. That’s a tougher one, because if a reaction fails, you may well run the thing again. Still, I’ve only done one hydrogenation at 2000 psi with rhodium on alumina (July 3, 1984, and it looked like used lawnmower oil afterwards, I should add), and I’ve only used samarium iodide one time (and it didn’t work). But for a longer list I might have to settle for some things that I ran for a brief period and never have since.
The Claisen rearrangement would fall into that class, for sure. A feature of my early grad school work, I’ve never had the need to run one since. I can't think of the reaction without smelling ethyl vinyl ether in my memory, which is not a feature, in case you're wondering. I did a lot of carbohydrate reactions back then that I haven’t had the need to return to, either – Ferrier rearrangements being just one of them. And, like many other chemists, I had a brief photochemistry period, in my case during my post-doc, and have never run one of those again, either. Others that enjoyed their day in the sun and have never been seen again in my hood are the Prins reaction, nitrone cycloaddition (not since I was an undergrad in 1983), Lindlar hydrogenation, and the Henry reaction.
The thing is, any of these could make a comeback at any time. They’re still all perfectly reasonable reactions, and depending on what comes out of the next high-throughput screen or literature search, I might be setting one up next week. You never know. But there are some reactions that I think I’ve said goodbye to forever. In some cases, that’s because better alternatives are now available - I mentioned here that I haven’t used PCC for oxidations in years, and I think that one’s been pretty much superseded.
Others are history because I either very much doubt I’ll have the need for them, or because I just flat out Don’ Wanna. For example, I made Dess-Martin periodinane three times on a hundred-gram scale, during a period in the early 1990s when it wasn’t commercially available, and I plan, with any luck, never to do that again. The prep has been improved since those days, but that explosive intermediate was never something I enjoyed seeing. I don’t think I’ll be synthesizing fluorosulfonic acid starting from hydrofluoric acid any time soon, either. I did that one as an undergrad, too, if you can believe that – this guy must have had confidence in me, which I’m not at all sure was warranted by the evidence at hand. Nor do I foresee any need to make Fremy’s Salt from scratch again. (You can see someone else do it here, though - the internet amazes me sometimes). And if I never do another reaction that requires half a mole of phosgene, that’ll be fine with me, too.
Category: Life in the Drug Labs
February 12, 2008
Manipulating nanoscale objects is a very hot research area these days, but no one’s quite sure whether it should be called physics or chemistry. The single-atom stuff (like the famous 1989 spelling of I-B-M using an early scanning tunneling microscope tip) would probably be the former, while moving whole molecules around would probably be the latter.
Now we’re to the point where you might consider it biology, since several recent papers describe ingenious uses of DNA as nanoscale pliers and Velcro. A report in Science from a group in Munich demonstrates a nanoscale depot on a chip, formed by short DNA strands bound to its surface. Various molecules are tagged with complementary single strands of DNA. When you bring the two close enough, they hybridize, winding together spontaneously into a small double helix, which Velcros each molecule down to a defined position.
The second key to the work is that each of the molecules has a second, different DNA strand bonded to its other side. This one is complementary to a single strand attached to the tip of an atomic force microscope, so when that moves in close enough, those two hybridize as well. For the moment, the target is bound front and back.
But here's the trick: the two DNA helices are engineered so that the double helix on the bottom opens base-by-base, like a zipper, while the one on the AFM tip shears off all at once. That gives them different strengths, so when you pull up on the AFM tip, you can see the force profile of the "zipper" strand giving way as the attached molecule pulls free. Now it's dangling from the tip of the AFM, its newly freed DNA strand waving in the, uh, nano-breeze, I guess.
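That "different strengths" trick can be sketched numerically. The toy model below uses illustrative numbers of my own (not from the paper): unzipping a duplex base-by-base takes a roughly constant force regardless of length, while rupturing a duplex in shear geometry takes a force that grows with the length of the overlap, so the same pull always breaks the zipper-geometry bond first.

```python
# Toy model of the two DNA rupture geometries described above.
# All numbers are assumed, illustrative values, not measurements.

def unzip_force_pN(n_bp):
    """Rupture force in the 'zipper' geometry: roughly length-independent."""
    return 15.0  # pN; a typical unzipping plateau, independent of n_bp

def shear_force_pN(n_bp, per_bp=3.0):
    """Rupture force in shear geometry: grows with overlap length (toy linear model)."""
    return per_bp * n_bp

# A 20-bp anchor on the chip (zipper) vs. a 20-bp handle on the AFM tip (shear):
anchor = unzip_force_pN(20)   # ~15 pN
handle = shear_force_pN(20)   # ~60 pN in this toy model
# Pulling up breaks the weaker attachment first, so the molecule
# leaves the chip and rides away on the tip:
assert anchor < handle
```

The design choice is the whole point: the same duplex chemistry gives two very different mechanical strengths depending only on which end you pull from.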
The molecule was then moved to another portion of the chip, where more DNA strands awaited. These, like the tip strands, were also in the stronger "shear" geometry, but these were even longer, with more residues to wrap up with that free DNA strand on the molecule of interest. Lowering the two into proximity caused them to hybridize, and now pulling up on the tip caused the tip strand to unwind instead, leaving the molecule stuck on the new location on the chip. The AFM tip could then be sent back to the depot to pick up another molecule, and so on. (The illustration, courtesy of Science for nonprofit use, will give you the idea). The fluorescent molecules they used could then be imaged on the chip, confirming that they'd been arranged as expected.
The whole process took care, as you can imagine. The team kept the number of DNA strands on the tip quite low, in order to have a better idea of what was going on. Under their conditions, about one-third of the time they picked up just one unit from the “warehouse”, and another twenty per cent of the time they got two at once. In the dropoff step at the new location, they sometimes noticed that no extra force was needed to pull the tip up, which indicated that they hadn't made a connection. In those cases, a shift of the tip assembly a few nanometers one way or another generally brought things within range for a successful transfer. It's not like you can see what's going on - light itself doesn't come small enough to let you do that in the normal sense - so you just have to feel your way along.
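As a back-of-the-envelope aside (my own check, not an analysis from the paper): those pickup fractions are about what you'd expect if tip-molecule encounters were random and roughly Poisson-distributed. A mean of about 1.2 molecules per attempt reproduces both reported numbers:

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k events when the mean count is lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

lam = 1.2  # assumed mean pickups per attempt, fitted by eye
p1 = poisson_pmf(1, lam)  # ~0.36, close to the reported one-third
p2 = poisson_pmf(2, lam)  # ~0.22, close to the reported twenty per cent
```

Nothing in the paper requires this model; it's just a quick way to see that the single- and double-pickup rates are mutually consistent with random grabbing.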
This is an early proof of concept, so it's not like we're going to be assembling nanomachines next week through this technique. (The DNA tags, for one thing, are rather large compared to the molecules that they're attached to). But the idea is there, and the idea works. We're starting to move single molecules around to where we want them to go, and making them stay put once they've been delivered.
Category: Chemical News | General Scientific News
February 11, 2008
One of the first projects I ever worked on when I started in industry was targeting Alzheimer's disease. Things could have easily worked out to find me still targeting Alzheimer's disease, nearly twenty years later, because the standard of care really hasn't advanced all that much in the intervening years.
It's a hard, hard area to work in. CNS programs are always difficult, since we understand less about the brain's workings than those of any other organ, and since the brain's own blood supply is another barrier to getting a drug through to do anything. And Alzheimer's has tough features on top of that, since (for one thing) we're the only animal that gets the disease, and (for another) the clinical trials needed to show efficacy can be hideously long, large, and expensive. And the underlying biochemistry has been a tangle, too: I've said for years that if you'd told me back in 1990 that people would still be arguing in 1999 (or 2002, or 2007. . .) about whether amyloid caused Alzheimer's or not, I probably would have buried my head in my hands.
Well, it's 2008, and the arguments may finally get settled. There's a report in Nature from a group at Harvard who did an experiment that's simultaneously brute-force and elegant. The elegant part was monitoring live brain cells in mutant mice as amyloid protein deposited among them - and the brute force part was that this monitoring involved surgically implanting a small window into their skulls to do it.
What they found was that the characteristic amyloid plaques of Alzheimer's can form startlingly quickly - on a time scale of hours. This is beyond what anyone had suspected, for sure. And the further pathologies (microglia, etc.) that form around the plaques definitely come later, settling a long-standing dispute. There's always the worry that the mouse model (which was engineered to develop amyloid within the brain) might not reflect the human disease, but this is pretty compelling (and alarming) stuff.
If this is even close to what's going on in humans, a therapy that tries to prevent amyloid formation or deposition is going to have some real work to do. We'll be finding that out, though, and good luck to everyone involved. . .
Category: Alzheimer's Disease | The Central Nervous System
February 8, 2008
There’s an excellent article in Nature Reviews Drug Discovery that summarizes the state of the HDL-raising drug world. It will also serve as an illustration, which can be repeated across therapeutic areas, of What We Don’t Know, and How Much We Don’t Know It.
The last big event in this drug space was the catastrophic failure of Pfizer’s torcetrapib, which wiped out deep into Phase III, taking a number of test patients and an ungodly amount of money with it. Ever since then, people have been frantically trying to figure out how this could have happened, and whether it means that the other drug candidates in this area are similarly doomed. There’s always the chance that this was a compound-specific effect, but we won’t know until we see the clinical results from those others. Until that day, if you want to know about HDL therapies, read this review.
I’d guess that if you asked a thousand random people about that Pfizer drug, most wouldn’t have heard about it, the same as with most other scientific news. But many of those who had might well have thought it was a cholesterol-lowering drug. Cholesterol = bad; if there’s one thing that the medical establishment has managed to get into everyone’s head, that’s it. The next layer of complexity (two kinds of cholesterol, one good, one bad) has penetrated pretty well, but not as thoroughly. A small handful of our random sample might have known, though, that torcetrapib was designed to raise HDL (“good cholesterol”).
And that’s about where knowledge of this field stops among the general population, and I can understand why, because it gets pretty ferocious after that point. As with everything else in living systems, the closer you look, the more you see. There are, for starters, several subforms of HDL, the main alpha fraction and at least three others. And there are at least four types of alpha. At least sixteen lipoproteins, enzymes, and other proteins are distributed in various ratios among all of them. We know enough to say that these different HDL particles vary in size, shape, cholesterol content, origin, distribution, and function, but we don’t know anywhere near as much as we need to about the details. There’s some evidence that instead of raising HDL across the board, what you want to do is raise alpha-1 while lowering alpha-2 and alpha-3, but we don’t really know how to do that.
How does HDL, or its beneficial fraction(s) help against atherosclerosis? We’re not completely sure about that, either. One of the main mechanisms is probably reverse cholesterol transport (RCT), the process of actually removing cholesterol from the arterial plaques and sending it to the liver for disposal. It’s a compelling story, currently thought to consist of eight separate steps involving four organ systems and at least six different enzymes. The benefits (or risks) of picking one of those versus the others for intervention are unknown. For most of those steps, we don’t have anything that can selectively affect them yet anyway, so it’s going to take a while to unravel things. Torcetrapib and the other CETP inhibitors represent a very large (and very risky) bet on what is approximately step four.
And HDL does more than reverse cholesterol transport. It also prevents platelets from aggregating and monocytes from adhering to artery walls, and it has anti-inflammatory, anti-thrombotic, and anti-oxidant effects. The stepwise mechanisms for these are not well understood, their details versus all those HDL subtypes are only beginning to be worked out, and their relative importance in HDL’s beneficial effects are unknown.
At this point, the review article begins a section titled “Further Complications”. I’ll spare you the details, but just point out that these involve the different HDL profiles (and potentially different effects) of people with diabetes, high blood pressure, and existing cardiovascular disease. If you’re thinking “But that’s exactly the patient population most in medical need”, you are correct. And if it’s occurred to you that this could mean that an HDL drug candidate’s safety profile might be even more uncertain than usual, since you won’t see these mechanisms kick in until you get deep into the clinical trials, right again. (And if you thought of that and you don’t already work in the industry, please consider coming on down and helping us out).
Much of the rest of the article is a discussion of what might have gone wrong with torcetrapib, and suffice it to say that there are many possibilities. The phrases “conflicting findings”, “remain to be elucidated”, “would be important to understand” and “will require careful analysis” feature prominently, as they damn well should. As I said at the time, we’re going to learn a lot about human lipidology from its failure, but it sure is a very painful way to learn it.
And that is the state of the art. This is exactly what the cutting edge of medical knowledge and drug discovery looks like, except for the fact that cardiovascular disease is relatively well worked out compared to some of the other therapeutic areas. (Try central nervous system diseases if you want to see some real black boxes). This is what we’re up against. And if anyone wants to know how come we don’t have a good therapy yet for Disease A or Syndrome B. . .well, this is why.
Category: Cardiovascular Disease | Clinical Trials | Drug Development | Toxicology
February 7, 2008
A couple of years ago, I wrote about electronic lab notebooks, and pointed out how much better they've made my record-keeping. My new job also uses an electronic platform, to my relief, and if anything it's better implemented than the one I was using before. It's clear to me that software lab notebooks are the only way to go. Drawing the structures, setting up duplicate or related experiments, attaching all the data files from LC/MS and NMR, the ease of retrieval for patent filing purposes, the ability to search structures across a whole organization's experience - there's no substitute. (One thing they don't handle well, though, is TLC data, which I was just talking about - anyone have a solution for that?) But that aside, going back to paper would be agonizing; a directive to use hardbound notebooks would induce terror and dismay.
Still, both of the electronic notebooks I've used are in-house jobs. I've had some mail wondering if I have any recommendations among the commercially available software, and that's a question I can't help out with at all. So I thought I'd throw this one out to the readership: what's worked for you? And how much did it cost? Is there anything open-source that'll do the job? (I've heard of Wetlab and OS-ELN, but know nothing more about them).
And here's another question, which is more of a poll. Are you using paper or pixels for your notebook? If you answer in the comments, which I'm glad to report seem to be working again, mention what kind of work you do and if it's in an academic or industrial setting. I'm curious to see if the expected correlations show up. . .
Category: Life in the Drug Labs
February 6, 2008
You know you’re getting older when techniques that you used to use constantly are in danger of becoming lost arts. The one that I’m thinking about today is thin-layer chromatography, or TLC. This is a classic lab method, taught to generations of undergraduates and used by untold hordes of working organic chemists. And it’s slowly on the way out.
Before we go into what’s killing it, a brief bit of background for the non-chemists in the crowd. To do a TLC, you take a plate of glass (or something else stiff, like thick aluminum foil) that’s been coated with a thin layer of some finely ground solid. The usual choice is silica gel, which is basically very pure, very finely ground sand. In its powdered state, it resembles a slightly grittier form of corn starch. In the old days, you’d spread this stuff out on the plates yourself, but it’s been twenty years since I saw anyone do that. During my whole career, you’ve been able to buy them premade, in all sorts of variations.
Then you take a drop of your mixture and put it on the silica layer, down near the bottom of the plate. Once it dries, you stand the thing up in a beaker or jar that has some solvent in the bottom - the idea is to wet the plate at the bottom, but not so far up that it rinses off your spot. As the silica gel layer wets, the solvent creeps up the plate. And (as in all the other forms of chromatography), the various compounds in your mixture will travel faster or slower, depending on their interactions with the silica versus with the solvent. A strong polar solvent (methanol) will tend to whip everything up with the solvent front, and a wimpy one (hexane) will tend to leave everything back at the start. Adjusting the solvent mixture can give you a spread of spots up and down the plate once you've let it run for a bit, and you can see those with a UV lamp, or by dipping the plate in some reagent that will generate colored material from your compounds. Excellent pictures of the process can be found here.
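The spread of spots is usually quantified as an Rf (retention factor): the distance a spot traveled divided by the distance the solvent front traveled, so 0 means the compound stayed at the origin and 1 means it ran with the front. A minimal sketch, with made-up distances for illustration:

```python
# Rf (retention factor) for a TLC spot: spot distance / solvent front distance.
# The distances below are invented illustrative numbers, not real data.

def rf(spot_distance_cm, solvent_front_cm):
    """Retention factor, from 0.0 (stayed at the origin) to 1.0 (ran with the front)."""
    if not 0 <= spot_distance_cm <= solvent_front_cm:
        raise ValueError("a spot cannot travel farther than the solvent front")
    return spot_distance_cm / solvent_front_cm

# A polar compound lagging behind and a less polar one running ahead,
# on a plate where the front moved 6.0 cm:
print(rf(1.5, 6.0))  # 0.25
print(rf(4.5, 6.0))  # 0.75
```

That single number is what lets two chemists compare plates run on different days, as long as the solvent system is the same.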
TLC is cheap, fast, convenient, and can be run in untold different variations. So what's killing it? Something even faster, more convenient, and more powerful: liquid chromatography/mass spectrometry. That was just barely possible when I was in grad school, and was expensive and tricky when I was in my early years in the industry. But now the machines, while still not cheap, are everywhere, and they're used in walkup mode. Just enter your data - or link it over from your electronic lab notebook - put your sample vial in the rack, and go away. What you get back is a better separation than TLC can give you, and every peak/spot now can be checked for the masses of the compounds in it. You can ask all the possible questions, such as "which peak has the mass I'm looking for?", or "What the heck is the main mass in that peak, anyway?". The mass spec gives you more information than you can deal with, and it's all stored digitally for your later perusal and second thoughts.
This trend has been coming on for years now, but it's reached a very noticeable point. Even a comparatively old-school guy like me hardly runs TLC plates any more. Once in a while, I'll need to, but mostly, it's just "throw it on the LC/MS". And I get the impression that people coming through grad school now are losing the finer points of TLC completely. And why not? They've never had to worry about them at all. . .
Category: Life in the Drug Labs
February 5, 2008
Commenting appears to still be hosed around here, which is a shame, because I have some ask-the-readership posts stacked up. Writing posts under these conditions feels like shouting into a void! I hope things will be fixed soon, but it's quite a tangle behind the scenes.
Time is short today, at any rate, so here's a link to an image that I found simultaneously exciting and unnerving. There's a large project going on to make the world's best electron microscope, through several simultaneous improvements in the electron beam's shape and brightness, refinement of the detectors, damping vibrations in the sample stage, and so on.
So here's the latest. Those are two gold crystalline domains meeting each other at the corner - and those ping-pong balls are the gold atoms. You can clearly see them arranging to meet each other's packing structure at the interface, and if you look to the edges you can see some depth data as well. Those resolutions (well below one angstrom) are real, by the way, and the damn instrument is only about half done.
The group reports that when they scan a sample multiple times, they can see individual gold atoms moving around between images. The next steps will include moving to lower-energy electrons for use in biological samples, and I can't even guess what we'll see then. More on the project here.
Category: General Scientific News
February 4, 2008
The topic of “me too” drugs has come up quite a bit around here over the years. For the most part, I’m a defender, although there are some places I draw the line (Clarinex for Claritin comes to mind as a particularly useless advance). A reader pointed out the amount of advertising he’s seeing for Aciphex (rabeprazole), another proton pump inhibitor for gastrointestinal reflux and general stomach acid problems, and wonders what side of the line this one is on.
I’m already on record as wondering just how much of an advance Nexium (esomeprazole) was over its racemic form, Prilosec (omeprazole), so the bar is set pretty high in this area already. Looking through PubMed, I find numerous comparisons between the drugs, which suggest some small differences in PK. Some earlier studies suggested that rabeprazole works more quickly over a course of therapy, but this is in dispute. There do seem to be differences in drug interactions between the various drugs in this category, which could be important in older patients who are already taking other medications. (Protonix (pantoprazole) may also be in this category). Perhaps reflecting this, this study found that rabeprazole was "significantly more effective" in elderly patients.
So, overall, there do seem to be some differences between the various drugs in this category (summarized here, among other places). In most cases, though, they're pretty much the same. This looks like a fight among near-equals, with the occasional tiebreaker going to one compound or another. This would explain why the ads for the various compounds are pretty interchangeable as well, featuring people holding their stomachs when confronted with a plate of barbecue. (Living in the Boston area, I can understand that reaction to some of the local stuff, but that's another topic).
As I say, I generally defend the idea of several drugs entering the same therapeutic space. In theory, and for the most part in practice, each new entrant provides something that the previous ones didn’t, and can thus carve out a space for itself. In this case, though, the differences between these drugs, though real, are comparatively small and subtle. There are patients who benefit from the number of choices in this category, but not as many as the advertising dollars would suggest. The whole proton-pump inhibitor market seems to be a fight among near-equals to carve up a large and lucrative market. The parallels that occur to me are the markets for SUVs and laundry detergent.
That means that it's a fight made for the big companies, for one thing. A smaller outfit would have been crazy to get into the PPI arena without a big partner. Given the current state of the art, it would seem crazy for anyone else to be contemplating an entry at all, unless they have some huge advance that they can demonstrate clinically. The existence of so many PPIs is not a scandal - yet - but it's not a glorious chapter in the history of drug research, either.
+ TrackBacks (0) | Category: "Me Too" Drugs | Business and Markets
February 1, 2008
As many of you have noticed, there's been some software upheaval behind the scenes here the last few days. Many comments aren't getting through at all, and the others are showing up in the system, but not on the public site. Hammers and screwdrivers are being wielded, and I hope things are fixed up soon. . .
Category: Blog Housekeeping
1. You’ve got a compound repository, right? Lots of vials, robot retrieval systems if you’ve got the cash, all that stuff? What fraction of those vials is full of sticky stuff that's colored warning shades of dark orange, red, or soy-sauce brown? And of those, what fraction has had colorfulness overtake it while in your repository racks, as opposed to the stuff that arrived looking that way? Bonus question: are you aware of any cranberry-red wonder drugs, and have you ever heard of anyone formulating a big manufacturing batch for Phase III while checking to make sure it’s the right shade of brown?
2. You’ve got some molecular modelers, right? And you ask them to try to dock some of your compound ideas into your favorite binding sites? OK, first question: how many times have any of them come back to you saying that they can’t fit something in? If the answer is “never”, you have a problem with your modelers. Second question: if you do get told that your compound doesn’t seem to dock, do you keep going down the hall until you find someone who can jam your idea into the model? In that case, the problem is closer to home.
3. You’ve got biologists on your project, right? In vitro assay people, then the in vivo group, ready to test whatever makes it that far? So, how much compound do they ask for? And how much of that do they plan to actually use? When they ask, how much compound do you tell them you have on hand (or can make)? And what fraction of what you really have or can make is that, exactly? Depending on the ratio between these various answers, you can either have no problems or you can be living with quite a few different ones simultaneously.
Category: Life in the Drug Labs