About this Author
College chemistry, 1983
The 2002 Model
After 10 years of blogging. . .
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
November 26, 2002
You may have noticed the recent report about the creation of an engineered stress-resistant rice. This looks like a real triumph of plant biotechnology, and it's the best news I've seen in some time.
The idea behind this work has been floating around for some time (and the Cornell team that succeeded has itself been at it since at least 1996.) There's a sugar called trehalose (chemically, two glucose molecules attached roughly head-to-head) that's been known for many years as a biological preservative. Everything that can survive severe drying, from yeast and bacteria on up, seems to produce this stuff under stress. The best guess about its function is that it replaces the water that would normally surround key proteins and cell membranes, and stabilizes them until (and during) rehydration. Trehalose tends to form a noncrystalline glassy solid state, which probably is what happens inside the cells. (By the way, it's completely nontoxic, and found in many foodstuffs already.)
Plants have been engineered to produce it before, but there have been problems. If the sugar is produced all through the plant, there's often some stunted growth or other odd effects - these plants showed drought tolerance, but that correlated pretty well with how weird they looked. One key was to get the gene expressed only in chloroplasts, the chlorophyll-containing organelles that do the metabolic heavy lifting (in the same way that mitochondria do it outside the plant world.) That has the added advantage of making the gene(s) much harder to transfer to other plants in the wild.
The Cornell team managed to get a lot of control over how the sugar is expressed - with different genetic promoters, they can cause it to show up in different parts of the rice plant, or under different conditions (only under stress, for example.) That and an improvement in the gene that was used seem to have done the trick.
So what sort of rice plant is this? One that actually seems to do a better job of photosynthesis, for reasons that aren't really clear yet. One that can take salt-water conditions, withstand temperatures ten degrees lower, and survive ten-day droughts. Any of these will kill a normal rice plant, but these survive. This is going to open up huge marginal areas to cultivation.
Do you suppose the European Union will ban these plants? Can you just see activists pulling them out of the ground? Do you think it's any coincidence at all that this result was realized in a country that sets researchers free? The Cornell group has already announced that they're going to release this technique to the public domain, as a benefit to mankind. The same technique looks to be applicable to corn, soybeans, wheat - you name it. If the promise of this work is realized, a huge step has been taken to alleviate human suffering. I'm as happy as I can be about this, and I'd like to salute the people who made it happen. And to take a little time to reflect about what tremendous things can be accomplished, even here at the beginning of our knowledge. . .
+ TrackBacks (0) | Category: General Scientific News
November 25, 2002
After some years in the drug discovery business, I have a few modest requests to make. To start with, I think that the existing periodic table is too limited, with a number of elements that clearly were left out. I have a shopping list of what's needed to complete the set, and here's number one:
We need something electron-donating that's no bigger than a fluorine. Medicinal chemists, when they hear that, usually say "Hmmm. . .yeah!" That is, when they're not telling me I'm nuts. For those outside the field, I'll explain: when you want to substitute something for a hydrogen on a carbon atom, fluorine's usually your boy, because it's not that much larger, and you get a lot of other effects as well.
Untold numbers of useful things have been made this way. The carbon-fluorine bond is a lot stronger than the C-H bond it replaced, and the fluorine has some odd properties all its own. (You go from polyethylene to teflon when you make that substitution, to pick one example - I wouldn't recommend buying a polyethylene-coated frying pan unless you want your next batch of scrambled eggs to take on a completely new dimension.) The lower reactivity means that fluorocarbons generally don't burn worth a hoot, and it lets med-chem folks like me stick fluorine atoms all over our structures to keep the liver from ripping them to shreds. Not even the liver seems to be able to tear up a fluorocarbon.
(That low reactivity is also why freon-type compounds were assumed to be completely inert for so long. As it turns out, much of the ozone-depletion reaction they set off is caused by their carbon-chlorine bonds breaking apart in the hard UV light of the upper atmosphere. You really have to beat the heck out of a compound to break C-F bonds, but C-Cl will turn on you.)
Of course, all these interesting properties come at a cost. Fluorination reagents tend to be a bit exotic and expensive, for one thing. That's because they're mostly a way to put a leash on elemental fluorine itself, and a strong leash it had better be, because it's about as nasty as it gets. Plain fluorine will start in on basically every compound there is, and plenty more: it'll make steel wool, for example, burst into flame, so you don't even want to think about what it'll do to your hand. Whatever it does, it does with a huge release of energy; it's just too happy to pitch in. Ask Berthelot, the French chemist who tried to make fluorocarbon compounds in the 1800s by using the same sorts of reactions that worked for chlorine. He blew himself up several times in a row; how he survived is a real mystery. Most of the early fluorine investigators poisoned themselves with the stuff to some degree.
You also pay with some physical properties you might not want. Fluorine owes its weirdness to the way that it crazily pulls electronic charge off of any atom it's attached to. (You can practically hear the electrons being sucked over across the bonds - well I can, anyway, but maybe I've been doing this stuff too long.) That's all fine, but what about the times when you don't want all the electron density piling up on one end of your molecule? Wouldn't it be good to have something that pours charge back into the system, but keeps that handy fluorine size?
Well, I can tell you that if we had this element available, there would soon be some pretty puzzled enzymes and receptor binding sites out there. Medicinal chemists would be putting it on everything and making compounds these poor proteins never even dreamed of. We spend a lot of time adjusting polar groups on our molecules to make them tuck into binding pockets more tightly - something like this would give us work to do for many years. For starters, I'd take all the compounds that got worse when I fluorinated them, and put this stuff on instead. Couldn't hurt.
And there's nothing but the laws of physics to keep us from having it. Maybe some other universe is built so that they have my element in it. If so, there are probably some fellow chemists sitting around wishing that they had something like fluorine. . .
+ TrackBacks (0) | Category: Life in the Drug Labs
November 24, 2002
Craig Venter and his wife Claire Fraser had a controversial effort going a few years ago, known as the "minimal life" project. The question was: what's the smallest number of genes an organism can have and still function? They got reasonably far along with it, then shelved it for a number of reasons. I'd seen interviews with Venter recently where he was mentioning it again, so the news that the project is underway again didn't come as a shock.
You wouldn't want to look for a minimal gene set starting from something as complex as a mammal - or any decent-sized organism, for that matter. They picked a really small single-celled creature called a Mycoplasma. It's a good choice, because they're pretty minimal organisms to start with - they're so small (and can be so hard to detect) that over the years they've been a major pain in the neck as a contaminant in cell culture labs. The species that they chose, for example, has only 517 genes. The plan is to knock them out systematically, one at a time, and see if the resulting organism can survive (and if it can, what its limitations might be.) It's a large project, but not a prohibitive one - before the project was mothballed in 1999, they'd already narrowed the list down to about 300 genes.
There could be some complications: for example, depending on the order in which things get knocked out, you could end up assuming that a particular function is non-removable when in fact it could be part of a system that has to be taken out all-or-none. These patterns will probably become apparent as the work goes on, and should provide some interesting information.
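That all-or-none pitfall is easy to see in a toy model. Here's a minimal sketch (the gene names and viability rule are entirely made up for illustration - this is not the actual Mycoplasma screen) of how a one-gene-at-a-time knockout survey can flag a dispensable pair of genes as essential:

```python
# Toy genome: one genuinely essential gene plus an interdependent pair
# (think toxin/antitoxin): the cell tolerates having both or neither,
# but losing just one of the pair is lethal.
GENOME = {"core", "p1", "p2"}
ESSENTIAL = {"core"}  # the only gene that's truly non-removable

def viable(genes: set) -> bool:
    """Invented viability rule for the illustration."""
    if not ESSENTIAL <= genes:
        return False                       # lost a truly essential gene
    return ("p1" in genes) == ("p2" in genes)  # pair is all-or-none

# One-at-a-time screen: keep any gene whose single knockout is lethal.
single_ko_essential = {g for g in GENOME if not viable(GENOME - {g})}
# The screen flags all three genes as essential...

# ...yet the pair can in fact be removed together:
pair_out_viable = viable(GENOME - {"p1", "p2"})
```

Here the single-knockout screen reports `{"core", "p1", "p2"}` as non-removable, so the "minimal" set carries two genes it doesn't need - exactly the kind of pattern that should emerge as the knockouts accumulate.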
Once they get down to the minimal instruction set, we get into the territory that unnerves people. The stretch of DNA that we're talking about is still going to be long, but in relative terms it might well be short enough to produce in a DNA synthesizer. If you do that, and toss it into a cell that's had all its genetic material stripped out, you could bootstrap your own organism. That'll be weird in two ways: it'll be a new species, made up on the spot, and it'll have been made, to some extent, from reagents on the shelf.
There shouldn't be anything unsettling about that, but to many people there is. It's true that a virus has recently been produced in the same way, but most people (including me) don't really think of viruses as being living organisms. This experiment doesn't bother me, partly because they're still relying on all the already-built cellular machinery to accept the new genetic material. Building a whole cell from scratch would be a much greater effort, one that I really don't think anyone can swing yet. But they will, at some point. . .vitalism dies hard: it's going to be interesting to see the press coverage when it finally bites the dust in wide-screen stereo sound.
Editorials are already appearing - the link takes you to the Washington Post, which seems not to have been keeping up with the pace of molecular biology. It's a little late to worry about "a living thing that is at least partly a human creation," guys. The bacteria that have been engineered into making interferon and insulin for us for years now are partly a human creation, you know, as are uncounted recombinant cell lines throughout academia and industry. If you want to get picky about it, chihuahuas and sweet corn are partly human creations, too: we just took longer to make those because we didn't have very good tools.
Some of that coverage is going to be breathless what-if-the-new-life-form-escapes stuff, no doubt. I think the answer should be clear to anyone who's thought about the biology: what will happen is, the organism will be outcompeted very quickly and die out. Think about it - if it were easy for an organism to survive in the wild with such a small genetic code, there'd be some critters out there doing it. Perhaps when life was just getting going it was possible to get away with it, but not now, after billions of years of fine-tuning. As Neal Stephenson colorfully puts it near the beginning of his Cryptonomicon:
Like every other creature on the face of the earth, (he) was, by birthright, a stupendous badass, albeit in the somewhat narrow technical sense that he could trace his ancestry back up a long line of slightly less highly evolved stupendous badasses to that first self-replicating gizmo - which, given the number and variety of its descendants, might justifiably be described as the most stupendous badass of all time. Everyone and everything that wasn't a stupendous badass was dead.
No, this is going to be one finicky creature, able to survive only where everything is built for its pleasure. For that reason, I'm not sure about its usefulness, either, as a platform for adding new functions. Venter's been talking about using this to engineer an organism that will be able to make hydrogen for fuel cells, or take up carbon dioxide to ameliorate greenhouse warming. I'd have to classify both those reasons as, well, hooey. There are already plenty of organisms that will take up carbon dioxide. You probably have some big ones growing in your back yard, and they're a lot more robust than this organism will be. It'll still be frail enough to be a tremendous headache to culture and keep happy, unless you start adding stuff back into it to make it more robust and free-living. And if you're going to do that, why not start off with something that's already been optimized for being robust and free-living? Don't get me wrong: there are a lot of good scientific reasons to do this work. I just worry that the explanations offered for the general public are - at the very least - inadequate.
The whole project could be explained in terms of cars and trucks: what we have here is an attempt to disassemble a small car down to the most primitive conveyance possible, by removing parts one by one until nothing extraneous remains. This stripped-down go-cart will indeed be a new vehicle, one that's so simple that it could be built from things lying around the house, stuff that you wouldn't normally associate with cars at all. People who think that you need a huge factory to build a car will be amazed. But this thing won't stand a chance on the open road, and will probably barely make it around your back yard on a warm day. You'll be able to turn it into any sort of motorized thing you want, by adding stuff back on to it - but you might be better off taking one of the more complicated, capable cars that zip around on the main roads and working on that instead. (It's been a while since I came across a metaphor I could, uh, ride that far. . .)
+ TrackBacks (0) | Category: General Scientific News
November 21, 2002
There's now a nice review of Timothy Ferris's new book Seeing in the Dark by Freeman Dyson, who's a scientific hero of mine. (That's a Salon link, so get it while you can. . .) The theme of the book is how amateur astronomers are more and more able to make contributions at the forefront of the science. (They've always been doing so, of course, but it's gotten much more possible in the last ten or twenty years. The areas that amateurs can have an impact in have expanded, as well.)
Dyson goes on to talk about the division between fact-gathering and theorizing, and how the balance between the two changes as a science matures:
"It appears that each science goes through three phases of development. The first phase is Baconian, with scientists exploring the world to find out what is there. In this phase, amateurs and butterfly collectors are in the ascendant. The second phase is Cartesian, with scientists making precise measurements and building quantitative theories. In this phase, professionals and specialists are in the ascendant. The third phase is a mixture of Baconian and Cartesian, with amateurs and professionals alike empowered by the plethora of new technical tools arising from the second phase. In the third phase, cheap and powerful tools give scientists of all kinds freedom to explore and explain."
This sounds about right to me. Dyson also makes a point about how science in both East and West stalled out for many centuries through an imbalance between these two approaches: in the West, theorizing held sway and grubbing for facts was seen as irrelevant (think of the hold of the old Greek texts and of religion.) In the East, the Chinese and the Islamic world accumulated a good deal of interesting data, and happened on some incidental technology along the way, but didn't spend much time trying to develop theories that could have extended the research.
(Incidentally, I've seen this imbalance at work in my own field. One research project I worked on was run under conditions where you had to have a rationale for almost any new compound series you tried. I spent most of my time, like everyone else, exploring around things that we knew worked well, but I always reserved time for trying out things just because no one had tried them before. Unfortunately, messing with some part of the molecule just because we had no idea of what would happen wasn't seen as a good enough reason - you had to have some theoretical underpinning. Arrogant foolishness, considering what the theoretical state of medicinal chemistry is like.)
So where do the various sciences stand in their development? Dyson again:
"Astronomy, the oldest science, was the first to pass through the first and second phases and emerge into the third. Which science will be next? Which other science is now ripe for a revolution giving opportunities for the next generation of amateurs to make important discoveries? Physics and chemistry are still in the second phase. It is difficult to imagine an amateur physicist or chemist at the present time making a major contribution to science. Before physics or chemistry can enter the third phase, these sciences must be transformed by radically new discoveries and new tools."
He's got that right. At the moment, you really need some serious equipment to go after most of the unusual stuff in either field, but I have to say that he might be on to something. Chemical instrumentation is becoming smaller and more self-sufficient all the time, and if the trend continues, it's possible to imagine a wealthy amateur having a high-field NMR and HPLC-mass spec capabilities in the basement. Zoning laws permitting, of course. (Actually, that sounds like a lot of fun, but maybe it's just the "wealthy" part I'm thinking of.)
Dyson's own bet for the next science to shift is biology, and I think the point is inarguable. It's nearly a cliche in the field to be amazed at how far it's come: for years now, high school students have been doing experiments that would have frizzed the hair of the 1975 Asilomar participants. You can do PCR in your kitchen, if you're so minded. The molecular biology supply companies have been steadily making everything more out-of-the-box, selling kits and systems designed both to make the lab worker's job easier, and to make the companies more money. (There's Adam Smith's invisible hand for you. . ."It is not from the benevolence of the vendors of DNA primers that we expect the success of our hybridizations, but from their regard to their own interest.")
Dyson pictures legions of homegrown DNA tinkerers, a vision I find simultaneously thrilling and alarming. That's the authentic feeling of the future, though - it's hard for me to trust any substantial prediction that doesn't bring on those emotions. He's probably right, and we'd better keep on learning how to deal with it.
+ TrackBacks (0) | Category: Who Discovers and Why
November 20, 2002
I was talking with a friend at another company, and we both had occasion to recall executives we've heard who seemed unable to give a coherent speech. You've heard the sort of thing: unfocused thoughts drift by, like plastic bags being blown around an abandoned lot. . .no thought makes it all the way through a sentence before another one lands on top of it, splintering it with an irreversible crack. . .main points are composted under a heap of irrelevant clippings. I've spoken about this before, in reference to Sam Waksal and people who are very smooth at presenting their own work in the best possible light. This is at the other end of the scale - people who should, you'd think, be a lot slicker than they are.
How does someone get to a position like that and remain so inarticulate? This question comes up in politics as well, and in both cases I think it's because the person must be a lot better one-on-one or in small groups. There are plenty of people who can handle themselves well in a conversation who can't give a decent talk. (Not that I really can relate to that - I'm rarely tongue-tied, although I can think of a few times when I would have been better off that way.)
There are plenty of professors that bring the same question to mind, of course. I can't safely quote from the executives that I'm thinking of, but here's a sample of one of the worst professors of my experience. (Note: the subject matter has been changed to a cake recipe, to protect the guilty.)
"OK, you remember that last time we were going to learn how to mix a - well, I think I told you that we were going to try one of these, and if I didn't, then - this is a little like the stuff that we're actually going to get to next week, except that that doesn't have so many eggs in it, because eggs, well, eggs are a tricky thing because they have, they have the protein in them that makes stuff - well, that's not something that we're going to get into for a while, but at any rate you may know that the egg white has a lot of, a lot of properties that are really useful when you try to whip things up with a lot of air in them, which is, which isn't what the cake today really has, actually, because this one starts out with this mixture that I think I told you about last time - I'm not sure if we finished the entire thing, so just try to remember where we left off and sort of, sort of. . ."
I'm not exaggerating. I can round up witnesses that heard this person lecture - not on baking, but by the time he got through making a batter out of his chosen field, it might as well have been. You'd have been able to learn just as much about layer cakes as you could about the subject matter of the course. I would sit there for the entire hour and no note-taking impulse would ever trigger my hand to move. I've seen some good lecturers, and plenty of mediocre ones, but I've only come across a couple that actually could do you harm. You could feel yourself becoming less intelligent as you sat there; the only way to handle the course was to make sure to miss as many lectures as possible.
So, at what point does someone really think like they talk?
+ TrackBacks (0) | Category: General Scientific News
November 19, 2002
The Advertising column in today's Wall St. Journal has an interesting note on some recent Merck ads for their COX-2 inhibitor, Vioxx. What they've done is break their TV ads up into pieces. Why would you do that? Well, the FDA rules are that if you mention a drug and what it's good for in the same ad, you also have to list the possible bad side effects - the sort of thing that's in the package insert. And since every drug has side effects, that's a real problem for an advertising agency - finding something to do on screen while the voice-over announcer drones on quickly about flatulence, night sweats, and other appealing topics.
Roche tried this about a year ago (see below,) and Merck seems to be using nearly the same strategy. One ad shows Dorothy Hamill skating, while she talks about how sometimes she has arthritis pain in the morning - followed by an announcer saying that you should ask about new medicines that could help, and giving a phone number for Merck. The other ad has Dorothy Hamill skating, and mentions a medicine from Merck called Vioxx - with another Merck phone number to call. The drug and its intended therapeutic use never get mentioned in the same ad.
Roche got into trouble with this little innovation. They were advertising Xenical (orlistat,) which in my book is a pretty tough sell under the best of circumstances. In case you don't know the mechanism, that drug inhibits pancreatic lipase, the enzyme that's secreted into the gut to break down fat. It would inhibit most any other lipase it got ahold of, for that matter, but it doesn't make it out of the gut, so pancreatic lipase it is. No triglyceride breakdown, not much fatty acid absorption into the body, not much fat from the diet. What could be easier?
Well, that fat has to go somewhere. And in the GI tract, if it's not absorbed, there's only one place for it to go, and the consequences can be most unpleasant. Believe me, taking a pancreatic lipase inhibitor and then pigging out on a bucket of fried chicken will bring on a really unforgettable experience. I will go into no more details, in the interest of maintaining this blog's dignified facade, but will refer you to this page of side effects. Note that none of these are terms that you'd want to mention in your TV commercials if you could possibly help it, especially not ones that run during the dinner hour.
So Roche broke up their ads. One ad mentioned the word "Xenical" and the other ad mentioned weight loss. Otherwise, the two ads were extremely similar - same images, same announcer, colors, fonts. And Roche arranged to have them shown back-to-back (or nearly so) in case anyone missed the point. The FDA sure didn't, and prevailed on them to stop. Merck, on the other hand, seems to be making sure that their ads don't get run back-to-back. The FDA is apparently thinking about whether these are really separate ads, or just an attempt to get around the advertising rules.
I can save them the trouble: of course this is an attempt to get around the rules. Whether these rules should exist in their current form (or exist at all) can be debated; I get e-mail from people who say "ban 'em all" and I get some that say "let 'em all loose." But they exist right now, and there's no arguing about that. And if the FDA lets these through, they should prepare for this to become the standard advertising method for the whole industry.
+ TrackBacks (0) | Category: Business and Markets | Diabetes and Obesity
November 18, 2002
Not much time to blog tonight, and things are busy at work, too. We're wrapping up one project in my lab, trying to start another, and I'm working on a paper about another (older) one. Days like this, my brain needs some new transmission fluid.
I'm still trying to decide where to send the finished paper. Different journals require different levels of experimental detail, and if I want to send this to the Journal of Medicinal Chemistry, where I've never appeared, I'm going to have to round up some more data. Worth it, or not? It depends on how easy it'll be to find all these compounds, which have migrated to distant parts of my lab by now.
For full papers, J. Med. Chem. wants combustion analysis (% of carbon, hydrogen, and nitrogen, determined by burning the stuff - a job for a specialty lab) for at least the key compounds. I consider them more trouble than they're worth, although they can help, of course, if you're trying to see if you have a particular salt form or a stable hydrate. By far the majority of combustion analyses I've had to obtain were back during my PhD writeup, and at the rate I'm going, I think I'll be able to say that for the rest of my life. The same goes for optical rotations, and good riddance to those, too. Yep, Mister Purity, that's what they call me around the lab. . .
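For anyone outside the field wondering what you actually do with those numbers: the lab reports measured percentages, and you compare them with the theoretical values computed from your molecular formula. A quick sketch of that arithmetic (caffeine is used as an arbitrary worked example; atomic masses are standard values, rounded):

```python
# Theoretical elemental percentages from a molecular formula, for
# comparison against a combustion-analysis report. Standard atomic
# masses, rounded.
MASSES = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}

def percent_composition(formula: dict) -> dict:
    """formula maps element symbol -> atom count, e.g. caffeine below."""
    mw = sum(MASSES[el] * n for el, n in formula.items())
    return {el: 100.0 * MASSES[el] * n / mw for el, n in formula.items()}

# Caffeine, C8H10N4O2: expect roughly C 49.5%, H 5.2%, N 28.9%.
pcts = percent_composition({"C": 8, "H": 10, "N": 4, "O": 2})
```

A match within a few tenths of a percent on each element is the usual standard; a systematic shortfall can hint at a hydrate or a different salt form, which is the one situation where I'll admit the numbers earn their keep.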
+ TrackBacks (0) | Category: Life in the Drug Labs
November 17, 2002
An evergreen struggle inside a drug research company goes on between the chemists and the people who have to formulate their compounds. (By "formulate," for those outside the business, I mean "put them into something that'll allow them to be reproducibly given to animals." This can be a pill, but in research it's more typically a liquid dose of some sort.) Sometimes the chemists deal with the same biology-side lab that doses the animals, sometimes with a separate formulations group - but the interaction doesn't change much.
"Stop giving us these insoluble gumballs!"
"Well, stop trying to make everything go into water! It's an organic compound, y'know."
"Water? What's that? We haven't been able to take anything up in water around here since 1982! We're lucky if your stuff goes into boiling DMSO!"
"That wouldn't be much worse than that last vehicle you came up with - remember, that brew that killed the entire control group? You might as well have used drain cleaner."
"Hah! Even drain cleaner wouldn't dissolve that brick dust you guys keep turning out."
And so on. . .I've had quite a few conversations like this, mostly in fun. But we chemists don't always make compounds that the formulations people can live with, that's for sure. In the best possible case, your compound just goes into solution. Solutions are reproducible; something's either dissolved or it's not. A solution in plain water would be ideal, but nothing in the projects I've worked on has ever done that, or even come reasonably close. More often, there's something more organic-compound-friendly like polyethylene glycol (PEG) in there with the water. You can half-and-half those just fine for most animals, and a lot of compounds will stay in solution that way.
As folks get more and more desperate, there are additives that can help to keep a compound from crashing out. Various detergents, polysaccharides, and long-chain goos are out there, and finding the right gemisch is like working on a bake-off recipe. Some of these extras aren't tolerated well in specific animal models, though, so there are always constraints. And if you're dosing intravenously rather than orally (which at some point all projects need to do, to get some crucial data,) then the list of acceptable vehicles shortens dramatically.
The next stop for the difficult ones is a suspension. (This is only an oral dosing problem, of course: injecting a suspension into a vein is a reliable way to kill an animal, or a person.) These formulations get trickier to work with, because suspensions come in all sorts of guises: from pearly stuff that looks like shampoo to things that look like chicken noodle soup, complete with the noodles. The best way to handle these things is to find something that your compound will dissolve in, then add a defined amount of water to crash it out. That gives you the best shot at getting the same thing every time, although you have to be careful to add things in the right order, with similar stirring technique, and so on.
Starting from a powder and suspending that in some vehicle is much more troublesome. Particle size can vary widely between batches, unless you're being picky about it (which at the early research stage of things we rarely are.) You could even have made totally different solid forms of the compound in different batches, which phenomenon (polymorphism) can really induce migraines. And the time one of these suspensions sits around can really affect its behavior, as the particles start to dissolve a bit around the edges and re-precipitate, or clump together and drift down to the bottom of the vial.
If your compound works well when dosed as a suspension, though, you're still in better shape in the long term. That's a better predictor of how it'll behave as a pill or tablet (which are, after all, lumps of finely milled powder.) On the other hand, if your compound works when given as a solution but not as a suspension, then you have some serious problems that you'd better start worrying about.
+ TrackBacks (0) | Category: Drug Development
November 14, 2002
I recently mentioned the non-cholesterol effects of HMG-CoA reductase inhibitors (statins,) so I thought I'd follow up on that with a discussion of the recent news (Nature, Nov. 7) that they could be beneficial for multiple sclerosis.
The mechanism of MS is clear, up to a point. (I know, everything is clear, up to a point, but bear with me.) It's an autoimmune disease, a T-cell response to the body's own myelin sheaths around the nerves. This inflammation damages the myelin (a full immune assault damages just about anything,) and thus affects nerve impulse transmission. Over time, the neurons themselves are irreversibly damaged (or so it seems; reversal of neurological damage is a hot topic these days, and no one's sure what might be possible eventually.) The course of the disease varies a great deal from person to person, since immune systems vary, too. Current therapy can slow the progression down a bit, but nothing stops it.
The idea that statins might help in something like MS isn't actually new. The drugs have long been known to have some immunological effects: as far back as 1995 (yep, way back then) a study showed that heart transplant patients had a better outcome when pretreated with pravastatin. Since then, a number of miscellaneous signaling pathways involved in inflammation have been shown to be affected by one statin or another. (So many, in fact, that it was getting hard to sort out what was going on.)
The latest work is a very nice study using a mouse model of MS called EAE (experimental autoimmune encephalomyelitis.) It's a pretty decent surrogate for the disease, brought on by deliberate (and heavy) immunization with peptides that are close enough to myelin's surface composition to set off the autoimmune response. There are several recipes for doing that, some of which only work in specific strains of mice, which cause different types of impairment (more or less severe, chronic versus repeating, and so on.)
The statin used was atorvastatin (known to the world, and to nearby planets if Pfizer's marketing department has anything to do with it, as Lipitor.) I note without comment that one of the paper's authors was the recipient of an "Atorvastatin Research Award" from Pfizer, but their choice of this particular compound was justified. Two years ago, it was found to be more potent on immune targets in vitro.
Giving the drug before symptoms set in was effective at lessening them. In fact, the statin even helped after waiting until the peak of the illness, which is a pretty severe test. All this was confirmed on the tissue and molecular levels; the results look very solid indeed.
So how does it work? Probably not through cholesterol lowering per se. But the HMG-CoA reductase enzyme that the statins inhibit produces mevalonate, which is a molecule that does seem to have some effects on immune function. Outside of that whole pathway, statins seem to affect production (although it's not clear how) of a regulatory protein called CIITA. That one's involved in presenting antigens to helper T cells, a process very close to presenting a pack of bloodhounds with someone's dirty sock. So it could be that the T-cell attack on myelin is thrown off at the very beginning.
There are other mechanisms, not mutually exclusive. Statins have also been shown to affect a protein called LFA-1, which is known to be important for T-cell migration. Perhaps even if the T cells are on the scent, they get diverted at the last minute by this pathway. (One way to check would be to use pravastatin, which, interestingly, doesn't seem to affect LFA-1.)
Unraveling all this is going to keep a lot of people up late in the lab for some time to come. For now, atorvastatin is going into human trials on MS patients. You can bet that as the mechanism comes more into focus, drug companies will be ready to screen their compound banks again, though. Statins are a very good start in this area, but they don't have to be the last word.
Category: Clinical Trials
I've posted a correction in my original post below. And I've also located the paper that got me thinking about the whole thing. These folks knocked out a protein that's needed for nodal cilia to form in the embryo - what they got were nonviable mouse embryos that were left/right randomized. The flow produced by the normal cilia goes to the left, and they believe that this is a key left-right differentiation pathway. (You can see Quicktime movies of the cilia here.)
So although I botched the prokaryotic/eukaryotic cilia/flagella details, I think my point stands. I've had some interesting e-mail about my thoughts on human perception of chirality - as soon as I wrap my brain around them, I'll post some more. For all six or eight of you that care, that is - now that's service!
Category: General Scientific News
November 12, 2002
. . .here's a question that has bothered me: How do we know our right from our left? No, really. The more I've learned (and internalized) about chirality, the trickier this question gets. (Until you've thought about handedness and non-superimposability for a while, these things just seem natural, of course. You have to train yourself to get this weird.)
But what we learn in chemistry is that chiral objects cannot be distinguished by an achiral probe. For example, you can't use plain silica gel columns to separate enantiomers; you have to pay through the nose for columns with chiral stuff on them. The thing is, we humans use chiral probes every day (our hands,) so we take the ability to discriminate chiral objects for granted. We shouldn't, though, because the next question is: what chiral probe do we use to tell which hand is which?
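Here's a toy sketch of that point (my own construction, not anything from the chromatography literature; the coordinates and function names are made up for illustration). A "probe" that can only measure pairwise distances — the molecular equivalent of an achiral interaction — returns identical readings for a structure and its mirror image, while a probe that measures a signed volume does not:

```python
from itertools import combinations

def mirror(points):
    # reflect each point through the yz-plane
    return [(-x, y, z) for (x, y, z) in points]

def achiral_probe(points):
    # pairwise distances are unchanged by reflection,
    # so this "measurement" can't tell enantiomers apart
    return sorted(round(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5, 6)
                  for p, q in combinations(points, 2))

def chiral_probe(points):
    # signed volume (triple product) of the first four points:
    # reflection flips its sign, so this probe IS handed
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = points[:4]
    a = (x1 - x0, y1 - y0, z1 - z0)
    b = (x2 - x0, y2 - y0, z2 - z0)
    c = (x3 - x0, y3 - y0, z3 - z0)
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

# four points with no mirror symmetry, like the substituents on a chiral carbon
mol = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]

print(achiral_probe(mol) == achiral_probe(mirror(mol)))  # True: indistinguishable
print(chiral_probe(mol) == chiral_probe(mirror(mol)))    # False: the sign flips
```

The distances-only probe plays the role of plain silica gel; the signed-volume probe plays the role of the expensive chiral column.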
There, that's the original question restated. We're bilaterally symmetric, right? In stereochemistry terms, we're meso, with our own built-in reflection plane, and we shouldn't be able to distinguish chiral objects. Of course, we're not really that symmetric. We have identifying marks on each arm and hand, usually, that would give the game away. And faces often have the same sort of thing (sometimes a deliberately applied "beauty mark," which is interesting when you consider that research seems to say that the faces perceived as most beautiful are the most symmetric ones.)
But I don't think that that's the real answer to my question. Where we really start to lose symmetry is in our internal organs. As everyone knows, the heart is on the left side (except in rare cases!), and the other organs follow suit in their own positions. The organ that I'm thinking of is the brain, which looks rather symmetric, true, but is about as full of handedness as an organ can get. If you're right-handed, as is well known, you do a lot of your verbal processing in your left hemisphere, and a lot of non-verbal work in your right. And each visual field feeds into the opposite hemisphere (which has allowed some really alarming experiments with brain surgery patients that we'll have to talk about some time.)
And that's where I think the origin of our ability to perceive chirality lies. Our information-processing organ itself is chiral. But let's keep going: it's worth asking how the brain (and the rest of our internal arrangement) got that way, when you consider that we started out from a single cell (which then divided straight down the middle.) A lot of research has gone into answering that, and determining the earliest stage at which the blastula breaks symmetry.
I believe the latest theory is that molecular signals and growth factors circulate around the outside of the developing cells in a handed fashion, and that this may be the origin of the asymmetry. So where does this chiral flow come from? Well, it's driven by cilia on the cell surface, and it's well known that these always turn in one direction. (The mechanism (PDF file) is fascinating; it's a true molecular motor. You can just picture it as a Victorian-era machine, all polished brass and oiled fittings.) Correction: That's a bacterial flagellum, not a eukaryotic one. Our flagella and cilia work differently, so this picture (though still very interesting) isn't relevant.
And the components of this machine are all different proteins. Which means that the direction of their motion relative to each other is determined by their three-dimensional shape, which is determined by the twists and turns of their constituent amino acids. . .which are chiral. And we're back to single molecules again.
So that's how you can tell your right from your left, as far as I can see. Simple, really.
Category: General Scientific News
November 11, 2002
Whenever the topic of drug safety trials comes up, there's likely to be a mention of thalidomide, and rightly so. As with any such event, you find various levels of knowledge among different people, even among those who believe that they have the real information.
Stipulating that we're ignoring the (substantial) fraction of the public that's never heard of it, the next group to clear out of the way consists of those who believe that the drug caused problems in the US. That's the usual point of articles that mention it, actually - that the FDA didn't approve it for use here, while European authorities did. (This theme has repeated itself more than once, unfortunately - I'll talk about another example in a few days.)
Now we get to some really persistent mistakes. Most folks who know about drug discovery will tell you about the lessons of thalidomide as they relate to the chirality of drugs. And most of them have it wrong. The amount of misinformation on this subject is so great - just look on the web - that it'll probably never go away.
(Chirality, for those outside the field, is basically "handedness." Nonchiral objects (like balls) can be switched around freely when they interact with chiral ones - there's no right and left, and no way to mismatch them. Chiral objects, though, (like right and left shoes) aren't superimposable, and you can't substitute one type for another. On a molecular level, living creatures are chiral, because the amino acids and sugars in their cells are - see my March 19 post. Thus, right-handed and left-handed forms of chiral drugs often have quite different effects.)
Thalidomide has a chiral carbon atom, in the middle of what (by present-day standards) is a rather odd structure with two imides in it, which is two more than most folks would like to see in their drug candidates. Like almost all drugs from that era, the compound was sold as a 1-1 mixture of the right- and left-handed forms (enantiomers, to us chemists.) The mistake is the oft-repeated notion that the terrible teratogenic effects are only found in one of the two isomers - had the compound been sold as the single active enantiomer, the story has it, all the birth defects could have been avoided.
Wrong. An article in the October issue of Nature Reviews: Drug Discovery (see page 757) helps to set the record straight. There are two problems with the common wisdom, one of which is that the in vivo studies don't bear it out. It's true that one enantiomer is more teratogenic in mice than the other one, but this work involved high doses, because mice just aren't very sensitive to the compound. Humans are, though, unfortunately, and both enantiomers are equally bad in rabbits, who are similarly susceptible.
The second problem is that the mouse results are actually a surprise in themselves. The chiral center in thalidomide isn't stable under many in vivo conditions, and the compound can be converted to a mixture of both forms no matter which one you start with. In most species, you wouldn't be able to tell whether the two enantiomers had different toxic potential at all, because you'd never be able to dose only one.
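To put a number on how fast a single enantiomer scrambles, here's a back-of-the-envelope sketch. For simple first-order interconversion (R and S converting into each other with the same rate constant k), the enantiomeric excess decays as exp(-2kt). The rate constant below is purely hypothetical, chosen for illustration, not a measured value for thalidomide:

```python
import math

def ee(t, k, ee0=1.0):
    # enantiomeric excess under first-order R <-> S interconversion:
    # d([R]-[S])/dt = -2k([R]-[S]), so ee(t) = ee0 * exp(-2kt)
    return ee0 * math.exp(-2 * k * t)

k = 0.3  # per hour -- hypothetical, for illustration only
for t in (0, 1, 4, 12):
    print(f"t = {t:2d} h: ee = {ee(t, k):.0%}")
```

The point is the shape of the curve: even starting from a pure enantiomer, the excess erodes exponentially, which is why "just sell the safe isomer" was never an available option.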
Interestingly, the compound has made a comeback in recent years as a treatment for some kinds of leprosy, and it's being investigated in cancer and several other diseases. It has some unique properties. A big challenge, though, is making sure that no woman who's even possibly going to get pregnant gets near the stuff. . .
Category: Drug Industry History
November 10, 2002
While I'm on the subject, there's another problem with employee rankings, one that doesn't just apply to research organizations. I first came across a statement of it while reading Bill James, who showed how it applies to baseball teams when they decide whether to bring in some veteran player to hold down a position or go with someone from triple-A. It's an over-reliance on normal distribution. (Here's a discussion of the idea.)
People have this mental picture of the classic bell curve - bulging middle, sloping down to the sides as it tails off to the few stars on the right, the few destructive losers on the left. In a departmental performance evaluation, you end up with most people getting "Meets Expectations" or the equivalent, some that rank higher, a few that rank lower. In my experience (two large drug companies,) most of the raw evaluations come back as "Meets" or higher, and it's rare that folks get initially ranked on the low end. That gets changed as the whole department comes into focus, though, because there's often this feeling that you have to rank some people low, in the same way that there have to be some star performers.
But here's the key: the performance ranking in any organization that is free to hire and fire its own employees will not fit a normal distribution. Why should it? A normal distribution is what you'd expect from a random sample, and I'll assume that most businesses don't hire or retain their employees at random. No, what you have is most likely the far right-hand side of a much larger distribution, the performance ratings of all the people you could have possibly hired for those positions.
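That intuition is easy to check with a quick simulation (a sketch of my own, with made-up numbers, not anything from a real HR dataset): draw a big normal "talent pool," keep only the top 20% as the people you actually hired, and look at the shape of what's left.

```python
import random
import statistics

def skewness(xs):
    # sample skewness: mean cubed deviation over stdev cubed
    # (zero for a symmetric bell curve, positive for a right-leaning pile)
    m = statistics.mean(xs)
    sd = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * sd ** 3)

random.seed(42)

# everyone who could conceivably have applied: a normal distribution
pool = [random.gauss(0, 1) for _ in range(100_000)]

# hiring keeps only the right-hand tail (say, the top 20%)
cutoff = sorted(pool)[int(0.8 * len(pool))]
hired = [x for x in pool if x >= cutoff]

print(f"pool skewness:  {skewness(pool):+.3f}")   # near zero: symmetric bell
print(f"hired skewness: {skewness(hired):+.3f}")  # clearly positive: not a bell
```

The hired sample piles up just above the cutoff and trails off to the right, which is nothing like a bell curve, no matter how bell-shaped the pool it came from was.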
One big factor that keeps things from being normally distributed is the entry barrier into a technical field. For the most part, you can't be stupefyingly incompetent and get a degree from a reasonably good research group at a reasonably good school, or be a total bozo and get in the door for a job interview with a decent resume and give a competent-sounding seminar. The total washouts are mostly gone by the time you place an ad in C&E News. Any of them that do send you resumes have a tougher time getting hired, and any of them that you actually hire have a tougher time being retained.
Another factor that bends the distribution is that people are actively trying to improve their rankings from year to year (well, at least some of them are.) No one's striving to slide down the list, that's for sure. The data points of a random sample aren't being told where they landed last time and given incentives to shift to the right.
So I'd say that a realistic batch of performance rankings has a majority at a "Meets Expectations" level, and the remainder stretching out toward the higher rankings. There really shouldn't be many "Below Expectations" people at all, because the whole point of a ranking like that is that they either shape up, or you ship them out.
This also points out the folly of the Jack Welch "rank 'em and yank 'em" style of performance review. You know, find your bottom 10% and fire them all. Other companies that have tried this technique (Ford, to pick a notable example) have found that it mostly sows fear and discord. And I'm sure it did at GE, too, truth be told (although some CEOs swear by it.)
The "bottom 10%" is basically identical to the 60 or 70% of your employees that are doing just fine, minus a few people having a bad review period (a different set each time,) and minus a few genuine losers. If you seriously try to fire this illusory bottom tier, you end up having to make arbitrary, meaningless distinctions on five-page HR forms in order to distinguish them. Find the real losers and heave them out, absolutely. But don't draw a ridiculous line in the sand and then jerk people around because of it. A really good manager should be able to fire someone without hiding behind a bad policy.
Category: Business and Markets
November 7, 2002
As a follow-up to my post about over-quantification, I should really mention one of the things that managers in research organizations would most love to measure: their employees. How good are they? How productive are they? How do they rank, from one to thirty-eight?
The problem is, there's no good way to measure any of this, not that that stops anyone from trying. Performance reviews are a notorious sinkhole for any industry, of course - ever heard of a company where people say that their system works? But it's even harder to do for research employees, because of the dice-rolling feast/famine nature of the work.
Here are a few questions that come up regularly: Who's more valuable - the person who has the idea, or the person who reduces it to practice? What if several people had the idea at about the same time? What if the person who made the best compound in the project did it more or less by accident? What if they did it just because someone else told them to? What should be rated more highly - producing a long list of inactive compounds, or a short list of really good ones? What about someone who does really fine work on a project that disappears due to unexpected toxicity? What's more worthy of a high rating - producing new compounds, or figuring out a crucial step to make enough of the ones you already have?
And so on, and so on. So, how do you rank people? By the number of compounds they produce? That biases it, at best, toward people who (for whatever reason) ended up with a chemical series that was easier to ring variations on. At worst, it tilts the rankings toward people who deliberately banged out piles of easy-to-make compounds, even though they knew that they were unlikely to be worth anything.
OK, how about ranking everyone by the activity of the compounds they made? Well, that biases it toward people who are lucky, not to get too delicate about it. At best, it can reward someone who made some of their own luck, by sticking with a good idea. But it can also reward someone who tripped over a gold nugget on their way to pick up some more lumps of asphalt.
Ranking people by what everyone else thinks of them? That can bias it toward those with outgoing personalities. People on large projects who get more exposure will tend to come out better, too, as will people whose labs are on the way to the cafeteria.
Research is just plain hard to measure, and doing it on a regular, timed basis just exacerbates the problem. We spend long periods in this business being extremely wrong before suddenly being extremely right - try adjusting for that! As far as I've been able to see, any system you use will need exceptions, corrections, qualifications - just the kind of thing that numerical ranking was designed to avoid.
Category: Business and Markets
November 4, 2002
The sequel to the Prilosec (omeprazole) patent case (see the October 13 post) is an interesting one. The only company to prevail against AstraZeneca's patents on their tablet coating was a small one, Schwarz Pharma. At the time, I said "Now it'll depend on whether Schwarz can actually get the stuff on the market." One problem was that the other two companies, Andrx and Genpharm (part of Germany's Merck KGaA) had won the right to be first on the market with generic omeprazole. (They're appealing the original decision, and the exclusivity lasts for six months after the appeals court ruling, whenever the heck that might be.)
But rather than put all their chips on that outcome, the three companies have now teamed up, in a move too sensible for me to have foreseen. This "come, let us reason together" spirit was surely quite profitable for Schwarz, although I don't think anyone's seen the terms of the deal. Time is most definitely money in this case, which accounts for the speed of the negotiations. On one hand, AstraZeneca's trying to get as many people off of Prilosec and on to Nexium as they can; on the other, there's that six-month clock that'll start ticking. I believe that AZN is ready to get into the generic Prilosec business itself, if need be. Let's see how fast the competition is in getting the pills turned out. . .
Category: Patents and IP
I was happy to see that Instapundit linked to my anti-Rousseau rant the other day. I hope it was therapeutic for everyone. I've received some interesting e-mail in response to it (none, yet, from any dentists.) There was one today from an archaeologist, though, who pointed out that the Indian populations who depended on corn typically ground it by hand between stones. This introduced a generous amount of grit into the resulting meal, which really did a severe job on the consumer's teeth over the years. (Having seen and handled some of the grinding stones, I can attest to their grit-supplementing powers. The ones I remember were worn into sloping bowl shapes in the middle, and all that extra rock powder had to go somewhere. . .)
This wear allowed decay to set in even more easily, and to cause quicker damage to the tooth once it did. The sugar content of the corn just added fuel to the bacterial fire, too. One thing that I hadn't thought about is what all this powdered rock did to the GI tracts of the consumers. It seems like a surefire recipe for intestinal trouble - I mean, fiber's a good thing and all, but no one's suggesting that folks eat handfuls of polishing compound. Does anyone have any information (or informed speculation?)
Category: General Scientific News
Merck has won the first round in its legal fight to protect their Fosamax (alendronate) patents (see my September 3 post.) On Monday, a U.S. District court found for Merck in the lawsuit filed by Teva. I haven't seen the decision, so I'm not sure if the ruling directly addressed Merck's method-of-treatment claims for the entire class of compounds. I assume that it did, though.
There are more cases pending in other jurisdictions, though - some of these involve Merck's claims to Fosamax protection (in some dosage forms) out to 2018. Teva's already said that they'll appeal this week's decision, so it could be a while before anyone figures out what's going on. Time is on Merck's side, of course, but at the same time, the long time horizon of the patents that they're defending makes them a more worthwhile target to break.
Category: Patents and IP
There seems to be something odd going on with Iressa, AstraZeneca's great oncology hope. In Japan (the only place where it's on the market,) there's been an unusually high incidence of interstitial pneumonia among its patients. The FDA has scheduled a December meeting, almost certainly to talk about this situation and how it affects the US approval process.
It's not obvious, on the face of it, how a kinase inhibitor would lead to increased risk of pneumonia (and increased severity once you get it, apparently.) My first thought was that this had to be some non-mechanism based tox effect, something to do with the compound but not its mode of action. But on further thinking (and further speculating with colleagues down the hall,) I'm not so sure. Since Iressa is involved in inhibiting the signaling of epidermal growth factor, it's conceivable that it could alter the surface characteristics of pulmonary tissue. Perhaps a tissue change makes the bacteria adhere better, or hinders the immune response?
This is, as I mentioned, sheer speculation. I hope it's wrong. But if it's on the right track, then that's further bad news for what (a few months ago) looked like a potentially huge drug. And it would also be a major concern to all the other drug companies involved in epidermal growth factor receptor signaling (and there are plenty.) You can bet that everyone generating clinical data in the area is frantically digging through their records, looking for pneumonia.
Category: Cancer
November 3, 2002
Schering-Plough and Merck have won FDA approval for Zetia, their cholesterol absorption inhibitor that I've spoken about from time to time. That's a big step for them, although approval was pretty much assumed. The drug won't reach its real potential, though, until they can get their combination Zocor/Zetia formulation approved, which is the next big push. (I can highly recommend this article from Forbes, which presents a very accurate portrait of how the drug was developed.)
The two drugs have complementary mechanisms. Zocor is a statin, an inhibitor of an enzyme with the melodious name of hydroxymethylglutaryl coenzyme A reductase. It's responsible for a key step in the cascade that synthesizes cholesterol from scratch. Merck was the first company to get one of these inhibitors on the market, and they've done extremely well with them over the years, although their current compound, Zocor, has slipped behind Pfizer's juggernaut Lipitor.
There are a number of statins out there, but one (Bayer's Baycol) was recently pulled from the market. That highlights some of the small (but real) differences between the compounds: Bayer's ran into the same toxicity problem that has affected the other statins (a rare inflammation of muscle tissue,) but seemingly showed more of it. To the best of my knowledge, no one knows what influences patient susceptibility to this effect, and it's something that hangs a bit over the entire field. Sounds like a good candidate for toxicogenomics, although it's hard to know where to start looking.
Zetia, on the other hand, has nothing to do with HMG-CoA reductase - rather, it inhibits the other source of cholesterol, absorption from the diet. Combine the two, and there really shouldn't be much excess cholesterol left. (I've always wondered what would happen if you really loaded up on both, though, because cholesterol, despite its bad reputation, is essential. Would some endogenous synthetic pathway that we don't even know about suddenly kick in? What would cholesterol deficiency look like, anyway?)
By itself, Zetia doesn't do any better job of lowering cholesterol than a statin does. But in combination with one, you can take the statin at lower doses than you would need as monotherapy. This would presumably lead to a lower incidence of side effects, so it could be that events have conspired to bring the drug in at just the right time.
One wild card is that HMG-CoA reductase inhibitors may be good for more than lowering cholesterol - for example, there may be a protective effect for Alzheimer's. There's been a cholesterol handling/Alzheimer's link known for some time (a long and knotty subject that I'll take up in some future post.) So here's a question - are the possible side benefits of the statins due to lowered cholesterol only? Or are they due to something else that comes out of inhibiting HMG-CoA reductase? Or is it door number three - some other target entirely that statin-like structures also hit?
There's also the question of how much good lowering cholesterol really does. That's not something that's going to affect Zetia (or Lipitor, or anything else) for a long while. But the dietary fat controversy that Gary Taubes and others have been stirring up makes a person wonder how much good all this cholesterocentricism (new word! you saw it here first!) really does. That's a topic for another day, too!
Category: Cardiovascular Disease