About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: email@example.com
March 31, 2005
One of my correspondents wrote to ask "What makes carbon so special?" That is, how come all the life we know about is based on it?
There are several qualities that we organic chemists (and living beings) admire about carbon. But before counting the ways, let me start by saying that I'm talking about "life as we know it." As the Steven Benner review article I spoke about a few weeks ago makes clear, you can imagine chemical domains at other temperatures and pressures that could support life of another kind.
But for the only life we've found so far, the Earthly kind, the temperature and pressure space is roughly bounded by the territory of liquid water. Higher pressures will let it stay liquid up to higher temperatures, and we have organisms that will ride right along with them. Likewise, high ionic strength will let you keep a liquid matrix down to much lower temperatures, and we have that covered here on Earth, too.
Inside this range, carbon has a lot of advantages. It forms very stable bonds to itself, first of all. Forming and breaking them (under controlled conditions!) is one of the major challenges of organic synthesis. Carbon atoms can be strung out to give you virtually any size molecule you want; there seems to be no upper limit and there's no reason to expect one. This is important, because a likely requirement for any kind of chemical-based life is large molecules with structural diversity. Life's bound to be complex, and carbon compounds give you all the complexity you can handle - straight and branched chains, rings, whatever you want.
And those bonds come in more than one flavor. While carbon-carbon single bonds form a 3-D tetrahedral lattice (found in its pure form in diamond), double bonded carbons can all flatten out into the same plane. The best natural example is graphite, made up of flat sheets of tiled carbon rings, full of alternating double and single bonds. The sliding motion of those sheets over each other gives pencil lead its properties. And there are triple-bonded carbons, too, which end up in a straight line. Carbon gives you a wonderful 1D / 2D / 3D building set.
There's another key thing about the element. More structural (and reactive) diversity comes from all the ways that carbon can form bonds with other elements. Oxygen, sulfur, nitrogen, phosphorus and many other elements readily form carbon derivatives under Earthly conditions, and these give you the crazy variety of organic chemistry. We've got solids, liquids, and gases, acids and bases of all strengths, nonpolar compounds and polar ones fitted with all kinds of electron-rich and electron-poor zones, and reactivity all the way from rock-solid to burst-into-flames.
I think that it's much more likely that we'll find life that uses different carbon-based compounds than that we'll find life based on siloxanes or some other framework. Organic chemistry is too useful to avoid. Now, organic chemists are another matter entirely. . .
Category: General Scientific News
March 30, 2005
(This is an update of a three-year-old posting - it seemed like a good time to bring it back.)
Since I was speaking about NMR, it's a good thing for chemists to remember that this tool wasn't always there for us. For those of you not in the field, I can say without exaggeration that we'd have to close up shop without it. It's so valuable that it's crowded out older, perfectly reasonable techniques like infrared spectroscopy.
Fellow chemists, raise your hands: Who's taken an IR spectrum in the last six months? OK, you folks who are characterizing compounds for your dissertations can put yours down. Anyone from Switzerland who's writing a paper for Helvetica Chimica Acta (where every new compound is characterized practically down to how it tastes) can put their hand down, too. Anyone left?
I didn't think so. It's a lost art. I haven't taken one since the late 1980s, myself. It's true that you can see all sorts of structural information in an IR, but why would you bother? NMR will tell you the same thing and plenty more at the same time, things you could stare at an infrared spectrum until your eyes cross and never be able to determine.
For an even more lost art, consider ultraviolet/visible spectroscopy. Go back to the 1940s and 50s and the journals are full of UV/Vis spectra, reproduced in all their near-featureless glory. I took a few of these as an undergraduate, and I don't recall ever doing any since. Inorganic chemists can be interested in these wavelengths, but darn few synthetic organic chemists are.
But don't get me wrong - I'm not asking to return to the days when those techniques meant something. Far from it. But every so often, when we're complaining that the NMR machine is taking an extra couple of minutes to automatically run our sample, it's worth taking a minute to think about the alternative.
Category: Life in the Drug Labs
I thought I'd briefly explain one of my "Ten Questions" from the other day. The old-fashioned qualitative organic tests that I mentioned in #4 are things that were used in the 1960s and before to identify classes of compounds. Various brews can give you color indicators for the presence of double bonds, methyl ketones, aldehydes and the like. Some of them are quite dramatic - Tollens reagent, for example, suddenly deposits a silver mirror layer (scroll down on that link to see it) on the inside of the flask when it goes right.
But no one uses this stuff any more. No one at all, at least not if they can help it. Modern methods like NMR and routine HPLC/mass spectrometry have completely destroyed the usefulness of the old chemical tests, because you can now find out far more about your compound with little or no destruction of the sample.
Some undergraduate courses apparently still have these reactions in their curricula, and the only reason I can see is inertia. I've heard rationalizations about using them to teach reaction mechanisms and so on, but you can do that just as easily with reactions that real chemists actually run in the real world. And why wouldn't you? If you're a student that's been asked to run a battery of qualitative organic tests, you should ask for a refund of your tuition. You're being had.
Category: Academia (vs. Industry)
March 28, 2005
More pre-ASCO-meeting oncology trial news, this time from Ligand: their Targretin (bexarotene) failed all its endpoints for lung cancer. Two Phase III trials both came up empty for a survival benefit, sending the stock down a good 25% or so. (As you can imagine from that cliff dive, Targretin is a big part of Ligand's fortunes.)
And it must have been a rough place to work for the last year or so. Last spring, the mood was a lot more festive - see this story from the San Diego media. Let's take a painful trip back in the time machine:
"Three separate studies showed that combining Ligand's Targretin with the widely used Taxol (paclitaxel) prevented or reversed acquired tumor cell resistance to Taxol in non-small cell lung cancer (NSCLC) and advanced human breast cancer cell lines. . .In the Ligand studies of NSCLC and breast cancer cell lines, when Targretin was combined or used concurrently with the chemotherapy, it resulted in superior reduction of tumor growth than did chemotherapy alone. . .Earlier this month, the biotechnology firm reported its first quarterly profit ever. Ligand shares (Nasdaq: LGND) gained $1.40, or 7.3 percent, to close at $20.56 today after notching a 52-week high of $20.85 during intraday trading. The firm has a market cap of $1.51 billion. Trading was heavy, nearly triple the average daily volume of 1.5 million shares. The stock has more than tripled in price in the past year, after hitting a 52-week low of $6.20 on March 31, 2003."
Contrast that to the company's home page today, whose downer headlines are "Targretin Fails to Meet Primary or Secondary Endpoints in Pivotal Trials" and "Ligand Announces Delay in Filing of 10-K". And the stock has round-tripped back to the six-dollar price. How did this happen?
Well, Targretin is an interesting and risky drug, in an interesting and risky therapeutic area. It's an activator of the RXR nuclear receptor, which is at a huge multilane intersection (PDF file) of gene expression pathways. I've worked in this area myself, and it's an exciting mess. There are surely thousands of genes whose expression levels can be sent up or down by changes in RXR, and it's a safe bet that we don't know what many of them are. While we're at it, it's a safe bet that we don't understand a lot of crucial things even about the ones we've heard of. It's a real monkeywrench of a drug, which puts it in the same position as many other oncology therapies.
The drug was approved several years ago as a second-line therapy for small indications like cutaneous T-cell lymphoma, and since then Ligand has been trying to break it through into the larger markets like lung cancer. They still have a number of other combination trials going on, as they should, but these results have to hurt. In today's press release, you find only the echoes of last spring, phrased in the saddest verb tense there is:
"The initial daily dose of Targretin in both trials was similar to that used in prior phase II studies in which a positive trend in survival had been observed. . ."
Category: Cancer
March 27, 2005
1. Did the American Chemical Society realize, when it started the journal Organic Letters, that all it would manage to do is turn the competing Tetrahedron Letters into a European journal and force libraries to subscribe to both of them?
2. Can there be a more worrisome lab nickname than the one given to one of my wife's former co-workers, a radioisotope user known as "Mister Chernobyl"?
3. What percent of chemists are the same type as me: willing and able to start new reactions all day, just so long as I don't have to work them up and purify the products?
4. Are there still undergraduate organic chemistry courses that do a lot of the old qualitative tests in their lab sections - Tollens reagent and all that? (I deeply hope that no one answers "yes" to this one.)
5. When you're scheduled to speak with a visitor from academia, is there any more surefire conversation starter than "How's the funding going?"
6. Have I really been looking at the same gel in all those molecular biology lab presentations over the years, or does it just seem that way?
7. Will I ever have cause to use a spinning-band distillation column again in my lifetime? (Bonus question: at what point will I be the only person in my workplace to have ever seen one of the things?)
8. Did anyone ever actually use those crazy bar-code things, condensed versions of the tables of contents, that the Pergamon journals used to have on their first pages? (This is equivalent to wondering if anyone's ever made the "Mock Apple Pie" recipe on the Ritz cracker box.)
9. Is there any reagent more pyromaniacal than dimethyl zinc? (The stuff makes most of the other flame-spouting reagents look like chicken broth.)
10. Could there be anything less likely to attract members of the opposite sex than reading the Journal of Organic Chemistry while you're at the laundromat? (Yes, I did this in graduate school, and I can attest to its powers.)
Category: Life in the Drug Labs
March 23, 2005
Charly Travers is a very clear-eyed observer of the pharmaceutical investment world, and after mentioning Bayer and Onyx yesterday, I'd like to recommend his take on the situation:
"So while sorafenib looks like a promising therapy for kidney cancer, I'm not sure there is much upside remaining for investors looking to buy into a good drug program on the basis of this market alone. With approximately 35,000 new cases per year in the United States, this is a small market. Because profits are split with Bayer, those profits retained by Onyx may not be sufficient to increase the value of the company to a degree that warrants the risk inherent in biotechs at this stage.
The wild cards here are that sorafenib is also in late-stage trials for liver cancer and melanoma. If those trials come out positive, then Onyx is very likely going to be a long-term winner. Cancer drugs often don't work in all cancer types, so despite the encouraging results in kidney cancer, success in these other indications is far from a sure thing."
All true, although there is that factor we were speaking of yesterday, of off-label use. But is that something you want to bet the farm on? It's very important, if you're an investor, to decouple a company (and its drug portfolio) from its stock. They don't always move in synch. I've never been a fan of investing in something just because it's going up, and you have to decide if a company's shares already have most of the good news priced in.
As Travers goes on to point out, you're better off owning smaller companies with earlier-stage drugs that haven't gone up yet. Of course, you really need to own several of those guys, because it's for sure that a majority of them aren't going to work. But that's the kind of portfolio we own inside each drug company, with our own development candidates. Welcome to the club!
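That basket logic is easy to make concrete. Here's a toy sketch of the math (the 10% per-drug success rate and the ten-company basket are numbers I've made up for illustration, not anything from Travers):

```python
def p_at_least_one_success(p_success: float, n_companies: int) -> float:
    """Probability that at least one of n independent bets pays off."""
    return 1.0 - (1.0 - p_success) ** n_companies

# Invented, purely illustrative numbers: each early-stage drug has a
# 10% chance of making it, and the bets are treated as independent.
single = p_at_least_one_success(0.10, 1)    # one company
basket = p_at_least_one_success(0.10, 10)   # ten companies

print(f"one company: {single:.0%}, basket of ten: {basket:.0%}")
```

Independence between the bets is, of course, a simplification (a sector-wide funding drought sinks them all at once), but the basic point stands: a single lottery ticket is a lottery ticket, while a fistful of them starts to look like a portfolio.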
Category: Business and Markets | Cancer
March 22, 2005
Speaking of cancer trials, I mentioned the other day how they tend to be smaller than those for many other diseases. But that doesn't mean that they're always easy to run, as a search for "clinical trial design oncology" will show. Note the number of people offering to help you out, via seminars, consulting visits, books, and entire journals devoted to the topic.
The problems start early. Patient recruitment is a big problem for many of the less common types of cancer, and it's getting to be a problem for the better-known ones, too. If you look at all the therapies that are being aimed at breast cancer, for example, and run the numbers, it looks like there aren't enough breast cancer patients in the US to fill out all the trials that would be needed. Cost is, of course, a big reason why a lot of clinical trial work is being done overseas these days, but access to a new pool of patients is a factor, too.
Which brings up another complication - do you want patients who've tried other drugs? That depends on where you're targeting your therapy. If you hope for it to be a first-line drug, you probably want patients that are newly diagnosed. There's a steady supply of those, but not everyone who's newly diagnosed is going to be willing to participate in a clinical trial, not when there might be more proven treatments available. The worst case is when you're looking for drug-naïve patients with advanced types of cancer. That's feasible (in theory) for some of the ones that creep up on you (like colorectal cancer), but next to impossible for some others.
But if your drug is going to be a second-line therapy, then you should go ahead and see how it performs in patients who've already been through the first-line stuff. There is, unfortunately, a steady supply of those people, too, and they're often more willing to take a chance.
Your clinical trial design will also be influenced by the kind of cancer you're hoping to treat. If you're looking at a very specific type or two, as is the case with Novartis's Gleevec, you may have to cast the net pretty widely to round up enough people. (We'll ignore the fact, for now, that Gleevec sells a billion dollars a year, which means that a lot of people are getting it when it has very little chance of doing anything for them.) If you have a new mechanism that hits all kinds of cancer cells, then you may want to dip into all sorts of different patient populations to see if one of them looks like a good place to take your stand in later Phase II and III trials. The danger in doing that is that your patients may be such a mixed bag that you can't get good statistics on anything.
Ah, statistics. You'll have noticed that I'm referring to cancer patients as if they were so many terms in an equation, which from the standpoint of drug development is exactly what they are. That comes across, to those outside the medical and scientific areas, as a pretty cold way to talk. Guilty as charged - but keep this in mind: people who work for drug companies get cancer, too, as do our friends and relatives. And we're just as upset as anyone else when that happens. But without the icy numbers, and lots of them, we're not going to be able to do anything to help.
Category: Cancer | Clinical Trials
March 21, 2005
Get ready for the clinical trial season for anticancer therapies. The American Society for Clinical Oncology meeting is in May, and everyone who's going to present new trial data is starting to work on their PowerPoint slides. ASCO (famously) embargoes the presentations themselves, but the run-up to the meeting is out there in plain sight.
For example, Bayer and Onyx will be talking about their drug, sorafenib. Promising interim data was released today - well, a summary was, anyway - which means that this will be an eagerly awaited ASCO talk. Meanwhile, Schering AG and Novartis were planning to make a big splash with one of their therapies, PTK-787, but the bad news came out on that one almost simultaneously. They'll still be presenting, but the mood will be quite a bit more somber.
All these trials have surrogate endpoints and real endpoints. The real ones are the ones that matter, but you have to wait until the end of the trial for them. Surrogates are designed to be read out more quickly. In the case of these drugs, the surrogate marker is whether the drug seems able to delay progression of the cancer. As time goes on, you'll get actual survival rates, which are the real story, but investors are desperate for news and will take whatever they can get. For their part, the companies will use good surrogate data as a signal to go ahead and start putting the FDA package together, which saves time. Competition in this area is such that time equals money even more than usual.
The problem with surrogate markers is that you can get fooled. It's possible for a drug to look like it's delaying cancer progression, but in the end to not have much effect on survival rates. (Just to really drive everyone nuts, the opposite situation is possible, too.) That's where AstraZeneca's Iressa ran into trouble. As the long-term data starting coming in, it became clear that the drug really didn't help with survival. And if your drug doesn't do that, the patients and insurance companies would like to know, quite correctly, just why they should pay for it.
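You can see how that kind of fooling happens with a quick Monte Carlo sketch (all the numbers here are invented): give the drug arm a genuine delay in progression, but let the disease make up the lost ground afterwards, so overall survival barely moves.

```python
import random

random.seed(42)
N = 20_000  # simulated patients per arm

def arm(mean_progression: float, mean_post_progression: float):
    """Mean progression-free and overall survival (months) for one arm,
    with both phases drawn from exponential distributions."""
    pfs = [random.expovariate(1 / mean_progression) for _ in range(N)]
    post = [random.expovariate(1 / mean_post_progression) for _ in range(N)]
    overall = [a + b for a, b in zip(pfs, post)]
    return sum(pfs) / N, sum(overall) / N

# Invented numbers: the drug stretches time-to-progression from 6 to 9
# months, but the disease that eventually breaks through is nastier
# (post-progression survival drops from 12 to 9 months), so overall
# survival is ~18 months in both arms.
pfs_control, os_control = arm(6, 12)
pfs_drug, os_drug = arm(9, 9)

print(f"PFS: {pfs_control:.1f} vs {pfs_drug:.1f} months")
print(f"OS:  {os_control:.1f} vs {os_drug:.1f} months")
```

In a toy world like this, an interim read on progression looks like a clear win while the number that patients and payers actually care about hasn't budged, which is the Iressa trap in miniature.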
Category: Cancer
March 20, 2005
I'm still too stuffed from an Iranian New Year (Nowruz) feast to do much blogging tonight, or much of anything else. (And eid-e shoma mobarakh to any of my Farsi-speaking readers, while we're on the subject.)
I hope to be moving around well enough tomorrow to finish up a reaction I did on Friday, an old-fashioned zinc metal reduction. Doing that kind of chemistry really makes me feel a connection to the 19th century. Those folks would have understood instantly what I was up to last week, stirring zinc powder in a flask with some dilute hydrochloric acid and washing it off.
That's how you make activated zinc, and that's how people have been doing it for well over a hundred years. On long storage, it (like all the other reasonably active metals) forms an inert coating of oxide on its surface. That's all very well for something like stainless steel - in fact, that's exactly how it remains stainless - but it really interferes with other metal-surface reactions. A quick etching with acid freshens the stuff right up.
Metals that are more active than zinc (sodium, say) form oxide coatings even faster - while you watch, basically. Those metals tend to come in chunks, since they have the consistency of cold butter. Zinc you can buy as a fine powder, but sodium won't put up with being milled (although you can buy it as sort of a fine sand, if you so desire.) To get a fresh surface for metals like that, you can just cut off a fresh hunk and get it in the reaction quickly before it clouds over again. (Don't do it like this, though!)
My reaction was done in straight acetic acid, another nostril-flaring old favorite. When I worked it up partway on Friday, it appeared to have done what I wanted it to, which goes to show how reactions like this become classics in the first place.
Category: Life in the Drug Labs
March 17, 2005
I wanted to take a moment to congratulate Amylin on the FDA approval of their diabetes therapy Symlin. I've known a couple of people out there over the years. Actually, a lot of people have had a chance to know someone at Amylin, because the company has had more than one near-death experience, during which they've had to fire big swaths of their staff. A couple of years later, they'd start hiring again, until the next safe landed on top of them. I'd like to know just how many employees have been with them for more than ten years - you could probably count them on one hand.
As this look back at the Motley Fool mentions, the drug has been through six Phase III trials over the years. That's a level of perseverance that borders on the pathological. You could only get away with this with a biological product, where the barrier to going generic is so high. A small molecule would have about ten minutes left on its patent by now, and its value would be much lower (which gets back to that Forbes column about patent extensions, actually.)
Here's a multiyear chart of Amylin's stock, so you can vicariously experience the thrills yourself.
Category: Diabetes and Obesity
March 16, 2005
Time for another one of Lowe's Laws of the Lab. This time, it's some more advice for synthetic organic chemists, but it applies to many other situations as well: "Think Twice Before You Get Rid of the Old Route, or You Will Spend Months Saving Time."
It's an insidious trap, that one, because you stay busy the whole time (and staying busy in the lab is what it's all about, right?) I've spoken before about how almost any problem can be interesting, which means that you might as well work on ones that are both interesting and important. The same principle applies here: almost anything will fill your days doing research. You can come up with enough work to keep you going 20 hours a day - alternate reactions, different conditions, back-up plans. But to what avail? It can be hard to slow down enough to ask yourself.
I've fallen for this one more than once, of course. I'll hit some roadblock in the science, and think about how to get around it. I think up a plan, and it sounds good, so I'll start in on it - and run into another hitch just trying to get that to work. But there's a way around that, too, of course - just try this other reaction over here, and if that doesn't work, well, there's a way around that one, too, and. . .
Notice that at some point in there, things go off the rails. If you don't watch out, you'll end up working on the third alternate route to the reaction that could make the second way around the problem in your possible backup route work. Busy? You bet, more work than you can handle! But productive? Well. . .depends on how you define productivity. If, like a fool, you measure it only by notebook pages consumed and flasks dirtied, everything looks fine.
I recall being impressed at one point in my career by a guy down the hall from me, who was working like a man possessed. Every time I went past his lab, he was in there cranking away, looking like a multi-armed Hindu deity with each hand holding an Erlenmeyer flask. Closer inspection revealed the truth. It turned out that he was working like this because he was doing almost everything in the longest, most wasteful way possible. No wonder it looked like so much effort. Cutting your lawn with a bread knife is a lot of work, too, and will fill your day up like nothing you've ever seen.
This law of mine comes down to the old advice of "Measure Twice, Cut Once." It's a hard rule to remember, when you've got a box of saws and the wood is just sitting there, daring you to have at it. But it's worth looking through the clouds of sawdust to see if there's any real carpentry going on.
Category: Lowe's Laws of the Lab
March 15, 2005
Well, I'm back from my undisclosed location, ready to see what's been going on at work the past couple of days. I passed the site on the highway on the way home this evening, so I know that it's at least still there. The side of the building containing my lab was still intact, which is always a good sign.
What passes for normal blogging around here will resume shortly. For now, I wanted to point out this article from Matthew Herper at Forbes, who asks the inflammatory question: "Are Drug Patents Too Short?" His point, a valid one, is that clinical trials have tended to get longer, larger, and more expensive, while patent lifetimes aren't changing. And once a drug is off patent, no company is likely to spend the money to study it with much intensity.
The thing is, a patent extension for drug companies has, as Herper well knows, zero chance of being enacted. There are arguments for and against the idea, but we wouldn't even get that far. The inflatable bats and cream pies would come out immediately, and we'd set about dealing with this issue in the time-honored fashion. . .
The other solution is to make the clinical trials shorter and less painful, which is what the whole biomarker idea is aiming to do. So far, though, there's not much to point at in that field, but these are early days.
Category: Patents and IP
March 12, 2005
I wanted to point out that my email address has changed. After fifteen years (!), my old AOL address is no more. DSL connectivity has finally crept into our corner of the world, so now send your comments, complaints, and (not least) offers of gainful freelance goodness to me at: firstname.lastname@example.org.
The fine print: No immediate family members of deceased strongmen, please. No offers to elongate any part of my body will be considered. And I already have all the cheap insurance and discount toner cartridges that one man can handle.
Oh, and the next post here will be Tuesday evening/Wednesday morning. For plenty of science-themed reading material (more than a couple of days' worth, that's for sure), try the Tangled Bank.
Also, William Tozier has an interesting pile of science and art-related stuff here, and the latter is well represented, as usual, over at Two Blowhards. There, I've done my part for productivity.
Category: Blog Housekeeping
March 10, 2005
This weekend there was an interesting article in the International Herald Tribune by James Kanter and Carter Dougherty, on pharmaceutical research in Europe versus the US. I've written on this topic myself, pointing out that most European companies, when they're expanding at all, are doing so in the US rather than in their own countries.
Having pharmaceutical sales in the US is essential, since we still have the least-regulated pricing of any major market. This, for better or worse, is where you come to make up for the price controls in Europe, and the article points out that the European governments like to talk about being world leaders in innovation while simultaneously clamping down on the rewards for it. But why would you need to do the research here as well? Kanter and Dougherty:
"Although the knowledge created by pharmaceutical research eventually spreads across borders, companies have learned that it pays to start in America. Setting up there gives companies with new products a substantial home market, without having to recruit a multilingual, European sales team, or to navigate the patchwork quilt of national rules on marketing and pricing in Europe. Being close to important U.S. medical professionals and other opinion makers from an early stage also helps smooth clinical trials. The United States produces, by volume, far more new drugs than Europe because U.S. research spending exceeds that in Europe by roughly 50 percent, said (Charles) Beever, of Booz Allen Hamilton."
And if you're starting up a small pharma or biotech company, it's easier to do it over here:
"In Europe, where capital is harder to raise and investors less willing to go out on a limb, the story can be getting financing at all. Investors sank $114 billion into U.S.-based companies that pioneered novel drugs over the past decade. Companies clustered around European biotechnology hubs, like Cambridge, England, and Uppsala, Sweden, garnered barely a quarter of that amount over the same period, although some industry leaders have raised substantial funds. The lack of a single European stock exchange and the persistence of a risk-averse investment culture have played a critical role in America's ability to steal a march on Europe, said Sam Fazeli, biotechnology industry analyst at Nomura International in London.
"Unfortunately in Europe, we are only just coming to terms with the fact that drug failures are part and parcel of the life of a biotech company," he said."
Hey, if we want to get technical about it, failure itself is part and parcel of doing any kind of research. And that brings up another reason I think that R&D tends to thrive more over here: by the standards of many Europeans, Americans are not completely sane. I've worked in Europe and with many colleagues from France, Germany, and Italy, and I really believe this. (Some of them have told me as much after they got to know me well.)
In the US, we tend to give chances to wilder ideas than in many other countries, and there's less stigma attached to their failure. That's a little-appreciated feature of scientific progress: it depends on the willingness to look like an idiot. Keep in mind, most of the paradigm-breaking new schemes that people dream up just don't work. You have to be ready to risk your time, your effort, your money and your reputation to get anything big to happen, and that (for many reasons) is just plain easier to do here.
Category: Who Discovers and Why
March 9, 2005
Just how do antidepressant drugs work? The answer you get (and the confidence with which it's delivered) will vary according to the experience of the person giving it: the more experienced and knowledgeable they are, the more tentative and uncertain the answer. I worked on central nervous system drugs for eight years, and I can confidently state that we know just slightly more than jack.
Well, the more, um, standard answer is that antidepressants act by changing the concentrations of key neurotransmitters like serotonin or noradrenaline. That's certainly what they're designed to do, by shutting off metabolic and clearance pathways and allowing serotonin, say, to build up. Underlying all this is a larger hypothesis, one so large that we usually don't even think about it: that depression is indeed a disorder of those neurotransmitters, a chemical imbalance that could in theory be righted if we just studied the relevant pathways hard enough.
There's been a feeling, though, that we've been a bit too reductionist about this. This view is well stated in a new article in Nature Reviews Neuroscience (6, 241) by Eero Castren. It's a proposal that will appeal to software engineers in particular:
"This new hypothesis, the network hypothesis, proposes that problems in activity-dependent neuronal communication might underlie depression, and that antidepressants might work by improving information processing in the affected neural networks. A key aspect of the network view is the recognition that the principal role of the nervous system is not to handle chemicals but to store and process information. . .Although chemical neurotransmitters are crucial for the transfer of information between neurons, information in the brain is not stored in a chemical form but is thought to be processed by the complex interactions of neurons in neural networks. These networks develop through interactions with the environment, and the neuronal structure of, and neurotransmission in these networks are constantly being refined. . ."
That makes the difference between the two approaches sound bigger than it really is, as Castren goes on to point out:
"It should be noted that the chemical and network hypotheses are not mutually exclusive, but are complementary. As the synthesis and release of several important signaling molecules are regulated by neuronal activity, changes in the activity of neural networks produce changes in the concentration of these signaling molecules. Therefore, although the initial effects of antidepressants are obviously chemical. . .the ensuing adaptive changes in the concentrations of those signaling molecules are tightly linked to the structure of the neural network, and might be a consequence of the altered information processing rather than its cause. According to this view, antidepressants initiate a 'self-repair' process, whereby plasticity in neural networks and chemical neurotransmission indivisibly cooperate and gradually bring about mood elevation."
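Castren's "the brain stores information in network structure, not in chemicals" framing is, of course, the same idea behind artificial neural networks, which may be why he expects it to appeal to software engineers. As a purely illustrative toy (a sketch of Hebbian plasticity, not a model of any real neural circuit, and certainly not anything from the paper), here's the bare-bones version in Python: repeated co-activation strengthens a connection, so the "memory" ends up living in the weights rather than in any single chemical signal.

```python
# Toy Hebbian plasticity sketch: purely illustrative, not a brain model.
# Two "neurons" connected by a weight; co-activation strengthens the link.

def hebbian_update(weight, pre, post, rate=0.1):
    """Strengthen the connection when pre- and post-synaptic units co-fire."""
    return weight + rate * pre * post

weight = 0.0
# Repeated paired activity (e.g., during learning) reshapes the connection...
for _ in range(10):
    weight = hebbian_update(weight, pre=1.0, post=1.0)

# ...while uncorrelated activity leaves it alone.
unchanged = hebbian_update(0.5, pre=1.0, post=0.0)

print(round(weight, 2), unchanged)
```

The chemical hypothesis, in this cartoon, lives in the `pre` and `post` signals; the network hypothesis says the interesting part is what happens to `weight` afterward.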
Rodent studies have shown that antidepressants stimulate the growth of new neurons, and that this correlates with their mood-elevating effects. Brain-derived neurotrophic factor (BDNF), which has long been known as a key signal for neuronal sprouting, might be the player here, as several lines of evidence have begun to implicate it in changes in mood. All this, if true, points to a combination of drug and behavior therapy as the best combination to take advantage of the brain network remodeling, and I think that this is considered the best clinical practice as well.
The author is honest about some of the evidence against the hypothesis, such as the several factors that can bring on rapid (albeit temporary) mood changes in depressed patients. Rewiring a neural network isn't going to be rapid. But these observations don't have to invalidate the hypothesis (although they could), and there are others that support it. For example, antidepressant drugs have a very slow onset of action, an effect that's been noted for decades, and many people have suspected that there must be some sort of slow reorganization going on.
So where does that leave drug discovery folks like me? We're used to going after defined targets, and "Loosening up the synapses" doesn't sound like one. Here's Castren again, and I hope that he's right:
"The hypothesis that mood represents a functional state of neural networks might sound incompatible with the efforts of rational drug development. However, the data reviewed above indicate that the antidepressant drugs that have been used successfully for several decades might function by initiating such plastic processes, apparently indirectly, by influencing monoamine metabolism. It is possible that a similar process could also be initiated through other pharmacological mechanisms, which might become the targets of new antidepressants. . ."
Category: The Central Nervous System
March 8, 2005
Now, today's reaction in my hood looked like some chemistry, and no mistake. Bright yellow-green, and fizzing! Some of my readers may read that description and say "Hmmm. . .ethyl diazoacetate insertion reaction." Right you are. The color is from the diazo group - a lot of small diazo compounds have it, or so I hear. They tend to go ka-boom if handled roughly, so it's not like I have a big data set. Diazomethane and its trimethylsilyl variation are the only other ones I've seen, and I won't be broken up if the list doesn't get longer.
The fizzing is the diazo group breaking down, with nitrogen gas being given off as the compound reacts. Having a reaction that gives off a gaseous by-product is a wonderful way to drive the process to completion, since there's no way for the reaction to run in reverse once the gas has departed. And its formation is almost always a big downhill step for the free energy of the reaction, which puts thermodynamics on your side. Of course, that big leap to a lower energy state is something you don't want these things to just make on their own - that's where that ka-boom business comes in.
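To put a rough number on that "big downhill step": releasing a mole of nitrogen gas adds its standard molar entropy (about 191.6 J/mol·K for N2 gas, a textbook value) to the reaction's ΔS, and at room temperature that term alone is worth roughly -57 kJ/mol in ΔG = ΔH - TΔS. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope: how much does releasing N2 gas help the free energy?
# Uses the standard molar entropy of N2(g); values are approximate.

S_N2 = 191.6      # J/(mol*K), standard molar entropy of nitrogen gas
T = 298.15        # K, room temperature

# The entropy term's contribution to Delta G = Delta H - T*Delta S,
# counting only the newly formed mole of gas:
dG_entropy = -T * S_N2 / 1000.0   # kJ/mol

print(f"T*dS contribution: {dG_entropy:.1f} kJ/mol")
```

That's before you even count the enthalpy of forming nitrogen's famously strong triple bond, which is why diazo compounds are so eager to get it over with.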
It's nice to have an indicator of success like those nitrogen bubbles; you know that you're probably not wasting your time. With diazomethane itself, you can watch its color disappear and know that something's happening. Another famous indicator is the smell of a Swern oxidation, which turns DMSO into stinky dimethyl sulfide. I agree with a friend from grad school who said that he'd come to actually enjoy the aroma, since it was the smell of success.
Most reactions, though, don't let you know what they're up to. They just look a little darker (well, often a lot darker) at the end than they did when they started. You have to interrogate them yourself, taking out a drop for thin-layer chromatography or some spectral technique. No problem with doing that, unless your reaction is being run at below-zero temperatures, in which case the sample you're removing has surely warmed up by the time you've analyzed it. It's a common beginner's mistake to check a cold reaction that way and conclude (prematurely) that it's finished. Live and learn.
Category: Life in the Drug Labs
March 7, 2005
Blogging time is sparse tonight, since I'm (finally) starting off Tax Season here at Rapacious Pharma Manor. But I wanted to point people to a longish post by William Tozier at Notional Slurry, on how he became a scientist - and on what sort of scientist he found himself becoming, and what to do about it. His section, midway through, of snapshots of his graduate school days made me shudder with recognition:
But instead I learned over the next few years that I'm merely a bad molecular biologist, botanical or otherwise. In practice, at least. In theory, I rocked. I've been thinking about it a lot lately (for reasons I hope will become clear nearer the end of this vast wordy spume), but for brevity let me portray this period as a series of portentous snapshots:
* In a lab notebook I stumbled across the other day is a photograph of the results from my thirteenth (13th) extraction of plastid DNA from Hosta sieboldiana. I can't recall right now whether the gel is blank, or a flame-shaped smear of random molecular fragments of size ranging from eensy to weensy. Doesn't matter. Neither result is good, though both appear with about equal frequency in each of the 13 attempts I made over 8 months. I think on this one we decided "Restriction enzyme buffers too old; need to make new ones. Use Frank's as control?" . . .
Category: Who Discovers and Why
March 6, 2005
Just how many chemicals are there? As usually asked, the question draws estimates of anywhere from ten to the eighteenth (pretty big, all right) all the way up to the gibbering, flee-in-terror order of ten to the two hundredth. A range like that makes it clear that no one knows what they're talking about, so the question needs to be cut down to size. "How many chemicals are there below a certain molecular weight?" is a good start, and once you set that, you might want to stipulate the list of elements you'll include and whether or not the compounds are stable enough to be isolated.
A group from the University of Berne has just published a paper in Angewandte Chemie (44, 1504 in the English edition) which claims to answer just such a question, namely: "How many reasonably stable compounds are there with up to eleven atoms of either carbon, nitrogen, oxygen, or fluorine?" Should this one come up during your next poker game, you can now answer, in your best Mr. Spock voice, "Approximately 13,892,436." But hold on. Does that number sound low to you? If not, it should - read on.
The Berne group came up with their estimate by computationally assembling graphs which corresponded to all the saturated hydrocarbon backbones up to eleven carbons. Then they systematically replaced all possible carbons with N or O, allowed for double and triple bonds, and substituted all carbons with H or F. So far, so good. These variations generated a low of 4 and a high of 79236 compounds per carbon skeleton.
But they applied a set of mighty strict standards during these operations. Their algorithm rejected heteroatom-heteroatom bonds, except for those found in some aromatic heterocycles, as well as nitro groups, oximes and the like, so no peroxides (and no hydrazines, I suppose, although they're stable.) They also rejected bridgehead double bonds and allenes, and (to my surprise) only allowed triple bonds for nitriles (so no acetylenes.) They also rejected hydrolytically unstable groups - no enamines, no acyclic imines, no acyl halides, no enols and not even any orthoesters.
What this means is that there are plenty of compounds you can order from a catalog that aren't even on the list. Heck, there are compounds that are shipped in tank cars that aren't on the list. Allowing some of these compound classes to gain a foothold would have swelled the ranks a great deal. Moving further past their criteria, you can imagine how out of control things would get if you started calculating in sulfur, phosphorus, and more than one type of halogen atom. I don't know if this team is contemplating that exercise or not; they'll probably have to wait for a fresh crop of grad students before they can even try.
But I've left out a key statistic of theirs, a startling one. Back at that first step, when they graphically assembled those carbon frameworks, it turned out that the huge majority, a full 99.8% of them, had three- and four-membered rings in them. In order not to have a list so skewed toward cyclopropanes and cyclobutanes, they threw all of these out at the very start, leaving them with 1830 basic skeletons as opposed to 843,335 of them. Throwing out the likes of orthoesters and acetylenes, as it turns out, is nothing compared to the massive effect of shedding the small rings.
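You can get a feel for that first enumeration step with a brute-force sketch. The following Python (a toy of my own devising, not the Berne group's algorithm) enumerates all non-isomorphic connected graphs on just five "carbon" atoms with valence four or less, then checks how many contain a 3- or 4-membered ring. Even at this tiny size, the small rings already dominate:

```python
# Toy version of the skeleton-counting step: enumerate the non-isomorphic
# connected graphs on 5 tetravalent atoms, then count 3-/4-membered rings.
# This is a brute-force illustration, not the published algorithm.
from itertools import combinations, permutations

N = 5
nodes = range(N)
all_edges = list(combinations(nodes, 2))

def is_connected(edges):
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for w in adj[v] - seen:
            seen.add(w); stack.append(w)
    return len(seen) == N

def canonical(edges):
    """Canonical form under atom relabeling: minimum over all permutations."""
    best = None
    for p in permutations(nodes):
        relabeled = tuple(sorted(tuple(sorted((p[a], p[b]))) for a, b in edges))
        if best is None or relabeled < best:
            best = relabeled
    return best

def has_small_ring(edges):
    es = set(edges)
    e = lambda a, b: tuple(sorted((a, b))) in es
    # Three-membered rings:
    if any(e(a, b) and e(b, c) and e(a, c) for a, b, c in combinations(nodes, 3)):
        return True
    # Four-membered rings: a-b-c-d-a for some ordering of a 4-atom subset.
    for quad in combinations(nodes, 4):
        for a, b, c, d in permutations(quad):
            if e(a, b) and e(b, c) and e(c, d) and e(d, a):
                return True
    return False

skeletons = set()
for k in range(N - 1, len(all_edges) + 1):
    for edges in combinations(all_edges, k):
        degrees = [sum(v in edge for edge in edges) for v in nodes]
        if max(degrees) <= 4 and is_connected(edges):
            skeletons.add(canonical(edges))

small = sum(has_small_ring(s) for s in skeletons)
print(f"{len(skeletons)} skeletons, {small} with 3- or 4-membered rings")
```

At five atoms, 17 of the 21 possible skeletons carry a small ring - only the three acyclic frameworks and the lone five-membered ring escape. By eleven atoms that imbalance has run away completely, which is where the 99.8% figure comes from.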
In this light, as the authors point out by an excellent astronomical analogy, their list of thirteen million stable compounds is actually surrounded and permeated by a huge unseen amount of "dark matter" - all those 3- and 4-membered rings. Many of them might be too strained to be stable, but many others would be fine. They just haven't been explored because they're too much of a pain to make. This, to me, was the single biggest surprise of the whole effort. I knew that there must be a lot of these compounds, but I never would have thought that their possible forms hugely outnumber all the other small molecules I've ever seen or thought of. What else don't we know?
March 3, 2005
Column VI of the periodic table doesn't start out smelly, but that's probably just because we run on its first element, oxygen. Animal ancestors of ours who felt woozy all the time from the stench of oxygen didn't leave much of a legacy, so we're all pretty positive about it. But when you start moving down into the next rows, everything changes.
Sulfur's next, and its fame as a reeking element is well deserved. Skunks, rotten eggs, burning tires - they all have a delightful sulfurous tang, and we have sulfur compounds in the lab that are worse yet. But most people don't think about the elements to come.
The next heavier element in the series is selenium, which most people will have heard of primarily from its presence in health food stores. It is indeed an essential trace element, although I'd think that if your cuisine includes a reasonable amount of garlic (as it should!) then you're getting all the selenium you need. You don't want to overdo it, because this essential dietary factor is also a pretty efficient poison, which is a useful First Lesson in Toxicology right there. (And no, I don't think it's possible to get selenium poisoning from eating too much garlic; I think many other effects would kick in before you noticed any selenium-related problems.)
Selenium compounds are, if anything, more intrinsically noxious than sulfur ones. Imagine a sort of hyperskunk, scattering its enemies before it and making them carom off trees and dive into ponds. The heavier selenium atoms tend to make the compounds less volatile, though, so you don't always get their full bouquet. The smaller compounds get in their licks, though. One of the simpler selenium-rich compounds, for example, is carbon diselenide, an exact homolog of the carbon dioxide in your breath and in your glass of soda. Instead of a gas, the selenide is an oily liquid with a higher boiling point than water. Most of us organic chemists have never seen it.
Which is just fine. The first report of the compound in the chemical literature came from a German university group in 1936, and it was a memorable debut. A colleague of mine had a copy of this paper in his files, and he treasured a footnote from the experimental section which related how the vapors had unfortunately escaped the laboratory and forced the evacuation of a nearby village. The authors stressed the point that its aroma was like nothing that they'd ever encountered.
The compound made a few appearances over the next couple of decades, but one of the next synthetic papers dates from 1963. (That's Journal of Organic Chemistry 28, 1642, for you curious chemists.) The authors are forthright:
"It has been our experience that redistilled carbon diselenide has an odor very similar to that of carbon disulfide. However, when (it is) mixed with air, extremely repulsive stenches are gradually formed. Many of the reaction residues gave foul odors which were rather persistent (and) it should be noted that some of the volatile selenium compounds produced may be extremely toxic as well as foul."
Something for everyone! At least it lets you know when it's coming. Interestingly, in recent years, the compound has actually made a comeback, with more references to it in the past twenty years than in the fifty before. It's been used to prepare a number of odd compounds that have shown promise as organic semi- and superconductors, and there's actually a commercial source for the disgusting stuff (which may be a first.) I'd like to see what they ship it in.
Category: Things I Won't Work With
March 2, 2005
I spoke yesterday about going through lists of chemical structures, looking for ones that we might want to keep in our screening libraries and, simultaneously, for the ones that we never want to see again. There's a paper from last year in the Journal of Medicinal Chemistry (47, 4891) that's an embarrassing reminder of just how hard it is to do that consistently.
It's from an effort led by Michael Lajiness at what was then Pharmacia/Upjohn (and is now Pfizer, which might account for the lead author now being at Eli Lilly, if you can follow all that.) They had about 22,000 compounds to sort through to see if they wanted to purchase them for the screening files, so they broke them out into 11 lists of 2000 compounds each. Thirteen medicinal chemists volunteered (or were volunteered, unless I miss my guess) to go over these lists. Eight members of the team reviewed two separate lists, and one wild man reviewed three.
The authors of the paper took a look at the list of rejected compounds from each reviewer, correctly (in my view) believing that this list is more significant than the list of what was accepted. After all, an ugly structure that makes it through may well never hit in an assay, and if it does it'll go through many more layers of scrutiny. A structure that's rejected, though, disappears from the company's screening universe forever. False negatives could have bigger consequences than false positives.
So, when more than one chemist went over the same list of 2000 compounds, how similar were their reject lists? Not very! On the average, two medicinal chemists would agree to reject the same compounds only about 23% of the time. (I knew that the overlap wasn't going to be perfect, but that's a lot worse than I was expecting.)
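For the curious, that kind of agreement number can be computed as a simple set overlap. Here's a hedged sketch with made-up compound IDs (the paper may well have used a different exact metric; this is the Jaccard index, intersection over union, which is one standard choice):

```python
# Illustrative only: toy reject lists with invented IDs, not Pharmacia data.
# Agreement here is the Jaccard index: |intersection| / |union| of the
# two reviewers' reject sets.

def reject_overlap(rejects_a, rejects_b):
    a, b = set(rejects_a), set(rejects_b)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

# Two hypothetical reviewers looking at the same list of compound IDs:
chemist_1 = {101, 102, 105, 110, 111, 120}
chemist_2 = {102, 103, 110, 115, 121, 130}

print(f"Agreement: {reject_overlap(chemist_1, chemist_2):.0%}")
```

Two reviewers who each reject six compounds but only share two of them come out at 20% agreement - which is to say, about as well as the real medicinal chemists managed.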
To continue the punishment, the lists had each been, without the knowledge of the reviewers, seeded with the same set of 250 compounds, all of which had been rejected by a previous review. The chemist-to-chemist rejection overlap in this smaller set of potential losers was still only 28%. Not as much of an improvement as you'd hope for. . .
And now the whipped topping and chocolate sprinkles: recall that many of the reviewers did more than one list. That means that they got to see that same group of 250 compounds more than once, in the context of different lists. How did the same people do when they saw the exact same compounds? They only rejected them about 50% of the time, it pains me to report.
It looks as if potential drug leads follow the same rule as Tolstoy's comment in Anna Karenina: Good compounds are all alike, while bad compounds are each bad in their own way. It seems that the Pharmacia reviewers didn't reject many good structures, but they let varying (and inconsistent) numbers of bad ones through (with no particular correlation to their industrial experience, I should add.) The possible reasons advanced for this variation include personal bias, inattention (and I wouldn't minimize that factor, not in a list of 2000 compounds), and a general human inability to sort through large complex data sets.
And right at the end, the authors allude to a bigger problem: If this is how consistently our med-chem intuition works, how well does it serve us during drug development? In a research project, there are plenty of decisions to be made about what compounds to make, what structural series to emphasize and which ones to set aside. Just how bad at this are we, really? I'm afraid to find out.
Category: Drug Assays
March 1, 2005
How do we accumulate our piles of test compounds over here in the drug industry? Well, mostly, we make them ourselves. But we also buy collections of compounds. Some of them are from other companies that have gone under, and some of them are from outfits that do nothing but produce libraries of (putatively) interesting compounds. (You can buy some that they've already shared with other companies for a discount, or you can have a new one made up for you for a higher price.) That was a much hotter business ten years ago than it is now, but it's still going.
And we buy compounds from university labs. That's a little-known way (outside of chemistry, at any rate) that professors and their research groups earn some extra spending money. (Naturally enough, this practice also leads to some elbow-throwing between the research groups and the universities involved when each of them wants a piece of the profits.) We generally pay a set price per compound, but you wouldn't want to buy every single thing that academia offers. Some of the stuff is quite interesting and useful, but many of the structures will become drug leads only when swine take to the skies.
I've helped evaluate lists of potential purchases before, and they're a mixed bag indeed. Once I looked over a collection from Leo Paquette's group at Ohio State. Now, he and his group did a lot of nice chemistry over the years, and there were a lot of useful compounds on the list. But there were also plenty of intermediates from his famous synthesis of dodecahedrane. Those represented a tremendous amount of effort from his students and post-docs, and were part of the history of organic synthesis.
And I didn't want us to buy them. For one thing, they didn't look much like drugs to me. "But what if they hit in our assays?" said one of my colleagues, trying to make the case that we should buy some. "That's what I'm worried about," I said. What indeed? If one of those structures turned out to be a wildly potent ligand for some protein target, what exactly were we going to be able to do about it? Follow the twenty-nine step synthesis to make more of it? No, in this case, I thought we were better off with nothing than with something we could never use. We passed.
Category: Drug Assays