About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis, and other diseases.
To contact Derek, email him directly: email@example.com
June 29, 2006
Well, although Sanofi-Aventis still hasn't been very forthcoming on their FDA problems, their CB-1 antagonist Acomplia (rimonabant) has now been approved in Europe, and is already on sale in the UK. (S-A still say that they expect it to make it through in the US by the end of the year).
This article from Bloomberg is an excellent summary of the situation. No new obesity drugs have been approved for almost ten years, and potential sales of a safe and effective one are almost impossible to estimate. But there's room to argue about how effective rimonabant is, and (as with any drug) there's always room to argue about safety. And that's particularly true in what some people are already calling the post-Vioxx era.
The article makes some of the same points that I've made here before: new therapies and new mechanisms have risks, and there is no way that we (the drug industry and the regulatory authorities) can get rid of them. We can test for the big ones and read the signs for the smaller ones, but if a new drug is going to be taken by millions of people for long periods, things will happen that no one ever saw during the clinical trials.
I hope, for Sanofi's sake and everyone else's, that there's nothing weird lurking down in the statistical weeds this time. But you can be absolutely sure that the company is holding its breath. We all are. It doesn't help.
+ TrackBacks (0) | Category: Diabetes and Obesity
June 28, 2006
Some people don't seem to believe it, but you definitely can publish too many papers. The problem is, the surest ways of publishing a lot of papers are to not do many things that are interesting or unusual, and to break everything down into tiny pieces. The first requirement is there because unusual results take time - time for you to believe them in the first place, time to replicate them, time to try to explain and extend them. And no matter what, eventually they aren't unusual results any more - they may still be interesting or useful, but the surprise has worn off, as it has to. So there's no one, really, who publishes a lot of fascinating unexpected stuff all the time, because it just can't be done.
The second requirement is the least-publishable-unit problem, which many people succumb to. After all, if you've got some red-hot results, why wait for the full paper? Bang out a communication before someone scoops you. Trouble is, people bang out communications all the time on things where being scooped is the last thing that could ever happen. As I put it once, many of these things could only get scooped in the sense of someone cleaning up after their dog. Short papers don't stick in the memory very well, either. If the stuff never gets condensed and systematized - that is, written up in some full papers - it's hard to get enough attention for it even if there is something worthwhile stretched across the publication list. It would have had a bigger effect were it not so diluted.
Even famous scientists have fallen into this trap, and I would like to adduce the late H. C. Brown as a shining example. Who, during the 1970s and 80s, did not groan on seeing yet another paper from Professor Brown? Variation after variation on his boron reagents poured forth, each with slightly different characteristics and reactivity, later superseded by other variations in the endless series. And the thing is, there are a number of real advances in there - the man didn't get the Nobel for nothing. But there's an awful lot of work that has, to put it kindly, not stood the test of time. Not everything he and his group did was worth being published.
So if a Nobel laureate can tarnish his reputation by acting like his own printing press, imagine what it does for less famous authors. Be quiet, you feel like whispering to them, be quiet until you've got something to say. . .
Category: The Scientific Literature
June 27, 2006
When I was in graduate school, I had a law student as a neighbor for a while. We were both pretty quiet, and got along fine in our respective dinky efficiency apartments, but we couldn't help but notice some differences between our studies. The biggest one became clear around this time of the year: he left, and I stayed. I still remember the look of surprise on his face when I told him that we didn't have any time off.
Well, I know that law students don't generally go off and laze around on the ol' hammock during the summer, but they at least get to go somewhere else for a while. But grad students just keep banging away, and if they're in the sciences, they're up there nights, weekends, and holidays.
Ah, those holidays. I still have in my files a memo from my old chemistry department, reminding everyone that the university (undergraduate) vacation schedule most definitely did not apply to us. Do not attempt to take these holidays was the very pointed message, because we will notice if you do. I sure didn't. I did take off some time at Thanksgiving and Christmas, and I didn't work every single July 4th, but otherwise it was a rare, rare day when I wasn't in the lab.
I've written before about my physical surroundings during that time, but things like this memo didn't add to the festive atmosphere very much, either. On the university level, it became clear pretty quickly that we weren't students, and we weren't staff - well, not all the time, anyway. We were whichever caused the least expense and inconvenience to the school at the time you asked the question.
But the biggest factor was the work. It was a strain. I like variety up here in my head, and this was the first time I'd ever had to do the same thing, think about the same thing, day after day (and night after night). It brought on, eventually, the mental equivalent of a leg cramp - I know for sure that I was in a much crabbier mood during my grad school years than I was afterwards, and I'm sure that it was largely because I was venting off some of the pressure. My project had the usual twists and turns, which during one point just about had me tearing my hair in frustration, but the real problem was that there was no escape from it.
Every hour I spent doing something else, besides necessities like eating and laundry, I found myself thinking about how I'd just added another hour to my graduate studies. When I could have, you know, been back in the lab trying to find a way out of the place. That is to say, doing something useful with my time. In the end, anything that didn't directly involve getting out was classified as a luxury, and I tried to ration such things. I remember going past a TV set at one point that had a golf tournament on, and I found myself amazed at these people - not the golfers, the spectators, these people who felt free enough to just wander off and spend the whole day in a sunny park watching a sporting event, without having to worry the whole time about what it was taking them away from.
All of which is why I reiterate my advice to my grad-school readers, who may be watching the summer weather out the windows of their labs (if they have windows, that is): get out. As far as it's compatible with taking a reasonably honorable and complete degree and not leaving any bad blood behind you, get out. The whole point of graduate school is proving that you can make it out of there.
Category: Graduate School
June 26, 2006
The latest round in the fit-to-never-end saga of the Vioxx APPROVe trial and the New England Journal of Medicine is here. The journal today released a correction of the original paper, a perspective article on the statistics of the original study, and some inconclusive correspondence about the (recalculated) risks.
The correction is notable for removing the earlier statements that risk appeared to take 18 months to develop in the study's Vioxx patient group. And since Merck's made a big deal out of that timing, this has already become the headline story. (I can recommend this overview by Matthew Herper at Forbes).
The perspective article, by Stephen Lagakos of Harvard, may be fairly heavy going for someone who isn't statistically inclined. I include in that group - please correct me if I'm wrong here - the great majority of newspaper reporters who might be covering the issue (Herper and a few others excepted). I'm no statistician myself, but I spend more time with the subject than most people do, so I'll extract some highlights from Lagakos's piece.
He has a useful figure where he looks at the two incidence curves for the Vioxx and placebo groups. These are the curves that have been the source of so much controversy: whether there was an increased risk only after 18 months of Vioxx therapy, or whether the risk was clear from the outset, and so on. As Lagakos points out, in a slap at Merck's public treatment of the graphs:
"It may then be of interest to assess how the cumulative incidence curves might plausibly differ over time. Doing so by means of post hoc analyses based on visual inspection of the shapes of the Kaplan-Meier curves for the treatment groups can be misleading and should be avoided. A better approach is to create a confidence band for the difference between the cumulative incidence curves in the treatment and placebo groups - that is, for the excess risk in the treatment group."
He does just that, at the 95% confidence level. What it shows is that well past the disputed 18-month point, the 95% confidence band still contains the 0% difference line, and there's room around it on both sides. As he summarizes it:
"The graph shows that there are many plausible differences, including a separation of the curves at times both before and after 18 months, and a consistently higher or lower cumulative incidence in the rofecoxib group, relative to the placebo group, before 18 months."
In other words, the data don't really add much support to anyone's definitive statements about Vioxx risks before 18 months. The 95% band only widens out to a plus or minus 1% difference in cumulative incidence rates at a time between 18 and 24 months. By that point, the upper and lower bounds are both creeping up, but the band only rises to an all-positive difference between the two groups at the 30-month mark. By the 36-month point, the last in the study, the 95% confidence band is between a 1% and a 4.5% risk difference for Vioxx therapy compared to placebo.
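For a rough feel of what such an interval looks like at a single time point, here's a minimal sketch in Python. It uses a plain normal-approximation interval for a difference in two proportions, with illustrative made-up numbers of roughly APPROVe-sized arms; Lagakos's actual analysis uses confidence bands built on the Kaplan-Meier curves, which handle censoring and the whole time course properly, so treat this as the idea rather than the method.

```python
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Pointwise ~95% CI for the difference in cumulative incidence
    between two groups (simple normal approximation; no censoring).
    """
    p_a = events_a / n_a
    p_b = events_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff - z * se, diff + z * se

# Illustrative numbers only, not the APPROVe data themselves:
lo, hi = risk_difference_ci(46, 1287, 26, 1299)
# If the interval contains 0, a zero excess risk is still plausible;
# here both ends are positive, so the excess risk "looks real".
print(lo, hi)
```

The point of the exercise is the one Lagakos makes: eyeballing where two incidence curves separate is unreliable, while an interval (or band) for their difference tells you directly which excess risks the data can and can't rule out.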
This doesn't help Merck - in fact, since they've made such a lot of noise about this 18-month threshold, it does them quite a bit of damage. But it doesn't directly help the plaintiffs who are suing them, either - the good news for them is that Merck is looking bad again.
Lagakos goes on to talk about what these demonstrated long-term risks can tell us about short-term ones. Assuming that the risk for, say, 12 months of Vioxx is somewhere between the placebo group and the 36-month figure (a reasonable assumption), these figures will set the upper and lower bounds. The most optimistic outcome, then, is that 12 months of Vioxx does nothing to you at all, compared to placebo, even after another two years of observation. And the most pessimistic outcome is that the Vioxx you took continues to increase your risk the same as if you'd been taking it the whole three years (a damage-is-already-done scenario). Although Lagakos doesn't name these as such, you could call these two boundaries the Merck line and the Trial Lawyer line, because they correspond to what each side would fervently like to believe is true.
Combining this with his 95% confidence band plot, you end up with a figure that shows that, within 95% confidence, the excess risk for a 12-month treatment could still range anywhere from zero up to the worst that was seen in the full-term-treatment group. So, because this range still includes the no-effect outcome, you can't conclude that a shorter course of Vioxx was harmful. But because it includes the data of the out-to-three-year group, you can't conclude it's safe, either. And that's really the best you can do. If you're not willing to make those starting assumptions, you can't really say anything about the shorter courses of treatment at all.
This is, I think, a valid way of looking at the controversy, but in the end, it's not going to satisfy anyone. It makes me think that both Merck and the lawyers going after them will either: (a) pick their favorite sections from this article and beat each other with them like pig bladders, or (b) ignore it completely. (I think that the first one is already happening, with the advantage, for now, to the lawyers). If Merck can make a successful counterattack that the data don't show that Vioxx was harmful for shorter doses, either, perhaps they can get something out of this. That depends, of course, on people believing a single word that they say. Which they're making more difficult all the time.
Category: Cardiovascular Disease | Clinical Trials | Toxicology
June 25, 2006
It's been an awful time to hold the stock of a small company called Neurocrine. They've been developing an insomnia therapy (Indiplon), and things seemed to be going along reasonably well. It's a crowded field. The compound is in the same GABA mechanistic class as the existing drug Ambien (which puts it in the same class as Lunesta, naturally, since that's one of those "Sepracor special" follow-on compounds). But Neurocrine signed up with Pfizer, who weren't getting any of the insomnia market and were willing to give several formulations of the compound their well-known marketing push, including a long-acting one that might have provided an advantage over the competition.
Then a few weeks ago, they got the dread "approvable" letter for the lower doses, and a flat-out "not approvable" for the long-acting formulation. Neurocrine's shares really took a beating as investors wondered if this was an approvable letter of the "new clinical trials needed" variety, and since then, it's become clear that that's exactly what it was. Late last week, Pfizer announced that it had had enough and pulled out of the deal. Neurocrine's stock has, in the space of a bit more than a month, gone from the mid-50s to single digits.
Neurocrine's putting up a brave face, saying that they're going to go ahead and develop Indiplon themselves (and presumably look for another partner as they do so). It's going to be tough, but they may feel as if they have no choice but to try to get the drug through to the market. They have a few things in Phase II, but Indiplon was going to be what paid the bills. Another partner will be essential, because they're going to have to come up with a lot of cash to get the drug through to a point where it could be approved, and to then try to market the drug themselves might be suicidal. By that time, they'd be up against generic Ambien, among other things. I don't see them doing that on their own.
No, I think that their choice is this: either ditch Indiplon completely, contract back to a smaller development company with some stuff in the clinic, and hope for the best - or - cut back on all those other projects and put all the money on Indiplon, trying to clean it up enough to attract another partner. The problem is, the ideal partner would be someone with a big pile of cash, a need for something to fill their pipeline, and a powerful marketing arm to deal with the competition. Someone, in other words, like Pfizer. Who's just told them that they want no part of it.
Category: Business and Markets | The Central Nervous System
June 22, 2006
I had a promotional e-mail from the scientific publishing giant Elsevier the other day. The latest calculated impact factors for journals have been released, so it's time, naturally, for them to brag about how things are going.
The message goes on, in large type, about "Consistently Increasing Impact Factors!". I guess that it's nice to know, for example, that Bioorganic and Medicinal Chemistry Letters has moved up from 2.333 to 2.478. Hey, at that rate, they'll be over 3.0 by the 2010s! O, brave new world that has such journals in it! Imagine being able to unload your failed med-chem projects in a journal with such an impressive impact factor - I'd encourage you to start making plans on how you'll spend the bonus and promotion money.
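Just to show the arithmetic behind that quip: treating one year's gain as a steady linear trend (which impact factors emphatically are not), you can extrapolate when the journal would cross 3.0.

```python
# Extrapolate BMCL's impact factor from the two figures in Elsevier's
# e-mail. One year's change is not a trend -- this is a joke with a
# calculator, not a forecast.
old, new = 2.333, 2.478
rate = new - old                 # +0.145 per year
years_to_3 = (3.0 - new) / rate
print(round(years_to_3, 1))      # about 3.6 more years
```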
Even by these (debased) standards, some of the hype seems a bit. . .forced. Bioorganic Chemistry, for example, is touted as moving up from 1.240 to 1.565 (translation: unimpressive to unimpressive, even if you believe in impact factors). Those numbers make me think that I still have several years of lead time before I'm strongly motivated to look at the journal.
But my favorite blurb is this one: "Heterocycles: WAS 1.064. NOW 1.070". Well, all right, then, spread the news! The impact factor for Heterocycles has moved up in the third decimal place! What, did three more people cite papers from it in 2005? Look, Elsevier knows the truth as well as anyone else: Heterocycles is just not a good journal. But then, it never has been. Back in the 1970s and 80s, it came directly from Japan on expensive glossy paper stock, which along with the sleek black cover made a distinctly weird impression on those infrequent occasions when you actually looked inside a copy. The paper completely outclassed the stuff that was printed on it. Most of the articles could have been titled "Not Particularly Surprising Rearrangements of Bicyclic Imidazo Compounds That No One Cares About". I have seen no evidence that makes me think that the situation has improved.
Actually, my favorite part of the e-mail is what it doesn't mention. You know, when you think about it, Elsevier publishes some other chemistry journals, too. . .where, for instance, is Tetrahedron Letters? You don't suppose they'd miss an opportunity to highlight that one, do you, assuming that there was anything to highlight?
Category: The Scientific Literature
June 21, 2006
Here's a question for my readers in the research community: what assay have you dealt with that turned out to be the biggest waste of time and effort? I can think of several strong nominees, but I'll lead off with one from quite a while ago.
This one happened in an antiviral group, and I believe that they were targeting a viral protease. Several chemists started cranking away on the lead compound, turning out analogs for the primary assay. But there was no decent SAR trend to latch on to. Things would look (briefly) sensible, then fall apart again, and there was only a scatter when you tried to correlate things with the secondary assay.
After some three or four months, the reason for all this became clear (it doesn't always, I have to note). Turns out, as it was told to me, that a biologist on the project had everything tested against the wrong enzyme. Who knows what it was, but it sure wasn't the protease of interest. What's more, he had apparently realized early on that it wasn't the right stuff, and was frantically working in the background trying to get the right stuff running. It never worked out. He ended up generating week after week of meaningless data, hoping that the project would go away. Instead, as it turned out, he went away (and not by choice).
So that's my entry. No doubt horrors will quickly emerge to beat it.
Category: Drug Assays
June 20, 2006
A comment to the last post asked a good question, one that occurs to everyone in the drug industry early in their career: how many useful drugs do we lose due to falsely alarming toxicity results in animals?
The answer is, naturally, that we don't know, and we can't. Not in the world as we know it, anyway. The only way to really find out would be to give compounds to humans that have shown major problems in rats and dogs, and that's just not going to happen. It's unethical, it's dangerous, and even if you didn't care about such things, the lawyers would find something you did care about and go after it.
But how often does this possibility come up? Well, all the time, actually. I don't think that the industry's failure rates are well appreciated by the general public. The 1990s showed that about one in ten compounds that entered Phase I made it through to the market, which is certainly awful enough. But rats and dogs kill compounds before they even get to Phase I, and the attrition rate between project initiation and the clinic is even worse.
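As a back-of-the-envelope illustration of how those stages compound, here's a quick sketch. The only number taken from above is the roughly one-in-ten Phase-I-to-market figure; the earlier per-stage odds are my own illustrative guesses, not industry statistics.

```python
# Chain the per-stage survival odds together. Only the Phase I figure
# comes from the text; the first two probabilities are made up for
# illustration.
stages = {
    "project start -> clinical candidate": 0.3,   # illustrative guess
    "preclinical tox -> Phase I":          0.6,   # illustrative guess
    "Phase I -> market":                   0.1,   # the 1990s figure
}

p = 1.0
for stage, prob in stages.items():
    p *= prob

print(f"overall odds: about 1 in {round(1 / p)}")  # → about 1 in 56
```

However you pick the earlier-stage numbers, the multiplication is the point: modest attrition at each step turns into very long odds from the start of a project.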
So it's not like we take all these rat-killers on to humans, despite what the lunatic fringe of the pharma-bashers might think. Nope, these are the safe ones that go on to cause all the trouble. "Oh, but are they?" comes the question. "How do you know that your animal results aren't full of false green lights, too?" That's a worrisome question, but there are a lot of good reasons to think that the things we get rid of are mostly trouble. For all the metabolic and physiological differences between rodents, dogs, and humans, there are even more important similarities. The odds are that most things that will sicken one of those animals are going to land on a homologous pathway in humans. And the more basic and important the pathway is, the greater the chance (for the most part) that the similarities will still be strong enough to cause an overlap.
But there are exceptions in both directions. We know for a fact that there are compounds that are more toxic to various animal species than they are to humans, and vice versa. But we play the odds, because we have no choice. Whenever a compound passes animal tox, we hope that it won't be one of the rare ones that's worse in humans. But when a compound fails in the animals, there's simply no point in wondering if it might be OK if it were taken on. Because it won't be.
Category: Animal Testing | Clinical Trials | Toxicology
June 19, 2006
So, you're developing a drug candidate. You've settled on what looks like a good compound - it has the activity you want in your mouse model of the disease, it's not too hard to make, and it's not toxic. Everything looks fine. Except. . .one slight problem. Although the compound has good blood levels in the mouse and in the dog, in rats it's terrible. For some reason, it just doesn't get up there. Probably some foul metabolic pathway peculiar to rats (whose innards are adapted, after all, for dealing with every kind of garbage that comes along). So, is this a problem?
Well, yes, unfortunately it is. Rats are the most beloved animal of most toxicologists, you see. (Take a look at the tables in this survey, and note how highly the category "rodent toxicology" always places). More compounds have gone through rat tox than any other species, so there's a large body of experience out there. And the toxicologists just hate to go without it. Now, a lot of compounds have been in mice, for sure, but they just aren't enough of a replacement. The two rodent species don't line up as well as you'd think. And there's no other small animal with the relevance and track record of the noble rat. (People outside the field are sometimes surprised to learn that guinea pigs aren't even close - they get used in cardiovascular work, but that's about it).
So if your compound is a loser in the rat, you have a problem. You can pitch to go straight into larger animals, but that's going to be a harder sell without rat data. If your project is a hot one, with lots of expectations, you'll probably tiptoe into dog tox. But if it's a borderline one, having the rats drop out on you can kill the whole thing off. They use up a lot of compound compared to the mouse, they're more likely to bite your hand, and they're an order of magnitude less sightly. But respect the rat nonetheless.
Category: Animal Testing | Toxicology
June 18, 2006
Well, my last post on biological systems and their ingredients really touched a nerve (see here for an example). I guess I should, um, clarify my position before the leaky bottles of beta-mercaptoethanol start arriving by FedEx.
I already knew the reasons for several of the components I spoke about - EDTA, for example. And I realize that there's a reason for everything that's in there. But what throws me as a chemist is that some of these recipes seem to be handed down "just because they work." Does a particular enzyme prep need EDTA in it or not? Many times, no one checks, because it probably won't harm things and it's better to be on the safe side, so in it goes. It may be hard for a biologist to understand how odd that feels to a synthetic organic chemist, but I can tell you for sure that it does.
One of the commenters on the last post brought up an important point: biologists optimize for the function of a system. And that often means having a lot of buffers, chelators, cofactors, adjuvants, reducing agents, and chaperones floating around in there with your protein of interest, to keep it thinking that it's still in some kind of cellular environment, thus putting it in the mood to do what it's supposed to be doing. There's no point in trying to see how minimal you can make the system if it's working the way you want it to already.
But we chemists are minimalists. We optimize for the function of a system, too, but in our case, purity is usually a good first variable to tune up. The cleaner everything is in our reactions, the better it generally works. That means pure, distilled solvents, with no water in them. It means an inert gas atmosphere, so there's no reactive oxygen around. And it means that your starting materials and reagents should be as clean as you can practically get them, because when there's two percent of this or five percent of that in the flask, things often start to go wrong in unpredictable ways. When a reaction wipes out on us, the first thing we check is whether everything was clean enough.
So you can imagine how biology looks to an organic chemist, whose ideal reaction is a clear solution in a clear glass flask, with one pure solvent and two pure reactants cleanly converting to only one product. Biological systems, to us, look like trying to do science by adding squirts of barbecue sauce to bowls of beef stew. Of course, as the biologists know, the stuff in those bowls was derived from stew (and worse), and was born to the stuff. It won't work unless things achieve a certain level of stewiness, and the surest way to kill it would be to turn an organic chemist loose on it to clean it all up.
Category: Drug Assays
June 15, 2006
You know, I mean no offense to all my pharmacologist friends and readers, but. . .do y'all really know why all those things are in your buffers and solutions? I've been wrestling with this the last few days, trying to straighten out my "vial thirty-three" problem, and it's been interesting.
There's some reducing agent in there, naturally. Can't have those thiols turning into disulfides and balling up the protein, I understand - but does something bad happen if it's not in there? Generally, no one finds out, because, hey, why mess with it? And there's some EDTA, and some salt, and their function is? Well, as far as I can tell, they're also in there because they've sort of always been. Same goes for the squirt of detergent (Brij-35 or some such), and the tiny bit of bovine serum albumin, of all things. It's just part of the old-fashioned recipe from Grandma's Protein Kitchen.
Now, organic chemistry has a little of this, true, but it hasn't reached quite the Ancient Runestone levels of enzymology. We like to use tetrahydrofuran (THF) for a lot of organometallic reactions, for example, but at least we know that that's because THF is a good co-ordinator to metal cations. At least we don't have six other trace constituents in there that we always use whether we need 'em or not. Another example is how we tend to stick to good ol' ethyl acetate and hexane to run TLC plates, rather than look into other solvent combinations that might do a better job - probably because there are just too many of them to investigate, and EtOAc/hexane works well enough.
And that, I think, is the problem that the biologists face. Biochemical systems are tricky. They have way too many variables, which means that their degrees of freedom have to be reduced just to get anything to work. So all sorts of recipes and rules of thumb are handed down. Not all of them are optimal, but they're mostly decent and will allow you to get on with the project without wasting too much time. Especially in the early part of a project, an immediate 70% effectiveness is worth a lot more than a 98% that would take you a month of work to tweak up to.
Category: Drug Assays
June 14, 2006
When I meet people with no particular scientific background and they find out what I do for a living, it seems that there are several things that they're usually surprised about. For one thing, many people seem to think that doctors discover new drugs. Some of them don't even think about the drug companies or their role - and if they do, they imagine a lot of doctors working there. Actually, as my readers in the industry can confirm, the only time that physicians really get involved is when the drug is headed into the clinic and dosing in humans. There's not an M.D. in sight while we're validating drug targets, screening compounds, and working to fix their selectivity and activity. (And there's that noisy subset that think that all drugs are discovered in NIH-funded academic labs, but we'll leave that one alone for now).
Another surprise is when people find out that I've been doing this since 1989 without getting any drug on the market. I think that some folks are just being polite when I tell them that this isn't unusual, thinking to themselves that I must be some kind of hack. But the general public has, as far as I've been able to see, a very exaggerated idea of how quick and easy it is to find a drug. When I say that if I found a wonderful new compound tomorrow that it might be on the market in about 2015, they think I'm delusional. I wish I were.
There are others. I've met people who didn't realize that patents ran out eventually, that we don't find all our drugs by computer modeling, and that we always have to run clinical trials before we can sell something new. I have to think that the industry would be in better shape if people understood what drug discovery is like. I appreciate the various ads that companies have run over the years, but it's clear that most people mentally tune them out immediately. What's unclear is how this could be fixed, because I don't see how more advertising is going to do the trick.
Category: Why Everyone Loves Us
June 13, 2006
Update: The deal is done! A last-minute pile of money made all the difference, as it so often does.
The Bayer-Schering AG-Merck KGaA brawl I wrote about the other day has increased in complexity and craziness. According to the Financial Times, the latest figures appear to show Bayer with a slightly smaller stake in Schering than it had before, which (the paper speculates) could mean that some shareholders have withdrawn their tendered shares from Bayer and sold them to Merck instead.
Meanwhile, according to Reuters, there's going to be a Bayer ad in the Boersen Zeitung tomorrow that quotes an even lower figure than the FT has, with a breakdown that seems to show that previously tendered shares have indeed been withdrawn. The Reuters figures on Merck's ownership, though, make it seem as if these shares haven't necessarily been sold to Merck, but are just being held for a better offer. I didn't realize that you could do that, but I think everyone in this fight is learning things that they didn't know before.
Meanwhile, Bloomberg is saying that Bayer has filed suit in the US, claiming that Merck has violated securities law. That story also says that Deutsche Bank, who advised Merck in their original bid for Schering, is still with them in this fight. Perhaps they know what's going on, because no one else seems to.
I include the BBC in that list, because while they also have the lawsuit story, as of Tuesday night EST they seem to be under the impression that the US Merck is involved, rather than Merck-Darmstadt. Things are confusing enough already, thanks.
The deadline for the deal is midnight Wednesday, although I can't seem to find out whether that's Central European time, EST, or what. And it looks like that's going to be an important detail. Keep an eye on your favorite financial news sources to see who wins the German Pharmaceutical World Cup Final!
+ TrackBacks (0) | Category: Business and Markets
I've written before about how seemingly obvious combination therapies can be hard to develop. Last Friday we had some more evidence of that.
Pozen, partnering with GlaxoSmithKline, has been trying to put together two well-established drugs for migraine: GSK's Imitrex (sumatriptan) and the OTC pain reliever naproxen. But the FDA sent them the dreaded "approvable" letter, requesting more information (which, who knows, might require more studies) and Pozen's stock took the plunge. (Another small company, Neurocrine, went through a similar wringer a couple of weeks ago - I'm backed up on writing about that, but I hope to soon).
Analysts seem to be optimistic about the drug's eventual chances, but that doesn't do you much good if you were holding the stock before the FDA letter. These "approvable" letters seem to have been increasing in frequency the last couple of years, and it's turning into a real problem. Such a letter either turns out to be not too big a deal, in which case a company's stock has been slaughtered for nothing, or it turns out to be such a big deal that you wonder why the company (and the agency) didn't come to some understanding about it before.
Is the FDA too risk-averse, or are companies trying to get too-thin NDAs through? My money's on the first explanation, in most cases. I think that the whole COX-2 debacle has helped to put the agency into a better-safe-than-sorry mode. I understand the need for caution, but (here comes a general principle of life): just because you can mess things up in one direction doesn't mean that you can't mess them up in the opposite one. Saying that the agency is too cautious doesn't mean that I think that they should let everything through - but letting some things through would be nice.
(See this Business Week piece for more on the approvable problem).
+ TrackBacks (0) | Category: Drug Development
June 11, 2006
As those who follow the drug industry know, Bayer has been making an offer to acquire Schering AG of Berlin. This was after Merck KGaA (Darmstadt) had made a bid, which Bayer topped. They're trying to get 75% of the shares pledged, but it's been a hard slog. Many large investors seem to be hanging back, hoping for a bigger payout later on (as has happened in some other European takeover deals), and Bayer ended up extending the deadline to tender shares.
But on Friday of last week, Merck suddenly upped periscope and fired bow tubes. They filed paperwork with the SEC that they'd exceeded the 10% regulatory threshold of ownership of. . .Schering AG. They'd been out quietly buying the shares on the open market, apparently, and their statement said something about "protecting their investment".
The best German word for the reaction to this news is Bestürzung, translated into English as "consternation" or "dismayed confusion". Bayer issued a statement calling Merck's actions "incomprehensible", and pointing out that Merck was paying a price per share that they'd already dismissed as too high. So it'll be interesting to hear everyone's reaction to the latest news: Merck now says that they have 18.6% of Schering, and Bayer says that they're up to 61.5%.
It's fair to say that things like this don't usually happen in the German industrial world, so no one really knows how to deal with this one. Bayer's latest deadline is Wednesday the 14th, and if those figures are accurate, only about 20% of Schering AG shares are unaccounted for. Might as well start the popcorn popper and open a cold drink, and sit back to watch the show.
+ TrackBacks (0) | Category: Business and Markets
Friday afternoon I got my results for the repeated experiment I spoke about here. Unfortunately, it matches the second run (the one that looks like garbage) rather than the first one (which looked like something wonderful). Unless I can think of some reason why that first run was different than the other two - and I was trying to make all three exactly the same - this forces me to conclude that the first experiment was some sort of false-positive artifact.
That's particularly hard to take because it looked so believable. The few colleagues I showed the initial data to were impressed by how clean it looked. And it made chemical sense as well, but that's all very close to being beside the point. I'm just glad that I didn't run up and down the halls showing it off, but I've been doing science too long to do that with n-of-one data, and this is a good illustration of the point.
So, what next? Well, when I set up that first experiment, I also ran another one on a different system, which has been in the freezer since then. It appears that freezing these experiments doesn't hurt them, so I'm going to try to thaw those out and get them analyzed. And none of this affects the positive results I spoke about here, on the model system. Those actually have repeated and made it past the control experiments. I've got an extension of that work coming up as well.
What I may have to do is fall back to the model work and beat on it some more. It looks like I need to see if I can understand more of what's happening before I try these gold-medal real world experiments again. I still have something - just not as much as I thought I did a couple of weeks ago.
+ TrackBacks (0) | Category: Birth of an Idea
June 8, 2006
There's an article in Wednesday's Wall Street Journal (subscriber-only link here) (Update: also available freely here - thanks to Kyle of The Chemblog for finding this) on Merck's head of research, Peter Kim. It's well-written, in the sense that depending on how you come to it, you could come away with very different conclusions. If you're a fan of Kim and his approach since he took his current job, then you may well see a portrait of a driven, hard-working scientist struggling to change an insular, arrogant research culture and drag it into the real world. But if you're not so sure about Kim's managerial virtues, you can find evidence that he's in well over his head.
As the article notes, one of the big changes he's made is the number of deals that Merck has been signing. To be fair, the company was probably going to pick up the pace on outside collaborations anyway when its late-stage pipeline took so many hits, but maybe not to this extent. Much is made of a "charm school" operation where Merck's people were supposedly told not to be so haughty with potential small-company partners. I find it hard to imagine that this made a huge difference, though. Merck most certainly does have an attitude, even now, but I have to think that small company pitchmen are used to getting the same stuff everywhere they go.
Everyone knows the score at these presentations. The people from the smaller outfit are saying "We have something that you don't. Even though you're big and have more money than we do, believe us, you want this." And their counterparts on the other side of the table are saying "Prove it. We know that you think we're a big piggy bank to be turned over and shaken, but no nickels are coming out until you show us something more than snappy PowerPoints". The glad-handing approach that the article portrays Kim as using sounds to me like a recipe for overpaying for deals.
But my favorite part is on the various departures that have taken place:
Soon after he arrived, he angered Emilio Emini, Merck's senior vice president of vaccine research. During his 20 years at the company, Dr. Emini had done some seminal AIDS work. Dr. Kim wanted to hire another accomplished but controversial AIDS researcher, David Ho, to oversee him. Dr. Emini strongly objected. . .(and) left Merck in early 2004. He now works for rival Wyeth. . .
Veteran Merck research managers such as Kathrin Jansen, who was instrumental in the development of (cervical cancer vaccine) Gardasil, and Scott Reines, a top researcher in psychiatric diseases, also took jobs at other pharmaceutical companies. . .Dr. Kim hired other academic scientists who enjoyed good reputations but, like him, had never developed a drug. . .
Not having developed a drug is no particular shame - all of us in the industry start out never having done that. The thing is, we also start out knowing that everyone else in the place knows more than we do about it. High-level academia transplants have a poor track record in the drug industry - if you'd like some more evidence, you can ask some people with a few years of experience at Bristol-Myers Squibb. Kim is probably correct when he says that Merck had too much of a "That's not how we do things here" attitude, but people sometimes forget that academia has no immunity to that disease, either.
Update: I also recommend checking out the take at Health Care Renewal, from an ex-Merck employee.
+ TrackBacks (0) | Category: Academia (vs. Industry) | Drug Industry History
June 7, 2006
Most of the things we use in an organic chemistry lab can sit around for reasonable periods of time. I've used reagents from bottles that are older than I am (OK, this was twenty years ago, so it's getting a bit harder to do). As long as the stuff isn't air- or moisture-sensitive, it can hang around a long time. That lets out the violently reactive things - don't expect to find the same piece of potassium metal you left if you're silly enough to leave it out while you answer the phone, for example. (In fact, you'd better pick up a fire extinguisher on the way back, just on general principles).
But there are some reagents that don't react with air, but rather react with themselves, which can make them particularly hard to handle. If the reaction is exothermic, things can get dangerous. The heat given off by the first bit that reacts tends to set off some others, which really gets a good amount going, and ba-doom. Even if you're not in the ba-doom category, there are some things that you need to look out for. Styrene, for example, is always sold with some free-radical inhibitors in it, because if it gets a chance for a radical chain reaction to start, the whole bottle will seize up into a warm gunky block of polystyrene. (That's for small bottles - larger ones won't be able to transfer their heat so well to the surface of the container and can make a much bigger mess).
Benzaldehyde isn't so violent, but it slowly forms a six-membered-ring trimer on standing. Update: I've been carrying this idea around for twenty-five years now, but it's wrong. The nonaromatic aldehydes love to trimerize, but benzaldehyde and the other aromatic ones don't. The solid gunk is benzoic acid, from air oxidation, which is a separate category of How Reagents Go Bad. Old bottles can have some crusty crystals of the stuff around the neck of the bottle. The reagent is one of those things that you really have to distill before using it if you want to trust your results. Update: This point is definitely still true!
The extreme case in the self-condensation category is probably cyclopentadiene. It does a Diels-Alder reaction with itself first chance it gets, so it's always sold as the dimer. If you want the pure monomer, you have to distill for it. The dimer cracks thermally, so the vapor condensing at the top of the still is a different substance than the stuff down in the pot. Collect it, keep it on ice, and use it - the diene's a ware that will not keep.
+ TrackBacks (0) | Category: Life in the Drug Labs
June 6, 2006
Well, I got my repeat experiment set up before leaving work today. I could think of one variable I hadn't controlled for head-to-head yet, so I set up an extra couple of vials for that one. I'll try to get them analyzed tomorrow afternoon or Thursday morning, depending on how busy they are downstairs.
Getting this experiment going was a different feeling than when I ran it that Saturday. I was very eager and nervous that day, because I'd just had potentially great results and was ready to verify them as quickly as I could. (I had no way of knowing that the instrument needed for that was going to be out of service for two weeks, naturally). Today's repeat had some nervousness to it, but it was more along the lines of dread than the earlier anticipation.
I'm worried now that what I saw the first time is some kind of artifact, caused by something I haven't been able to anticipate. It looked very orderly, very clean, and quite believable, in its spectacular way. But yesterday's data had a more familiar look to it. It's really quite rare to get experimental results that are totally unequivocal - so many of them are a mixed, partly inexplicable bag. "Tell me - yes or no!" the experimentalist shouts, and the reply comes back "Dunno. . .sort of. . .I think. . .but maybe not, y'know?"
So by those standards, the first experiment, clean though it looked, is the suspicious anomaly. Here's hoping I'm wrong about being wrong.
+ TrackBacks (0) | Category: Birth of an Idea
I finally have some more news about the experiments I spoke about here. The instrument used to analyze them broke down completely - not my fault, they tell me, but perhaps they're being kind - and came back on line just in the last couple of days. Yesterday we took the samples out of the freezer, where they'd been living for two long weeks, and ran them late Monday afternoon.
And the data make no sense to me at all. For example, some of the vials that were designed to shut down the effect I'm seeing actually made far and away more product than anything else. That's so odd that it doesn't even make sense as a negative result, which would have had those vials acting the same as the others. I can't come up with any reason why they'd be the best in the lot, that's for sure. There's also a lot of scatter between some of the duplicate runs, which leads me to think (well, hope) that this wasn't a failed experiment as much as a bungled one. Whether I hosed it up while working on it that Saturday, or whether it didn't take well to sitting in the freezer, I don't know.
Of course, it could be that these ugly figures are the real results, which would fit with Nature's well-earned reputation for heartlessness. There's only one way to find out - I'm setting everything up again today, with a few more variations to address any of the other variables I can think of. My first data still look so clean that I'd hate to think that this latest junk is the real face of things. But we'll know soon. I'll keep everyone informed.
+ TrackBacks (0) | Category: Birth of an Idea
June 5, 2006
There's an interesting scandal brewing in synthetic organic chemistry - well, actually, more than one, but I haven't covered the Sames matter at all. This is a new one. Back in February, Angewandte Chemie, one of the most prestigious outlets for organic synthesis we have, published online a paper by James J. La Clair on the total synthesis of a nasty molecule called hexacyclinol, originally isolated from a Siberian fungus.
The paper is remarkable in several ways, and not just because I'd never heard of La Clair. The synthesis is over 30 steps long, which is unfortunately not as uncommon as it should be. (I'm afraid that my bias against total synthesis is showing). But La Clair is the only author, which is highly unusual for such a large effort. And it must have been a large one, since the paper makes reference to starting on a molar scale and finishing with over three grams of the penultimate intermediate. Experienced organic chemists will wonder if two or three decimal points have been misplaced there, but that's what it says.
Here's a paragraph for my fellow synthetic geeks - everyone else can skip ahead. When you read it closely, this synthesis has some pretty odd steps in it. One oxidation (aldehyde to acid in the presence of a dithiane) is accomplished through the slow addition of silver oxide in paraffin wax, of all things. If that's a reagent combination that's ever appeared in the literature, I've missed it. Silver oxide, sure - but not delivered by a cheese grater. There's a Mitsunobu inversion, via thiophenol, which occurs on a brutally hindered tertiary alcohol, which is certainly not something I'd expect to happen, or count on midway through a thirty-odd step route. A bit later, La Clair has a mesylation that's accomplished by adding methanesulfonyl chloride/triethylamine once an hour for five hours, which is sort of believable, as the kind of thing that you're driven to by frantic experimentation, but still a bit odd-sounding.
As mentioned, La Clair is the sole author, with an address given at the Xenobe Research Institute. The usual reaction to that statement is "The what?", as I've found empirically by wandering down my hallway at work. (Or, as Stiles puts it, "not to be confused with the Scientology outpost in low orbit around Mars") Xenobe's site is a bit odd, giving off the distinctive feel of a one-man operation. I particularly like what happens when you click the "Support" button and are informed that the Institute is not accepting donations at this time. Before Xenobe, La Clair was at Bionic Bros. GmbH, in Berlin, which sounds unavoidably like a firm from a William Gibson novel. This is where much of the synthesis was done, according to a footnote in which he acknowledges, glancingly, "the assistance of five technicians". (In his defense, that's very much the German style of chemistry, for better or worse).
Now we get to the brow-furrowing part. In the preprint section of the ACS journal Organic Letters, Scott Rychnovsky of Cal-Irvine unveils a computational technique for predicting the carbon-13 NMR spectra of complex structures. His test case is. . .hexacyclinol, La Clair's baby. But according to Rychnovsky, the published structure for the natural product has to be wrong. His method seems to work quite well on similar polycyclic terpenoid nightmare structures, but feeding the accepted hexacyclinol structure into it yields a terrible correlation.
So what's the correct structure? Rychnovsky points out that a related species of fungus has been shown to produce another natural product, panepophenanthrin. If that reacted with some methanol and a bit of acid, which might easily happen during the isolation procedure, it would produce a compound with the same molecular weight as hexacyclinol. . .and that structure, run through the NMR predictor, gives a fit that's right in line with the other known cases he used. Rychnovsky's quite sure that his proposed structure is the real structure of hexacyclinol.
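For readers who'd like a feel for how this kind of comparison works, the basic idea is simple: you scale the computed carbon-13 shifts onto the experimental ones with a linear fit, then look at how far the scaled values deviate. A small root-mean-square deviation means the candidate structure fits; a big one means it doesn't. Here's a minimal sketch in Python - the shift values are invented for illustration and are emphatically not the hexacyclinol data:

```python
# Toy illustration of comparing computed vs. experimental 13C NMR shifts.
# All numbers below are made up for demonstration -- not hexacyclinol data.

def fit_and_rmsd(calc, expt):
    """Least-squares linear fit of calculated shifts onto experimental
    ones, then the RMS deviation of the scaled values (in ppm)."""
    n = len(calc)
    mx = sum(calc) / n
    my = sum(expt) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(calc, expt))
             / sum((x - mx) ** 2 for x in calc))
    intercept = my - slope * mx
    scaled = [slope * x + intercept for x in calc]
    return (sum((s - y) ** 2 for s, y in zip(scaled, expt)) / n) ** 0.5

expt  = [14.1, 22.7, 31.9, 68.2, 128.5, 170.4]  # "experimental" shifts
good  = [14.8, 23.5, 32.6, 69.0, 129.9, 172.1]  # predictions that track experiment
wrong = [14.8, 35.0, 32.6, 55.0, 129.9, 150.0]  # several carbons far off

print(round(fit_and_rmsd(good, expt), 2))   # small RMSD: plausible structure
print(round(fit_and_rmsd(wrong, expt), 2))  # much larger RMSD: wrong structure
```

Rychnovsky's actual method is considerably more sophisticated (the shifts come from quantum-chemical calculations, calibrated against a set of known natural products), but the accept-or-reject logic comes down to a deviation measure of this general sort.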
But if it is, how on earth did La Clair get the data he has? His paper includes a proton NMR of the natural product and one of his synthetic material for comparison. They're identical. But if Rychnovsky's right, La Clair synthesized the wrong structure entirely. The spectra shouldn't match at all - that's one of the remaining reasons for total synthesis, to make the compound and see if the spectral data really fit. Now, Rychnovsky's argument hinges on the carbon spectrum, but that should be easy to obtain, given the monstrously huge scale that La Clair seems to have been working on. And given the discrepancy between the two proposed structures, I can't see how the proton NMRs can possibly line up by chance.
The strangest part of La Clair's paper is its final footnote, added in proof. Here's how it starts; make of this what you will: "The 1H NMR spectra for this Communication were determined by contract services. The spectra provided in the Supporting Information were collected by N. Voss (Berlin, Germany). The operator added the peak for CDCl3 to the spectrum of synthetic hexacyclinol (1), however, this was done incorrectly at 7.5 ppm and against the request of the author." That doesn't make a whole lot of sense. The NMR operator "added the peak" for solvent to a spectrum? Why? And he put the peak in at 7.5 ppm (the wrong place, for non-chemists)? With what, Photoshop? No, this is very strange indeed.
One of these guys is wrong. And reading Rychnovsky's paper, it's clear that he's not in much doubt about who it is: "Recently, a provocative synthesis of hexacyclinol was reported (footnote to La Clair's paper), and interest in the paper triggered my reexamination of the original structural assignment." By the standards of organic chemistry, that's a gloved slap in the face in the public square. Someone at Angewandte Chemie should probably be feeling the sting, too.
Thanks to Dylan Stiles for calling this business to my attention - his post's comments, which are much more potentially libelous than things tend to get around here, are well worth a read for those interested. Update: La Clair has made an appearance in Dylan's comments, rather to everyone's surprise, I'd say. Still no word on a C-13 spectrum, though.
+ TrackBacks (0) | Category: Chemical News | The Scientific Literature
June 4, 2006
Over the last few years, there's been more attention paid to a problem in cancer therapy that is going to keep us all very busy: drug resistance. Everyone's heard about this topic in reference to antibiotics, and with good reason. But the same thing happens in oncology, which makes sense. Despite a lot of major differences, in both cases we're trying to kill off robust, fast-dividing cells that have a lot of genetic variation in them. Anything that doesn't respond to the drug is going to have an open field in front of it.
The situation in cancer might actually turn out to be worse than in antibiotics, disturbing though that sounds. For one thing, cancer cell lines are often rather genetically unstable, which may well be how they ended up becoming cancer cell lines in the first place. So mutants are pretty easy to come by. Counterbalancing that, they don't have a quick way of transferring genetic material to each other like bacteria do, which means that we don't have to restrict the use of the therapies like we have to with antibiotics. Each patient is an island, fortunately.
The real difficulty is that antibiotics are typically taken for a set course of treatment - you knock the infection down enough to where the patient's immune system can clean up the rest, and everything's done. But cancer therapies, the kind that we're turning out now, are likely going to be more like insulin is for diabetics - you're going to be taking them for a long time, quite possibly for the rest of your life, which gives plenty of time for something bad to happen. It's impossible to know whether all the cancer cells disappear, or whether they're just lying low. So no one's sure yet what will happen if you go off of the drugs, and as you can imagine, that's data which is going to be hard to obtain.
Gleevec (imatinib) is a good example. There are all too many patients who have taken the drug for longer periods and have seen it lose its effectiveness, which must be really a wrenching experience. The kinase that the drug targets (Bcr-Abl) turns out to have a number of mutant forms that are unaffected by Gleevec, so any cells that have (or develop) these variants are free to cut loose. Interestingly, it may be the case that Bcr-Abl itself sets up conditions inside the cell that favor development of mutations, which for cancer cells could be something of a survival tool.
The only way around such problems is to make new drugs, just like in the antibiotic field. Two of the most advanced ones are AMN107 (nilotinib) and BMS354825 (dasatinib). Dasatinib had a good ASCO meeting, with an FDA committee recommending its approval, and with new data being presented comparing it head to head with Gleevec. So far, it looks like it's superior to higher doses of Gleevec in CML patients who've started to show resistance, but this is all with blood markers (as opposed to real survival data, which naturally takes longer to come in). But so far, so good.
These might remain useful for longer, since their binding modes are somewhat different than Gleevec's, and whole classes of mutant Bcr-Abl forms are still susceptible. But resistance will surely keep cropping up. We're going to be at this for a long time.
+ TrackBacks (0) | Category: Cancer
June 2, 2006
Razib over at Gene Expression dropped me a note about a petition titled "Conservatives Against Intelligent Design". I know that many of my readers don't necessarily share my political views (and this blog isn't explicitly political in nature, anyway). But anyone who'd like to help point out that many people who lean right actually think Intelligent Design is untestable and untenable can sign here.
We now return to the ASCO-centric world the blog will inhabit for the next few days (see the post below, and to come).
+ TrackBacks (0) | Category: Intelligent Design
June 1, 2006
Well, the American Society for Clinical Oncology (ASCO) meeting is almost upon us, and it's time for the annual blizzard of misinformation. I'm not talking about the presentations at the meeting, which are no better or worse than the usual scientific meeting. No, I mean the press releases and subsequent press reports, of which the Reuters item I'm going to highlight today is a depressingly good example.
The headline reads "Big Pharma Expected to Dominate Key Cancer Meeting", which isn't such a good start. Any time you see the industry being divided up into "Big Pharma" and "Biotech", as this piece does, an alarm bell should go off in your head. We need to get clear on what "biotech" means, or dump the term altogether. I'm in favor of the second choice, although that's not going to happen, because all the categories are mixed up, anyway. The tiny-DNA-and-protein versus big-chemical-drug storyline doesn't work so well these days. If Genentech and Amgen aren't Big Something, I'd like to know who is. And on the other end of the scale, was Sugen a "biotech" because they were small (even though they make small organic molecules instead of protein-based drugs)? How about Vertex, or OSI?
My favorite part of the article is this one:
Big pharma's interest in cancer comes about five years after Novartis' launch of the targeted leukemia drug Gleevec.
Gleevec was initially expected to be a niche product, but its effectiveness and benign side-effect profile led to sales last year of $2.2 billion.
Let's take those in order. "Big Pharma's interest in cancer" has, in fact, been pretty constant. It's our success that comes in fits and starts. The article makes it seem as if we can turn on the clinical research tap at will - when we finally get around to it, anyway. But there are no sudden waves of interest that show up in clinical research meetings - you're seeing the end result of decisions taken eight or ten years ago. When do you think we started the projects that are now being presented at ASCO, anyway?
And as for Gleevec, which is a fine drug that does well by its small intended patient population, let me say (again) that I think that a good amount of it is being wasted. There are, to the best of my knowledge, not enough people with GIST or CML (the two cancers that it's been approved to treat) to account for its sales, not even nearly enough. Gleevec was indeed expected to be a niche product. In terms of the people it can effectively treat, it still is.
It's not for lack of trying. Here are a few attempts from just the last few months: endocrine tumors, renal cell carcinoma, metastatic melanoma, germ cell tumors, refractory myeloma, and advanced hepatocellular carcinoma. In some types of tumor, Gleevec may actually make things worse.
Again, I'm not going off on Gleevec because it's a bad drug. It isn't. It's pretty typical of what we have to offer these days in cancer: very good effects in a small number of people, some help for a slightly larger number, and nothing much for most. Talk of a "benign side effect profile" is ridiculous for many of the newer agents, because they can only be considered benign compared to the old ones, which were toxicologically the scourge of the earth. Compared to cisplatin, sure we look good. Who doesn't?
There will surely be more of this kind of thing over the next few days. My advice is to ignore the cancer news until things calm down a bit and we can get a better read on what's really happening. There's going to be too much dust in the air for that this weekend.
+ TrackBacks (0) | Category: Cancer