About this Author
Derek Lowe, an Arkansan by birth, got his BA from Hendrix College and his PhD in organic chemistry from Duke before spending time in Germany on a Humboldt Fellowship for his post-doc. He's worked for several major pharmaceutical companies since 1989 on drug discovery projects against schizophrenia, Alzheimer's, diabetes, osteoporosis and other diseases.
To contact Derek email him directly: firstname.lastname@example.org
September 30, 2008
Today brings the news of which areas Pfizer has decided to bail out of: obesity, most cardiovascular (it seems), anemia, osteoporosis and some osteoarthritis, liver disease, and muscle. They're concentrating on oncology, pain, Alzheimer's, and diabetes, which the company seems to have identified as the best intersection of their pipeline and the associated profits.
This will probably fuel speculation that the company is Imclone's mystery bidder - that name will supposedly be revealed at midnight on Wednesday, if I'm reading these reports correctly. If so, that makes me want to groan and roll my eyes. I'm waiting for Carl Icahn to tell everyone that they'll have to say the secret password to find out.
The news item linked above also mentions that Pfizer has shed ten thousand employees since January of last year. Yikes. And on that subject, I hear from several sources that GlaxoSmithKline is cutting preclinical development hard. People seem to have known that it was coming, and roughly how bad it would be, but today is supposedly the day that names are read off the list. Good luck to people there. The contractions continue.
There's no longer any doubt, in case anyone was wondering, that this is the worst stretch for research employment at the big pharmaceutical companies in at least twenty years (to my certain knowledge) and very likely much longer than that, from what longer-serving colleagues tell me. Frankly, I'm not sure we've ever seen anything quite like this - which makes further prediction impossible. . .
Category: Business and Markets
September 29, 2008
For the most part, the biologists on a drug discovery project expect us in the med-chem labs to be able to make pretty much anything we need to make. Actually, I don’t have to go that far – the other chemists more or less expect that, too. Chemistry’s a big field, with a lot of reactions and techniques, and if you want some particular structure badly enough, there are usually ways to get to it.
But not always, and not always by routes that you’re willing to put up with. That’s especially true early in a project when you need some robust chemistry to turn out a lot of diverse analogs quickly, so you can have some idea of which parts of the molecule are most important. Synthetic trouble at this stage is frustrating for everyone involved.
I was on a project a few years ago that ran into this exact problem. Compounding the pain was the way the lead compound looked when it was up on a screen during a meeting: small, perfectly reasonable, easy to deal with. Hah! It was a werewolf, that thing. None of the ideas that we had ever worked out the first time, and many of them never worked out the last time, either. Meeting after meeting would take the same format when there were outside managers or other chemists present: “But why don’t you just. . .” “We did. It doesn’t work.” “But then you should try. . .” “We know. We tried that. It doesn’t work.” “Well, OK, but then you could always come around and. . .” “We could. If it worked. But it doesn’t.”
New chemists would be added on to the program to try to get things moving, and they’d always come in rolling up their sleeves, muttering “Do I have to do everything myself around here. . .” How do I know? Because I was one of them. Within a month or two, though, I was in the same shape as everyone else on the project, looking at a bunch of NMRs and mass spec traces and trying to figure out what went wrong. Meanwhile, helpful folks would wander past the whiteboard and ask me how come we hadn’t tried the reaction that had just failed for the eleventh time. Eventually we learned to offer the more persistent questioners a supply of our starting material so they could solve the problem themselves and be heroic, but nothing ever came of that.
The project managed to stagger to a clinical candidate, but ran into mechanistic problems in the more advanced animal models. (That was really the hot fudge topping on the whole sundae – this was one of those therapeutic areas whose definitive animal models were too complex and costly to run until you were absolutely sure you had The Compound). I haven’t run into one quite like this since, and with any luck, I never will.
Category: Life in the Drug Labs
September 26, 2008
I wrote back in the summer about the FDA's delayed decision on Lilly's potential antiplatelet blockbuster Effient (prasugrel). Well, those three months have zipped right by, and the agency is supposed to rule today.
Prediction, for what it's worth: I think the drug will be approved, but with its label restricted to the group(s) that seemed to respond best to it in trials - who may have been, at least partly, the groups that could best tolerate the associated bleeding, too. So no elderly patients, no low-weight ones, and no one with a history of stroke or TIA. That'll cut down the market for the drug, definitely, but not as much as if it doesn't get approved at all, right? I think the FDA will require Lilly to keep a careful eye on how prasugrel performs in the real world while they wait for the results of the next trial to come in, with a possible label-language change to come at that point.
I'll give that option about a 70% chance. The other 30% is that they delay things yet again, since the agency has been in a delaying, risk-averse mood these days. We'll know soon. This new policy of not issuing those irritating "approvable" letters has made this sort of thing rather more tense, hasn't it?
Category: Cardiovascular Disease | Regulatory Affairs
So it seems that Bristol-Myers Squibb took my advice (yeah, sure) and made an insultingly incremental counteroffer for Imclone, raising their $60/share all the way to. . .$62. I was hoping for something more like $60.25 myself, but you can’t have everything. (I should send them a bill for consulting services and see how far that gets me).
Carl Icahn has replied in yet another public letter, saying that there must be more productive ways for BMS to enrich its lawyers. I notice that the folks at the Wall Street Journal’s Health Blog are getting tired of the extended correspondence between Icahn and BMS’s Jim Cornelius. Although I’m still enjoying the show, I can see where it will eventually pall.
Icahn claims that his mystery $70/share bidder is doing due diligence, which should be completed this weekend. You’d think that any due diligence worth the name would tell someone not to pay $70/share for Imclone while Erbitux is still tied up with Bristol-Myers Squibb and its successor’s status is still very much in doubt. Wouldn’t you? Just how long does it take to run those numbers, anyway? Especially in this financial market, with credit tightening and the investment banking community in chaos? Or is the whole thing just a load of. . .no, no, Carl Icahn wouldn’t stoop to tactics like that. And I am Marie of Rumania.
My prediction: 64% chance that the companies agree, with much face-saving theater, at a price of about $65 per share. 35% chance that the whole business falls apart for now, due to the uncertainty about IMC-11F8. And that leftover 1% chance is that there really is a $70/share bidder.
Category: Business and Markets
September 25, 2008
I'm hearing reports that Pfizer is telling employees in various therapeutic areas right now that there will be deep cuts coming, and that more details will be coming out in about two weeks (individual-level layoff notices, etc.) I gather that obesity research is being hit hard, and some others as well - but any details from people in a position to know would be appreciated.
This is a heck of a time to be laid off, that's for sure. Here's hoping that things aren't as bad as I'm hearing. . .
Category: Business and Markets | Current Events
Want a hard problem? Something to really keep you challenged? Try protein folding. That'll eat up all those spare computational cycles you have lounging around and come back to ask for more. And it'll do the same for your brain cells, too, for that matter.
The reason is that a protein of any reasonable size has a staggering number of shapes it can adopt. If you hold a ball-and-stick model of one, you realize pretty quickly that there are an awful lot of rotatable bonds in there (not least because they flop around while you're trying to hold the model in your hands). My daughter was playing around with a toy once that was made of snap-together parts that looked like elbow macaroni pieces, and I told her that this was just like a lot of molecules inside her body. We folded and twisted the thing around very quickly to a wide variety of shapes, even though it only had ten links or so, and I then pointed out to her that real proteins all had different things sticking off at right angles in the middle of each piece, making the whole situation even crazier.
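If you want a feel for just how staggering that number is, here's a back-of-the-envelope count in the Levinthal spirit. The two-rotatable-bonds-per-residue and three-states-per-bond figures are illustrative assumptions for the sketch, not measured values:

```python
# Crude Levinthal-style estimate of conformational space. Assumes each
# backbone residue contributes two rotatable bonds (phi and psi) and each
# bond samples only three staggered rotamers -- both numbers are
# deliberately conservative, illustrative assumptions.

def conformation_count(residues: int, bonds_per_residue: int = 2,
                       states_per_bond: int = 3) -> int:
    """Rough upper bound on distinct backbone conformations."""
    return states_per_bond ** (bonds_per_residue * residues)

# Even the ten-link macaroni toy is combinatorially busy:
print(conformation_count(10))            # 3**20, about 3.5 billion

# A small 56-residue protein, like the ones in the PNAS paper:
print(f"{conformation_count(56):.3e}")   # astronomically larger
```

And that's before you add in the side chains sticking off each piece, which is the point I was making with the macaroni.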
There's a new (open access) paper in PNAS that illustrates some of the difficulties. The authors have been studying man-made proteins that have substantially similar sequences of amino acids, but still have different folding and overall shape. In this latest work, they've made it up to two proteins (56 amino acids each) that have 95% sequence identity, but still have very different folds. It's just a few key residues that make the difference and kick the overall protein into a different energetic and structural landscape. The other regions of the proteins can be mutated pretty substantially without affecting their overall folding, on the other hand. (In the picture, the red residues are the key ones and the blue areas are the identical/can-be-mutated domains).
This ties in with an overall theme of biology - it's nonlinear as can be. The systems in it are huge and hugely complicated, but the importance of the various parts varies enormously. There are small key chokepoints in many physiological systems that can't be messed with, just as there are some amino acids that can't be touched in a given protein. (Dramatic examples include the many single-amino-acid based genetic disorders).
But perhaps the way to look at it is that the complexity is actually an attempt to overcome this nonlinearity. Otherwise the system would be too brittle to work. All those overlapping, compensating, inter-regulating feedback loops that you find in biochemistry are, I think, a largely successful attempt to run a robust organism out of what are fundamentally not very robust components. Evolution is a tinkerer, most definitely, and there sure is an awful lot of tinkering that's been needed.
Category: General Scientific News | In Silico
September 24, 2008
Over the years on this blog, I’ve written quite a few times about Ariad Pharmaceuticals and their quest to assert some rather sweeping patent rights. For background, see here and search for "Ariad" - there's a lot to read, if you're in the mood. The short version is that the company is the licensee of a patent which was issued with extremely broad claims around the NF-kappaB pathway in cells. Dozens and dozens of claims – the thing just drones on and on about compounds, methods, techniques that affect, inhibit, modulate, fill-in-your-verb anything that regulates, changes, modulates, etc. anything to do with NF-kappaB.
My problem with that is that claiming such broad swaths of biochemical mechanism is counterproductive. It’s bad for drug research, bad for patent law, and bad for the enterprise of science in general. For example, the company had no compounds to actually enable a lot of these claims when the patent was issued. A lot of other people did, though, because that pathway is tied up with all sorts of cellular processes, especially those dealing with inflammation and immune response. So Ariad immediately went after other companies with profitable drugs whose mechanism of action went, at least partly, through their newfound patent rights. I find it perverse that a company, rather than patenting its drug, can patent the idea of how a yet-to-be-found drug might work – or, retroactively, having had no role in the process at all, claim the rights to other drugs that had already been developed and marketed by someone else.
Of course, all this ended up in litigation, which has gone on for years now. There are all sorts of issues – you have the separate court cases with Lilly and Amgen, for one, and then there’s the question of whether Ariad’s patent is valid at all. I’ve chronicled some of the twists and turns – Lilly, for example, lost the first round in court (to my, and no doubt their, disbelief).
But the latest news is much more to my liking. Here’s the background: Amgen struck first in 2006, fearing a lawsuit by Ariad over the use of Enbrel – hey, it goes through NF-kappaB, so it’s fair game, right? Amgen asked for a declaration that all 203 claims of the Ariad patent were invalid. Ariad wanted that dismissed, naturally. But in September of 2006, the court turned them down, saying that Amgen did indeed have grounds to sue, since internal Ariad presentation documents specifically mentioned targeting Enbrel (and another Amgen product, Kineret) as part of their business strategy. (The court also took a moment to point out that had these documents not turned up, Ariad would have gotten its desired dismissal right there).
Ariad followed up by saying that they’d done no work related to whether the Amgen drugs infringed its patent, but they were going to do so now, by gosh, and in April 2007 they added a counterclaim that Amgen had indeed infringed 22 of the claims. (They later revised that down to nine). By January of this year, they’d dropped the Kineret part of the case and cut the list of claims down to seven.
But the court found, in a summary judgment, that Amgen had indeed not infringed the Ariad patent. The use of Enbrel, it ruled, falls outside the scope of Ariad’s claims – mainly because all of Ariad’s claims relate to reducing NF-kappaB activity inside the cell, and Enbrel acts on TNF-alpha exclusively outside the cell and never enters cells at all. Ariad has no case for infringement.
But there was another ruling, which I found quite interesting, and want to go into in detail. During this litigation, Amgen had proposed a broad covenant with Ariad not to sue them, and Ariad responded that sure, they’d sign that – but only covering Enbrel and Kineret. They reserved the right to sue at some future date about something else, you see.
Amgen rejected this idea, but Ariad went ahead and publicly declared that they’d abide unilaterally by their proposal – and then they turned around and asked the court to butt out of the original Amgen motion to invalidate all 203 claims of their patent, on the grounds that their covenant deprived the court of jurisdiction to consider the request. After all, they said, the only issues here were Enbrel and Kineret, and they’d promised not to sue Amgen over those, anyway! (Now you see why Ariad would go to the trouble of entering into a covenant with, basically, themselves).
Amgen didn’t go for that at all, saying that they were trying, once and for all, to settle the issue of whether Ariad could sue them on any ground related to the original patent – neglecting, as far as I can tell, to append the phrase delenda est Carthago to their filing. They disparaged Ariad’s maneuver as a last-ditch attempt to avoid arguing about invalidity and unenforceability, and said that they had no interest in leaving Ariad’s patent issues open and being sued later on at Ariad’s convenience. I’m paraphrasing here from the court documents, but not by very much, I have to tell you. There’s a distinctly irritated tone to most of the filings in this case.
The court went for Ariad on part of this, saying that Amgen’s potential pipeline of drugs and Ariad’s possible lawsuits didn’t amount to a real controversy – not compared to the other two products, which after all had been specifically mentioned in Ariad’s internal documents. Amgen’s attempts to go on with its invalidity claims were no longer on the table. But, as the latest document goes on to say, “The court reaches the opposite conclusion with regard to Amgen’s declaratory judgment claim of unenforceability”. The court held that it did indeed have jurisdiction to hear that part of the case.
Amgen’s line of argument was inequitable conduct – that when Ariad’s patent was filed, the parties involved had not met the “candor and good faith” requirement to disclose all known information related to patentability. They claimed this both for the initial filing and for the PTO’s re-examination of the patent, but it’s the latter that was at issue in this latest ruling. Ariad had filed declarations by two expert witnesses, Inder Verma and Thomas Kadesch, during that process. Amgen claims that Verma’s statement is misleading, and that Ariad didn’t point out that he’d published articles that appear to contradict his own statements. And as for Kadesch, he was deposed by Amgen in the course of another trial that they were involved in (vs. Roche), and they claim that he recanted testimony that he’d given for Ariad in the Lilly trial, which was used by Ariad in their dealings with the PTO. (Amgen apparently had to pry Kadesch’s reversal documents out of Roche with a subpoena).
Ariad did finally get around to submitting all these details to the PTO, but only during the course of 2007 and 2008, after the Amgen legal wrangling was underway. And Amgen claims that they dumped most of the really hot stuff in with a pile of other things, so as not to call attention to any of it.
As it turns out, the court found that Ariad had behaved properly with respect to the Kadesch documents – but the Verma stuff was another matter. In his statement for Ariad, Verma said several times that the actions of glucocorticoids through NF-kappaB were poorly understood, not known, etc. But his own articles concluded that glucocorticoids repressed NF-kappaB-mediated transcription, making those statements hard to reconcile.
If this was indeed evidence of inequitable conduct, the case turned on the criteria worked out in a previous case (Rohm and Haas): whether Ariad had voluntarily submitted this later evidence to the PTO. The court found that there was no evidence of that – Ariad had only disclosed these documents under threat of Amgen’s litigation – and Amgen’s motion for (partial) summary judgment on inequitable conduct was thus granted.
So not only does Ariad have a ruling that interprets its claims in a way it doesn't like (and lets Amgen off the hook, besides), it also has one that raises significant concerns about inequitable conduct, and calls the entire enforceability of its patent into question. Not a good day for them - but a good day for common sense.
Category: Patents and IP
September 23, 2008
Over the years, when some puzzling feature of a drug candidate’s binding to a target came up, I’ve often said “Well, we’re not going to know what’s happening until some lunatic builds a femtosecond X-ray laser”. Various lunatics are now pitching in to build some. I’m going to have to revise my lines.
The reason I’d say such a mouthful is that we already, of course, get a lot of structural information from X-ray beams. Shining them through crystals of various substances can, after a good deal of number-crunching in the background, give you a three-dimensional picture of how the unit molecules have packed together. Proteins can be crystallized, too, although it can be something of a black art, and they can be either co-crystallized or soaked with our small molecules, giving us a picture of how they’re actually binding.
There are, as mentioned earlier around here, plenty of ways for this process to go wrong. For starters, a lot of things – many of them especially interesting – just don’t crystallize. And the crystals themselves may or may not be showing you a structure that’s relevant to the question you’re trying to answer – that’s particularly true in the case of those ligand-bound protein structures. And the whole process is only good for static pictures of things that aren’t moving around. It used to take many days to collect enough data for a good crystal structure. That moved down to hours as X-ray sources got brighter and detectors got better, and now X-ray synchrotrons will blast away at your crystals and give you enough reflections inside of twenty minutes. And that’s great, but molecules move around a trillion times faster than that, so we’re necessarily seeing an average of where they hang out the most.
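A quick unit check on that "trillion times faster" figure, using round illustrative numbers (twenty minutes of collection against a 10^12 speed ratio):

```python
# Illustrative timescale arithmetic: ~20 minutes of synchrotron data
# collection vs. molecular motion "a trillion times faster". All numbers
# are round assumptions for the sake of the comparison.

collection_s = 20 * 60          # ~20 minutes of data collection, in seconds
speedup = 1e12                  # "a trillion times faster"

motion_s = collection_s / speedup
print(f"{motion_s * 1e9:.1f} ns")   # 1.2 ns -- the regime of side-chain
                                    # and loop motions

# A femtosecond pulse (1e-15 s), for comparison, is shorter than a single
# bond vibration (~1e-14 s period), which is what makes stop-action
# molecular snapshots even conceivable.
```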
Enter the femtosecond X-ray laser. A laser will put out the cleanest X-ray beam that anyone’s ever seen, a completely coherent one at an exact (and short) wavelength which should give wonderful reflection data. The only ways we know how to do that are on large scale, too, so it’s going to be a relatively bright source as well. The data should come so quickly, in fact, that several things which are now impossible are within reach: X-ray structures of single molecules, for one. X-rays of things that aren’t in a crystalline state at all, for another. And femtosecond-scale sequential X-ray structures – in effect, well-resolved high-speed movies of molecular motions.
Now that will be something to see. Getting all that to work is going to be quite a job, not least because X-ray bursts of this sort will probably destroy the sample that they're analyzing. But there are two free-electron X-ray lasers under construction – one set to complete next year at Stanford’s SLAC facility and a larger one that will be built in Hamburg. “Large” is the word here. The smaller SLAC instrument is already two kilometers long. According to an article in Nature, though, a Japanese group has proposed some ways to make future instruments smaller and more efficient – all the way down to, um, the size of a couple of football fields. But there’s another completely different technology coming along (laser-plasma wakefield instruments) that could produce far shorter X-rays in one hundredth the space, which is more like it.
I don’t think we’re going to see a benchtop-sized X-ray laser any time soon, especially since these things are going to need to be large just to get up to the brightness that will be needed. But I’m very interested to see what even the first generation machine at Stanford will be able to do. There are a lot of mysteries in the way that molecules move and interact, and we may finally be about to get a look at some of them.
Category: Analytical Chemistry
September 22, 2008
Science is taking a look at the 1991 members of Yale’s Molecular Biology and Biophysics PhD program. The ostensible focus of the article is to see what the effect of flat federal research funding has been on young potential faculty members, but there’s a lot more to pick up on than that.
The first thing to note is that out of 26 PhDs from that year’s class, only one of them currently has a tenured position in academia. Five others are doing science in some sort of academic setting, but only one of those is tenure-track. And you can tell that for at least a few observers, the response to those numbers is “What went wrong?”
Well, nothing did. As it turned out, the students didn’t necessarily come out of the program on a mission to go out and get tenure. But there was no particular way to blame the research funding environment for the numbers, because almost no one that Science interviewed mentioned that as a factor at all. Instead, many of them decided that there might be something more (or at least something else) to life than going from being a grad student and post-doc directly to. . .supervising more grad students and post-docs:
For some MB&Bers, academia was never really an option. "Even as an undergraduate in college, I never bought into the concept of being a professor," says Deborah Kinch, associate director for regulatory affairs at Biogen Idec in Cambridge. "Being a grad student is the last bastion of indentured servitude, and being a faculty member is pretty much the same thing, at least until you get tenure. Earning the same low salary and fighting for every grant--that was the last thing I wanted to do. . .
. . . Midway through their graduate training, a few MB&Bers hatched the idea of a seminar series to hear from former graduates working outside the academic fold. (Athena) Nagi said the group wrestled with the definition of an alternative career and decided that the answer was, in essence, "anything that didn't involve teaching at a major research university”. . .what (Tammy) Spain remembers most were their reasons for branching out. "They all said they didn't want to go into academia. None of them said, 'I failed.' None had even tried to find an academic job. It was the first time I got the sense that there was no shame in not going into academia."
That heightened sense of empowerment reinforced what some class members were already feeling. "At first, you think that academia makes sense," says Nagi. "But by your 3rd or 4th year, you start to get the lay of the land and look at the options. You realize that a postdoc isn't just for 1 year and that there are multiple postdocs."
I particularly like the way that a third-year graduate student had never realized until then that there was no shame in not going into academia. This is a major problem in academic science – the amount of this attitude varies from department to department, but there’s always some of it floating around. It’s no wonder that some of these people were baffled by the prospect of what they were going to do with their lives, because a large, important range of choices was being minimized or ignored.
But I have no room to talk – by that point in my graduate career, I wasn’t clear about what I was going to do, either. I was getting pretty sure, though, that going off and fighting for tenure at a major university was not in the running. I’d seen what the younger faculty put up with in my department, and it didn’t look much better than the life I was leading as a grad student. In many ways, actually, it was worse. Why would I want to do that?
As it turns out, a good number of the 1991 Yale people ended up at various small biotech companies. Some of them have made a success of it, and naturally enough, some of them are out of science altogether. But the rarest, least likely thing for them to do was to get tenure – or even to try. When I think back on the folks I went to grad school with in the mid-1980s, the picture is very similar. You just wish that there were a way to make this sorting-out process less painful. . .
Category: Academia (vs. Industry) | Graduate School
September 19, 2008
A colleague mentioned to me the other day that Sunesis Pharmaceuticals let many of its remaining research staff go during the summer – they’re battening down to try to get their main clinical candidate through for leukemia and ovarian cancer. That’s a common phase of life for a small company trying to go it alone. Clinical trials are expensive, and so are scientists, and sometimes a company finds that it can’t afford both at the same time. Amylin, to pick one example, went through so many cycles of that (starting in the mid-1990s) that I completely lost count.
The Sunesis news struck me, though, because if you go back a few years in the literature, they’re all over the place. The company was aggressively investigating (and promoting) a technique called “tethering” as a platform for drug discovery. Back around 2003, they were all over the journals with it.
Tethering was one of those neat ideas which seems to have been a lot of work to reduce to practice. It’s a variation, in its way, of another one of those techniques called Dynamic Combinatorial Chemistry. In DCC, you take a good-sized collection of compounds which can form reversible bonds with each other. Thiols (R-SH) have been used a lot, since they can form disulfides (R-SS-R), which can easily come apart and re-form with other thiols. In the presence of some target or template, such as the binding site of a protein, the idea is that any disulfide combination that manages to bind well will get enhanced in the final mixture, since it spends more time out of the swim of potential reactants. Comparing the product distribution with and without the target protein can point you to a potential lead structure to optimize. (You can also turn it around and make synthetic receptors (PDF) for molecules that you're interested in).
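As a toy illustration of that enrichment effect (entirely my own sketch, with made-up library members and binding energies, not anything from the actual papers): at equilibrium, each disulfide's population gets re-weighted by a Boltzmann factor of its binding free energy to the template.

```python
import math

# Toy model of template-driven amplification in DCC. Each possible
# disulfide is assigned a hypothetical free energy of binding to the
# protein template (kcal/mol, made-up numbers); at equilibrium, the
# templated mixture re-weights each member by exp(-dG/RT) relative to
# the untemplated pool.

RT = 0.593  # kcal/mol at roughly 298 K

# Hypothetical disulfide library: member -> binding dG to the template
library = {"A-A": 0.0, "A-B": -1.0, "B-B": -3.5, "B-C": -0.5, "C-C": 0.0}

def populations(dG, templated=True):
    """Normalized mole fractions; without the template, all members are equal."""
    weights = {k: math.exp(-g / RT) if templated else 1.0
               for k, g in dG.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

before = populations(library, templated=False)
after = populations(library)
for k in library:
    print(f"{k}: {before[k]:.2f} -> {after[k]:.2f}")
# The tight binder (B-B) comes to dominate the templated pool -- and that
# shift in the product distribution is the readout you compare against
# the protein-free control.
```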
The idea behind tethering was, at least in one of its main variations, to introduce an extra thiol group into a target protein somewhere close to its active site. This mutant protein would then be screened against a library of small molecules with thiol groups of their own, the idea being that if there was a binding site near that protein thiol, it would be found by preferential disulfide formation between it and some member of the screening library. Then came the second step. Normal, unmutated protein would be exposed to a mix of that preferred thiol and a library of other potential thiol coupling partners, in an attempt to find another preferred extension into the binding cavity. So this was basically a way to do DCC, but giving it a leg up by trying to make sure that there was a good amount of at least one thing that could bind to some relevant part of the target.
That tells you that standard from-the-ground-up DCC must have some difficulties, since if it worked as well as its concept promises, you wouldn’t need to put your thumb on the scales like that. But I was never sure how well tethering worked, either. The company published numerous examples of it, but I don’t know if any of these compounds ever got anywhere (and indeed, I’m not at all sure that their current clinical candidate was discovered by this technique).
There are several places where things could break down. Making a mutant protein introduces some uncertainty, for starters. That SH group might not change things, or it might change them just enough so that the binding site you find doesn’t quite exist when you switch to the wild type. And any binding site you find in the first round isn’t necessarily a productive one – the original protein SH group was targeted to try to dangle out over the right part of the protein, but there are no guarantees about that. Past that, even if you get through the second round and find some new disulfide hits (no sure thing), they are, well. . .they’re disulfides. And those are poor bets for drugs.
That’s where the real weak point of DCC is in general, to my mind. Using reversible reactions gives you compounds with too much potential to fall apart, so the first thing you have to do is replace those bonds with something sturdier – and that’s not always easy, or even possible. There are very, very few clean substitutions available in the chemical world. Nothing’s quite like a nitrile except a nitrile, and there’s only one thing shaped exactly like a t-butyl group: another t-butyl. Likewise, the only thing that’s guaranteed to look and act like a disulfide is a disulfide. A two or three carbon chain replacement is the logical place to start, but that might be synthetically tricky, or (even more often) might turn out to be a completely different sort of compound once you’ve made it.
In the end, I think tethering turned out to be an excellent means to get some very interesting papers published in some good journals. (The publications have continued to this day). But beyond that, I’m not so sure. I’d be glad to hear from any ex-Sunesis people with other views. . .
Category: Cancer | Drug Industry History
September 17, 2008
I did carbohydrate chemistry for my PhD - well, I used carbohydrates as starting materials to make other molecules, but I did my share of pure carbohydrate stuff along the way. And although that was over twenty years ago, the stuff I did is still considered by most people to be a sort of esoteric thing, an odd specialty that not many people have experience with. Time has clearly not mainstreamed sugar chemistry.
It's not like people don't use the things, often for just the reasons that I used to (as versatile chiral starting materials). But the reputation of the compounds lingers. I think it's because of all the odd little reactions that sugars do. There's a certain amount of knowledge that has to be learned - all that stuff with the anomeric center, for starters, and all the name reactions that only occur in sugars, like the Ferrier rearrangement.
Then there are the protecting groups. With all those hydroxys hanging around, a lot of them are going to have to be tied up for extended periods while your work gets done. But every hydroxy group on a sugar ring has a slightly different personality - they acylate and deacylate in a particular order, for one thing, which varies from one sugar system to another. And there are the acetals and ketals to tie up two hydroxyls at once - very useful, but there are a lot of different combinations that can form under different conditions and with different carbonyl reactants.
The closest analog to the field that I can think of is steroid chemistry. In its day, that was a hugely popular and important field, with all sorts of ins and outs - tricky transformations that you learned from the old hands. But these days, hardly anyone cares - pure steroid chemistry is a backwater, and many of the esoteric reactions are largely forgotten. Sugar chemistry has escaped that fate - it's still relevant - but hasn't escaped the atmosphere of an eccentric club.
My own sugar knowledge, while still sound, is not exactly up to date. I know that the field has moved on over the years, but I've had only sporadic need to keep up, since carbohydrates don't appear in many drug structures. I've been able to work in some of them once in a while, but I've never worked on a project where my sugar experience has been front and center.
Category: Life in the Drug Labs
While the US has the world's most expensive prescription drugs, we have the world's cheapest generics: once that patent goes away, it goes away. But the generic drug business is still very profitable, and it's viciously competitive. One of the biggest players is India's Ranbaxy, now in the process of being acquired by Japan's Daiichi Sankyo. They compete hard at every step of the process, from fighting patent cases in order to make drugs go generic more quickly, right down to price and distribution to pharmacies.
But it looks like they've been pushing it a bit too hard. The FDA has banned the import of thirty Ranbaxy-made drug substances after uncovering what they say are bad practices at three of the company's plants in India. And this comes on top of another investigation, an even more serious one, looking into whether the company out-and-out falsified data during the drug approval process.
The company seems to be co-operating with the first investigation, but they're fighting back hard on the second one - which makes sense, because that's the one that can really get them in trouble. Ranbaxy, for its part, seems to have suggested that some big-pharma rivals are behind the accusation. I doubt that myself, although it's not impossible - but neither is it impossible that the charges have something behind them. US companies have found themselves in big trouble over such issues, too.
Overall, what Ranbaxy and the other Indian drugmakers have to fear is ending up in the same public opinion category as the Chinese companies, who have had one quality scandal after another. It's going to be a long time before they lose their bad reputation, and the Indian firms definitely don't need to throw away what they've built up. Look for Ranbaxy to try to clear its name as fast and as publicly as possible.
Category: Business and Markets
September 16, 2008
I’ve neglected to note the death of Neil Bartlett, famous for showing that the noble gases would in fact form chemical bonds. This work was a real triumph, since the great majority of scientific opinion at the time was that such compounds were impossible. Bartlett, though, formed a rather startling compound while working on the platinum fluorides, which he realized was actually a salt of dioxygen. The idea that oxygen would be oxidized to a cation in an isolable salt was weird enough at the time, and Bartlett realized that if this could happen, then the same system should be able to oxidize xenon.
And so it did. It’s difficult to convey how much nerve it takes to do experiments like this. I don’t mean the dangers of working with such reactive fluorine compounds, although that’s certainly not to be ignored. (Bartlett spent much of his career working in this area, and only a skilled experimentalist could do that and remain in one piece). No, it’s actually very hard to get out there on the edge of what’s known and do things as crazy as making salts of oxygen and fluorides of noble gases. Consider that if you’d lined up a hundred high-ranking chemists to vet these experiments beforehand, most of them would have pursed their lips and said “Are you sure that you’re not just wasting your time on this stuff?” It takes nerve, and not everyone has it – but Bartlett did, and he had the brains and the skills to go along with it. You need all three.
There’s a good appreciation of him in Nature, which points out – to my mind, absolutely correctly – that he should have won the Nobel Prize for this work. In fact, I thought he had for a long time, and only a few years ago realized that I had that wrong. (I may have been reinforced in my opinion by a statement in Primo Levi’s The Periodic Table). I think that if you polled chemists as a group, you’d find that a majority would be under the same impression – and if that’s not a sign of the highest-level work, having everyone surprised that you never got a Nobel, then I don’t know what is.
Category: Inorganic Chemistry | Who Discovers and Why
September 15, 2008
I need some cheering up this morning – one of my favorite writers, David Foster Wallace, has died most unexpectedly. Perhaps, in looking back over his best work, it wasn’t as unexpected as all that, but you still never see these things coming.
So I’m glad to report, by contrast, that Dr. Matthias Rath has some problems of his own. Rath, some of you may recall, is one of those people who usually has “controversial” somewhere in front of his name in news articles. I’ve never thought of him that way myself: he’s always seemed just a particularly brazen and heartless con artist. He’s made large sums of money by telling HIV-infected patients that antiretroviral drugs are killing them, and that they should instead cure themselves with vitamin supplements purchased from, yes, Dr. Rath. His rants about the pharmaceutical industry are contemptible – Rath claims, naturally, that we’re a gang of evil poisoners, which is at least a field that he knows something about. He’s one of those people that you’re ashamed to share DNA homology with.
To be scrupulously fair, Rath appears to have distributed his supplements for free to the poorest patients in places like South Africa, which has surely brought down his average profit-per-suffering-death. But he’s been happy to tell wealthier customers in the US and Europe that he can not only cure HIV infection, but various cancers and other fatal ailments, with no convincing data of any kind to back up such claims.
Ben Goldacre, the estimable Bad Science columnist for the Guardian newspaper, ran a column in early 2007 on Rath and his work in South Africa, and followed that up with two more containing disparaging references. Not caring for this sort of publicity, the Dr. Rath Foundation sued for libel. (Goldacre is no stranger to threats of legal action, it seems). I am happy to report that the suit has now been dropped, and that Rath has been ordered to pay legal costs, which are gratifyingly extensive.
It now seems that the Dr. Rath Foundation is moving on to the profitable Russian market – with plenty of bad health and plenty of money sloshing around, it would seem a natural feeding ground for a creature of his type. I hope that the Guardian is able to collect its money in short order, and that Ben Goldacre gets a cut.
Category: Snake Oil
September 12, 2008
I haven’t mentioned the attempt by Bristol-Myers Squibb to buy out Imclone until now, but there’s a nice writeup in the WSJ. The reasons for the move are unsurprising – BMS would like all the revenue from Erbitux, instead of just a share of it, and sees some value coming up in Imclone’s pipeline (such as their development drug candidate IMC-11F8, vide infra). They’ve waited quite a while, and apparently feel that the time is right – the only question is how much money such a move will cost them.
And that’s the question, all right, since Carl Icahn started talking this week about a mysterious preliminary offer from some unnamed other company for significantly more money ($70/share) than BMS is putting up. A lot of investors seem to have expected a sigh, a roll of the eyes, and a reach back into the pocket for more money - IMCL has been trading above the original $60/share offer. But that’s not what they’re getting, at least so far.
In a letter, Bristol-Myers Squibb’s CEO is now reminding Icahn of a few things that you’d think would be obvious. One of them is that their offer is well-supported and requires no due diligence, as opposed to nebulous preliminary figures from companies that no one will name. The next paragraph is even more to the point:
As you know, Bristol-Myers holds the exclusive, long-term marketing rights in the United States to ERBITUX® and related compounds, including IMC-11F8. Bristol-Myers has no intention of agreeing to any modifications to these rights. ImClone also should understand that our offer is for the entire company, and any potential restructuring of the company could severely jeopardize ImClone’s value and deprive ImClone’s stockholders of the benefits of our offer.
That’s about the size of it, and I think that this message is being delivered in the way that Icahn understands best – right across the top of the head, with some good wrist action. There’s no reason for BMS to give up on their rights to Imclone’s products, except on terms that would make other potential buyers lose interest. Why would they? There is, I should add, quite a dispute between the two companies about who has the rights to that development antibody, IMC-11F8. Imclone has recently been acting as if BMS has no rights to it at all, but as that WSJ link makes clear, two years ago they clearly stated to Merck KGaA that the antibody falls within the scope of the BMS agreement. It's hard for me to see how they'll get out of that, and even if they do, it'll take a lot of expensive wrangling.
So, if there really is a company willing to go to $70 a share for Imclone, with revenue still flowing to BMS and plenty of legal uncertainty on top of that, well, this is the time for them to speak up. I’m not sure that there is one, despite what Icahn says, but perhaps he’s hoping for one to materialize. He’s always reckoned Imclone to be worth vast amounts more than people who know anything about oncology think it is, so maybe he sees no problem with those figures. Anyone else live in the same world?
Update: Icahn has already replied, in a fashion that makes this affair look to go on a while. He says that he "doesn't understand the point" of the BMS letter, and goes on to say:
. . .With respect to a potential restructuring of ImClone, rest assured that we will act in what we consider the best interests of all our shareholders and not just Bristol.
Obviously, should you wish to make another offer which you believe we would not find inadequate, you are free to do so. Upon receipt of that offer, we will respond appropriately.
Well! My guess is at this point that BMS will sit tight and wait to see if anyone really wants to get in on all this action - betting, reasonably I think, that no one will. I would enjoy it if they raised their bid to, say, $60.25, just to steam up Icahn's windows, but I assume that they're above that. As time goes on, with no competing bids in sight, I would think that Icahn and his board-of-buddies would have to submit the BMS bid to the shareholders - wouldn't they?
Category: Business and Markets | Cancer
September 11, 2008
There’s an interesting editorial in Nature Biotechnology on a role-playing exercise that took place recently in London. The UK government (in the form of the Bioscience Futures Forum) asked a University of London simulations group to work out what would happen to two identical companies in England and in the US. These would be university spin-offs with promising oncology compounds that had already shown oral activity in tumor models. (Here's the site for the whole effort - I have to say, it looks like an awful lot of work for a two-day simulation).
What happened? Well, things diverged. The US version of the simulated company was able to raise more money, had better access to collaborations with larger companies, and better chances of going public by the end of the simulation. That gave them a broader platform to deal with setbacks in the original compound program. Meanwhile, the UK company faced this:
. . . the biotech finance marketplace in the United Kingdom is weak. AIM has little liquidity and virtually no follow-on market. Preemption rights allow existing shareholders to block potentially diluting but opportunistic fundraising rounds, such as private investments in public equity. And there is little access to debt capital for biotech firms.
The game also suggests that UK management and investors have mindsets adapted to constrained financial circumstances. They design businesses to fit the financial environment rather than seeking the environment that their business needs. They discount early valuations because of the inflexible later-stage financial circumstances. Their low expectations become self-fulfilling prophecies. In contrast, US management looks to build a sustainable business from the outset, and investors get higher returns as a consequence.
What I found interesting about the editorial, though, wasn’t these conclusions per se – after all, as the piece goes on to say, they aren’t really a surprise. (That makes you wonder even more about the time and money that went into this, but that's another issue). No, the surprise was the recommendation at the end: while the government agency that ran this study is suggesting tax changes, entrepreneur training, various investment initiatives, and so on, the Nature Biotechnology writers ask whether it might not be simpler just to send promising UK ideas to America. Do the science in Great Britain, they say, and spin off your discovery in the US, where they know how to fund these things. You'll benefit patients faster, for sure.
They’re probably right about that, although it’s not something that the UK government is going to endorse. (After all, that means that the resulting jobs will be created in the US, too). But that illustrates something I’ve said here before, about how far ahead the VC and start-up infrastructure is here in America. There’s no other place in the world that does a better job of funding wild ideas and giving them a chance to succeed in the market. The startup culture here is a vital part of the economy and a great benefit for the world, and we should make sure to keep it as healthy as we can.
Category: Business and Markets | Who Discovers and Why
September 10, 2008
The rumor seems to be going around that Pfizer might be making a bid for Bayer (aka Bayer/Schering). That sounds ridiculous to me, and if Pfizer actually does such a thing, then its management is even more starved for ideas than its nastiest critics could believe.
Why all the negativity? Well, Bayer doesn’t seem to be much of a fit, for one thing. The company’s Nexavar (sorafenib) oncology drug competes directly head-to-head with Pfizer’s Sutent (sunitinib), and a good chunk of that revenue goes to Onyx, anyway. (Which reminds me – I keep seeing mentions of that drug being an Onyx discovery which was picked up by Bayer, which isn’t right. That one was made at Bayer – why Onyx has a piece of it has to do with the biology, not the drug discovery). The market for kidney cancer would be completely tied up by a Pfizer/Bayer deal, which makes you wonder if the resulting behemoth would be required to divest one of the drugs.
Pfizer does like to pick up big-selling compounds by buying the whole company behind them, but Bayer/Schering doesn’t have anything in the Lipitor / Celebrex class right now. (Remember Celebrex?) They might have one coming, though, with their Factor Xa inhibitor, rivaroxaban: it’s expected to do very well in the extremely lucrative clotting market, but it’s not there yet. And besides, some of that one is already tied up with J&J, at least in the U.S.
Then there’s the general objection: I’d argue that Pfizer is in the shape it’s in because they’ve pursued the big, big, acquisition strategy. Their own labs have been unproductive, and they unfortunately seem to spray down the research organizations they purchase with whatever’s in the air supply at the home base. OK, that’s probably unfair – but no one can deny that as a whole, Pfizer’s internal drug discovery efforts have been remarkably frustrating for many years now. And they’ve got a massive cost structure, what with all the various facilities they’ve accumulated over the years, which is what’s led to things like their mass exodus from Michigan.
More of that sort of thing is what I expect from Pfizer, not some big acquisition. (And I suppose that it should be mentioned that it’s now a widely held belief that more layoffs are coming there this fall, anyway). But if they buy something, it won’t be pretty. What they need is revenue to replace Lipitor in a few years, not people or research facilities. And that’s another reason that a Bayer purchase makes no sense – have you looked into how hard it is to lay people off or close a site in Germany? Years, it takes years, and buckets of money – just what Pfizer doesn’t need to take on.
So if you need an excuse to dump Pfizer’s stock (and why, exactly, would you be holding Pfizer stock?) a purchase of Bayer would be the perfect signal that they’ve lost their minds in Groton. I don’t think they have, though. Not completely. Not quite yet.
Category: Business and Markets
September 9, 2008
As I’ve noted here, and many others have elsewhere, we have very little idea how many important central nervous system drugs actually work. Antidepressants, antipsychotics, antiseizure medications for epilepsy – the real workings of these drugs are quite obscure. The standard explanation for this state of things is that the human brain is extremely complicated and difficult to study, and that’s absolutely right.
But there’s an interesting paper on antipsychotics that’s just come out from a group at Duke, suggesting that there’s an important common mechanism that has been missed up until now. One thing that everyone can agree on is that dopamine receptors are important in this area. Which ones, and how they should be affected (agonist, antagonist, inverse partial what-have-you) – now that’s a subject for argument, but I don’t think you’ll find anyone who says that the dopaminergic system isn’t a big factor. Helping to keep the argument going is the fact that the existing drugs have a rather wide spectrum of activity against the main dopamine receptors.
But for some years now, the D2 subtype has been considered first among equals in this area. Binding affinity to D2 correlates as well as anything does to clinical efficacy, but when you look closer, the various drugs have different profiles as inverse agonists and antagonists of the receptor. What this latest study shows, though, is that a completely different signaling pathway – other than the classic GPCR signaling one – might well be involved. A protein called beta-arrestin has long been known to be important in receptor trafficking – movement of the receptor protein to and from the cell surface. A few years ago, it was shown that beta-arrestin isn’t just some sort of cellular tugboat in these systems, but can participate in another signaling pathway entirely.
Dopamine receptors were already complicated when I worked on them, but they’ve gotten a lot hairier since then. The beta-arrestin work makes things even trickier: who would have thought that these GPCRs, with all of their well-established and subtle signaling modes, also participated in a totally different signaling network at the same time? It’s like finding out that all your hammers can also drive screws, using some gizmo hidden in their handles that you didn’t even know was there.
When this latest team looked at the various clinical antipsychotics, what they found was that no matter what their profile in the traditional D2 signaling assays, they all are very good at disrupting the D2/beta-arrestin pathway. Since some of the downstream targets in that pathway (a protein called Akt and a kinase, GSK-3) have already been associated with schizophrenia, this may well be a big factor behind antipsychotic efficacy, and one that no one in the drug discovery business has paid much attention to. As soon as someone gets this formatted for a high-throughput assay, though, that will change – and it could lead to entirely new compound classes in this area.
Of course, there’s still a lot that we don’t know. What, for example, does beta-arrestin signaling actually do in schizophrenia? Akt and GSK-3 are powerful signaling players, involved in all sorts of pathways. Untangling their roles, or the roles of other yet-unknown beta-arrestin driven processes, will keep the biologists busy for a good long while. And the existing antipsychotics hit quite a few other receptors as well – what’s the role of the beta-arrestin system in those interactions? The brain will keep us busy for years to come, and so will its signaling receptors.
Category: Biological News | The Central Nervous System
September 8, 2008
Since I was just banging on the table (or the lab bench) the other day about how many diseases aren’t single-factor, and about how many diseases (like cancer) aren’t even single diseases, I thought this would be a good time to haul out some evidence for that. The data are here thanks to some recent papers by groups who are sequencing various tumor lines, looking for common mutations as new drug targets. (The Cancer Genome Atlas, an NIH project, is behind a lot of work in this area).
But what’s become clear, if it wasn’t already, is that various cancer lines have a startlingly wide array of mutations. Recent work from Bert Vogelstein’s group at Johns Hopkins (with a host of collaborators) and from the CGA itself now shows an average of 63 mutations in pancreatic cancer cells and 47 in glioblastomas, two of the nastiest tumors around. The first impulse might be to think “Great! Plenty of drug targets to go around!”
But hold on. For one thing, even though these mutations are surely not all equal, the fact that there are so many makes you wonder about whether attacking any one of them alone can make much of a difference. And different patients can have varying suites of those mutations, so it’s difficult to imagine that going after just one or two of those targets will be enough to treat a majority of cases. This work follows up on earlier studies in other tumor lines, all of which seem to point in the same direction: patients who are currently classed as having the same type of cancer really don’t.
This won’t come as a surprise to most oncologists, who have seen for themselves the widely varying responses to current therapies. The challenge is to figure out what these various changes mean, and how to classify patients to give them the best therapy. It’s not going to be easy. Just doing the math on the possible interactions of several dozen mutations with a list of possible treatment regimes is enough to make you pause. The hope is that most patients will fall into broad categories, which will line up, more or less, with broad categories of treatment. But it’s not going to be a good fit, most likely, and even getting those approximations to work is taking a lot of time and effort. (Just think back about how long you’ve been hearing about the wonderful new age of personalized medicine. . .)
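To put a rough number on that "doing the math" point, here's a back-of-the-envelope sketch. The mutation count comes from the paragraph above; the regimen count is purely illustrative:

```python
from math import comb

# Back-of-the-envelope only: with an average of 63 mutated genes per
# pancreatic tumor (the Vogelstein/CGA figure above), count how many
# two-way and three-way mutation combinations a classification scheme
# would have to consider. The regimen count is a made-up example.
n_mutations = 63
n_regimens = 10  # hypothetical list of treatment regimes

pairs = comb(n_mutations, 2)    # 1953 possible two-mutation combinations
triples = comb(n_mutations, 3)  # 39711 possible three-mutation combinations

print(pairs, triples, pairs * n_regimens)
```

Even stopping at pairs, that's nearly two thousand mutation combinations to weigh against every candidate regimen - which is exactly why "broad categories" are the only realistic hope.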
We're not going to be able to do this, either, without a second (and much harder) stage of research: figuring out why these various mutations are important. Some of them seem to make reasonable sense, but it's not at all clear what a lot of them are doing, especially in concert with each other. There's an awful lot of ditch-digging work out there waiting to be done. For now, the quotes from Vogelstein in a Nature News summary can’t be improved on, though. This is the current state of the art, and it’s up to us to improve on it:
"It is apparent from studies like ours that it is going to be even more difficult than expected to derive real cures. . . It is extremely unlikely that drugs that target a single gene, such as Gleevec, will be active against a major fraction of solid tumours”
Category: Cancer | Drug Development
September 5, 2008
Today’s ration of scientific confusion comes courtesy of Wired, in an article that talks about using a modified form of TMV (tobacco mosaic virus) for delivering silencing RNAs. A group at Maryland has used the virus to deliver various siRNAs to cell lines in vitro, which is an interesting idea. But then it gets the Wired treatment:
The short, double-stranded RNA molecules known as siRNA can program cells to destroy disease-causing proteins. Their molecules turn on a cell's own built-in disease-fighting mechanisms. They can be programmed for a wide range of ailments -- from cancers to viruses -- and because they use the cell's own defense mechanisms, they produce minimal side effects.
In addition to treating cancers and genetic disorders, siRNA could prove useful against a variety of rare diseases that have, and always will be, overlooked by big pharmaceutical companies -- the long tail of disease.
People suffering from similar, exotic maladies could band together and recruit a small team of scientists, as if they were the Seven Samurai, to champion their cause and quickly design a cure.
Let’s unravel some of that yarn. What siRNA does, actually, is cause proteins not to be produced, rather than “program cells” to destroy them. The effect lasts for as long as the siRNA is present, so I wouldn’t use the analogy to programming. And it’s true that siRNAs can “turn on a cell’s own built-in disease-fighting mechanisms”, but that’s mostly considered an undesired off-target side effect, which people are still trying to get a handle on. You don’t want to set off immune responses to your RNA therapies, believe me.
And in the next sentence, we get to hear more about programming. But what’s glossed over is that we don’t know how to “program” siRNAs for a wide range of ailments yet, because in most cases we don’t know what causes a wide range of ailments to start with. If you don’t know what protein you want to knock down, you’re not going to get very far with siRNA. And what about the diseases that aren’t caused by single proteins (which is most of them)? Putting cancer in a list like that is a sure sign that the author is either exaggerating or doesn’t understand what’s going on, because cancer is not a disease. It’s several thousand diseases, each of which may need to be addressed differently if we’re going to use the word “cure”.
The next paragraph works in the “long tail” concept, another hook for the intended audience. But look, for example, at something like Gaucher’s disease, which you’d think was pretty far down that tail. Genzyme is doing tremendous business there, because they actually have something – basically the only thing - that helps. For many of these obscure conditions, it’s not so much that we in the drug industry don’t do anything, it’s that we don’t know what to do. And if we’re going to work on something that we’re not sure we can treat, which is the usual situation, we’d rather take our chances on something more potentially lucrative.
And that last line, with the Kurosawa reference, is just great. Programming, long-tail, classic foreign movies – this piece must have gone through the editorial process at Wired in about ten minutes. I’ll bet my readers in the drug industry are wondering how they can get together in small teams, whip out their samurai swords, and quickly design cures – admit it, you are, aren’t you? Well, the next paragraph of the piece quotes Stephen Hyde of Oxford:
“The speed with which you develop siRNA drugs is truly amazing,” said Stephen Hyde. “In the past, a traditional small molecule drug might take several years of intensive research effort by a large team of scientists to develop. Today, with siRNA technology, it is possible for a single researcher to develop a drug candidate in a few weeks.”
It’s hard to know which end of that statement to untangle first. If you know exactly which protein you want to target for a disease, then yes, you can then know what sort of siRNA sequence you want to try to knock it down. But is that a drug, as the first line suggests? Nowhere near. Sad to say, you still have those years and years of clinical testing for safety and efficacy to go through.
Now, where Prof. Hyde’s statement makes some sense is in the preclinical world. It does take longer for a team of chemists and biologists to come up with a small-molecule drug candidate, and that’s where the promises of siRNA (and antisense DNA) come in. If you’re targeting the expression of a particular protein (a big if, as I’ve said), then you immediately have a relatively short list of sequences to try, as opposed to the wide-open world of small molecule screening. Chemistry really is only one way to get to a drug candidate, and just because it’s been the way for most drugs until now doesn’t mean it always will be.
But it’s not going to go away, either. Small molecules can do things that changes in protein expression can’t – we can make agonists and antagonists of receptors, for one thing, and we can make inhibitors with varying selectivities across related targets. And there will always be diseases – the majority of diseases – where several things will have to be affected at the same time for any kind of cure to be realized. We’re going to need all the modes of attack we can get.
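To make the "relatively short list of sequences" point concrete, here's a toy sketch - emphatically not a real siRNA design tool (real design rules involve thermodynamic asymmetry, seed-region matches, off-target filtering, and more). It just slides a window along a target mRNA and keeps sites with moderate GC content; the sequence and thresholds are made up for illustration:

```python
# Toy illustration of why siRNA candidate lists are short: enumerate
# 19-nt windows along a target mRNA and keep the ones with moderate
# GC content (a crude stand-in for real design criteria).
def candidate_sites(mrna, length=19, gc_min=0.35, gc_max=0.60):
    sites = []
    for i in range(len(mrna) - length + 1):
        window = mrna[i:i + length]
        gc = (window.count("G") + window.count("C")) / length
        if gc_min <= gc <= gc_max:
            sites.append((i, window))
    return sites

# Hypothetical mRNA fragment, invented for the example:
example_mrna = "AUGGCUUACGGAUCCGAUUACGCUAGGCUAAUCGGAUCC"
print(len(candidate_sites(example_mrna)))
```

Even for a full-length transcript, a filter like this leaves you with hundreds of candidates at most - a tractable list, versus screening a million-compound small-molecule library.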
The rest of the Wired article, to its credit, does mention the single biggest problem with siRNAs: their delivery in vivo. And if you get down to the last few sentences, you can find out that the TMV delivery system has not yet been shown to work in a living animal, could cause immune responses even when it does, and has (as yet) no way to target its delivery to a specific cell population. It is, in other words, an ingenious idea – one of many – that has a long way to go before it sees a sick patient. And we have a long way to go before we have seven-scientist samurai teams cranking out cures in a few weeks. Perhaps we’ll live long enough to see it.
Category: Drug Development | Press Coverage
September 4, 2008
X-ray crystallography is wonderful stuff – I think you’ll get chemists to generally agree on that. There’s no other technique that can provide such certainty about the structure of a compound – and for medicinal chemists, it has the invaluable ability to show you a snapshot of your drug candidate bound to its protein target. Of course, not all proteins can be crystallized, and not all of them can be crystallized with drug ligands in them. But an X-ray structure is usually considered the last word, when you can get one – and thanks to automation, computing power, and brighter X-ray sources, we get more of them than ever.
But there are a surprising number of ways that X-ray data can mislead you. For an excellent treatment of these, complete with plenty of references to the recent literature, see a paper coming out in Drug Discovery Today from researchers at AstraZeneca (Andy Davis and Stephen St.-Gallay) and Uppsala University (Gerard Kleywegt). These folks all know their computational and structural biology, and, just as valuably, they’re willing to tell you how much they don’t know.
For starters, a small (but significant) number of protein structures derived from X-ray data are just plain wrong. Medicinal chemists should always look first at the resolution of an X-ray structure, since the tighter the data, the better the chance there is of things being as they seem. The authors make the important point that there’s some subjective judgment involved on the part of a crystallographer interpreting raw electron-density maps, and the poorer the resolution, the more judgment calls there are to be made:
Nevertheless, most chemists who undertake structure-based design treat a protein crystal structure reverently as if it was determined at very high resolution, regardless of the resolution at which the structure was actually determined (admittedly, crystallographers themselves are not immune to this practice either). Also, the fact that the crystallographer is bound to have made certain assumptions, to have had certain biases and perhaps even to have made mistakes is usually ignored. Assumptions, biases, ambiguities and mistakes may manifest themselves (even in high-resolution structures) at the level of individual atoms, of residues (e.g. sidechain conformations) and beyond.
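The first-pass check the authors recommend can be reduced to a triage step before anyone starts drawing binding-mode conclusions. Here’s a minimal sketch; the resolution cutoffs are common rules of thumb, not numbers from the paper, and the example structures are hypothetical.

```python
def triage_by_resolution(structures):
    """Bucket crystal structures by resolution in angstroms.
    Cutoffs (2.0 and 2.8 A) are rough rules of thumb: the coarser
    the resolution, the more the model reflects the crystallographer's
    judgment calls rather than the electron density itself."""
    buckets = {"high (<2.0 A)": [], "medium (2.0-2.8 A)": [], "low (>2.8 A)": []}
    for name, resolution in structures:
        if resolution < 2.0:
            buckets["high (<2.0 A)"].append(name)
        elif resolution <= 2.8:
            buckets["medium (2.0-2.8 A)"].append(name)
        else:
            buckets["low (>2.8 A)"].append(name)
    return buckets

# Hypothetical entries: (structure label, reported resolution in A)
example = [("target + lead compound", 1.8), ("apo form", 2.5), ("homolog", 3.1)]
print(triage_by_resolution(example))
```

None of this replaces looking at the actual density maps, of course – it just flags which structures deserve the most skepticism before being used for design.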
Then there’s the problem of interpreting how your drug candidate interacts with the protein. The ability to get an X-ray structure doesn’t always correlate well with the binding potency of a given compound, so it’s not like you can necessarily count on a lot of clear signals about why the compound is binding. Hydrogen bonds may be perfectly obvious, or they can be rather hard to interpret. Binding through (or through displacement of) water molecules is extremely important, too, and that can be hard to get a handle on as well.
And not least, there’s the assumption that your structure is going to do you good once you’ve got it nailed down:
It is usually tacitly assumed that the conditions under which the complex was crystallised are relevant, that the observed protein conformation is relevant for interaction with the ligand (i.e. no flexibility in the active-site residues) and that the structure actually contributes insights that will lead to the design of better compounds. While these assumptions seem perfectly reasonable at first sight, they are not all necessarily true. . .
That’s a key point, because that’s the sort of error that can really lead you into trouble. After all, everything looks good, and you can start to think that you really understand the system – right up until none of your wonderful X-ray-based analogs work out the way you thought they would. The authors make the point that when your X-ray data and your structure-activity data seem to diverge, it’s often a sign that you don’t understand some key points about the thermodynamics of binding. (An X-ray is a static picture, and says nothing about what energetic tradeoffs were made along the way). Instead of being treated as an irritating disconnect or distraction, it should be looked at as a chance to find out what’s really going on. . .
Category: Analytical Chemistry | Drug Assays | In Silico
September 3, 2008
It’s a truism that half of all advertising dollars are wasted, but that no one buying the ads can be sure which half it is. Advertising from the drug companies is ubiquitous: how much of that is doing them no good?
A recent study suggests that the widely reviled direct-to-consumer (DTC) campaigns may be in that category. A paper in the British Medical Journal looks at the cross-border effect of US-based advertising on English-speaking and French-speaking Canadians, on the reasonable assumption that the former group is more likely to pay attention. They picked products that had been on the market for at least a year before the ad campaigns started, and looked at the number of prescriptions among both groups once the ads started running. What they found was no effect on the prescriptions for Schering-Plough’s Nasonex (mometasone) and Wyeth and Amgen’s Enbrel (etanercept), both of which were heavily advertised. Novartis’s Zelnorm (tegaserod, now off the market) did show a 40% rise, which gradually went back down again.
A reasonable theory to explain these results starts by looking at the respective markets. In the case of the first two drugs, a number of different therapies were already available. But Zelnorm was pretty much the only thing available in Canada for irritable bowel syndrome. It could be that DTC ads are useful in letting patients know that there’s finally something for a disease that previously had few options, but less effective in pushing into a crowded area. There’s also the multiple-step barrier problem – seeing a general practitioner and then a specialist, and so on – which can mitigate the effect of advertising, depending on the drug.
But the people who would know these effects best aren’t talking: the marketing departments of the drug companies themselves. As I’ve pointed out here before, the whole purpose of advertising is to make money: if you don’t increase sales enough to more than cover the cost of the ads, you’re clearly wasting your time. And that’s why I don’t have a lot of patience with outraged comparisons of pharma R&D budgets to marketing budgets, because the latter are there to bring in even more money for the former.
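That break-even logic is just arithmetic, and a back-of-the-envelope version makes it plain. Every number below is hypothetical – the spend, the script count, the revenue per prescription, and the margin are all invented for illustration.

```python
def campaign_payoff(ad_spend, extra_scripts, revenue_per_script, margin=0.8):
    """Back-of-the-envelope: does the incremental gross profit from
    extra prescriptions cover the cost of the ad campaign?
    All inputs are hypothetical; margin is an assumed gross margin."""
    incremental_profit = extra_scripts * revenue_per_script * margin
    return incremental_profit - ad_spend

# Hypothetical: a $50M campaign that generates 400,000 extra scripts
# at $150 each. The result is negative: the campaign loses money.
print(campaign_payoff(50e6, 400_000, 150.0))
```

The same campaign against a $10M spend would come out well ahead – which is exactly why the marketing departments, who see the real prescription numbers, are the only ones who can settle the question.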
If, though, some of these marketing campaigns really are wasted money, then clearly that spending needs to be redirected. And that’s what makes me wonder. No one keeps a closer eye on prescription trends than the companies that sell the drugs, and they’re in the best position to see if a given ad campaign is doing anything or not. Even allowing for the usual human quota of inertia and incompetence, it would seem that DTC campaigns must be doing something for the companies involved, at least in many cases, or they wouldn’t exist at all. It’s also worth keeping in mind that what they may be doing is not so much boosting the number of prescriptions written as keeping them from falling. In the case of the drugs in the BMJ study, you have to wonder if the normal trend would have been for the number of scripts to have declined, while the ad campaigns held them steady.
That can be hard to prove, of course, and no doubt there are some marketing strategies that have far outlived their usefulness on just that kind of reasoning. But overall, I have trouble believing that DTC campaigns are useless across the board. Some of the marketing folks are weasels, but they’re not dumb ones. (It's also important to remember that DTC ads are only 5 to 10% of the total amount spent on drug promotion, according to the figures I've seen). In the end, I can agree with this statement from the paper:
Until we better understand how direct to consumer advertising modifies prescribing for particular drugs, debates about its positive and negative consequences will continue to be based on conjecture rather than strong evidence.
Category: Business and Markets | Drug Prices